API reference¶
Data structures¶
- class apscheduler.Task(*, id, func, executor, max_running_jobs=None, misfire_grace_time=None, state=None)¶
Represents a callable and its surrounding configuration parameters.
- Variables:
id (str) – the unique identifier of this task
func (Callable) – the callable that is called when this task is run
max_running_jobs (int | None) – maximum number of instances of this task that are allowed to run concurrently
misfire_grace_time (timedelta | None) – maximum number of seconds the run time of jobs created for this task are allowed to be late, compared to the scheduled run time
- class apscheduler.Schedule(*, id, task_id, trigger, args=(), kwargs=(), coalesce=CoalescePolicy.latest, misfire_grace_time=None, max_jitter=None, tags=(), next_fire_time=None, last_fire_time=None, acquired_by=None, acquired_until=None)¶
Represents a schedule on which a task will be run.
- Variables:
id (str) – the unique identifier of this schedule
task_id (str) – unique identifier of the task to be run on this schedule
args (tuple) – positional arguments to pass to the task callable
kwargs (dict[str, Any]) – keyword arguments to pass to the task callable
coalesce (CoalescePolicy) – determines what to do when processing the schedule if multiple fire times have become due for this schedule since the last processing
misfire_grace_time (timedelta | None) – maximum number of seconds the scheduled job’s actual run time is allowed to be late, compared to the scheduled run time
max_jitter (timedelta | None) – maximum number of seconds to randomly add to the scheduled time for each job created from this schedule
tags (frozenset[str]) – strings that can be used to categorize and filter the schedule and its derivative jobs
conflict_policy (ConflictPolicy) – determines what to do if a schedule with the same ID already exists in the data store
next_fire_time (datetime) – the next time the task will be run
last_fire_time (datetime | None) – the last time the task was scheduled to run
acquired_by (str | None) – ID of the scheduler that has acquired this schedule for processing
acquired_until (str | None) – the time after which other schedulers are free to acquire the schedule for processing even if it is still marked as acquired
- class apscheduler.Job(*, id=_Nothing.NOTHING, task_id, args=(), kwargs=(), schedule_id=None, scheduled_fire_time=None, jitter=_Nothing.NOTHING, start_deadline=None, result_expiration_time=datetime.timedelta(0), tags=(), created_at=_Nothing.NOTHING, started_at=None, acquired_by=None, acquired_until=None)¶
Represents a queued request to run a task.
- Variables:
id (UUID) – autogenerated unique identifier of the job
task_id (str) – unique identifier of the task to be run
args (tuple) – positional arguments to pass to the task callable
kwargs (dict[str, Any]) – keyword arguments to pass to the task callable
schedule_id (str) – unique identifier of the associated schedule (if the job was derived from a schedule)
scheduled_fire_time (datetime | None) – the time the job was scheduled to run at (if the job was derived from a schedule; includes jitter)
jitter (timedelta | None) – the time that was randomly added to the calculated scheduled run time (if the job was derived from a schedule)
start_deadline (datetime | None) – if the job is started in the worker after this time, it is considered to be misfired and will be aborted
result_expiration_time (timedelta) – minimum amount of time to keep the result available for fetching in the data store
tags (frozenset[str]) – strings that can be used to categorize and filter the job
created_at (datetime) – the time at which the job was created
started_at (datetime | None) – the time at which the execution of the job was started
acquired_by (str | None) – the unique identifier of the worker that has acquired the job for execution
acquired_until (str | None) – the time after which other workers are free to acquire the job for processing even if it is still marked as acquired
- class apscheduler.JobInfo(*, job_id, task_id, schedule_id, scheduled_fire_time, jitter, start_deadline, tags)¶
Contains information about the currently running job.
This information is available, in the thread or task where a job is currently being run, from the current_job context variable.
- Variables:
job_id (UUID) – the unique identifier of the job
task_id (str) – the unique identifier of the task that is being run
schedule_id (str | None) – the unique identifier of the schedule that the job was derived from (if any)
scheduled_fire_time (datetime | None) – the time the job was scheduled to run at (if the job was derived from a schedule; includes jitter)
jitter (timedelta) – the time that was randomly added to the calculated scheduled run time (if the job was derived from a schedule)
start_deadline (datetime | None) – if the job is started in the worker after this time, it is considered to be misfired and will be aborted
tags (frozenset[str]) – strings that can be used to categorize and filter the job
- class apscheduler.JobResult(*, job_id, outcome, finished_at=_Nothing.NOTHING, expires_at, exception=None, return_value=None)¶
Represents the result of running a job.
- Variables:
job_id (UUID) – the unique identifier of the job
outcome (JobOutcome) – indicates how the job ended
finished_at (datetime) – the time when the job ended
exception (BaseException | None) – the exception object if the job ended due to an exception being raised
return_value – the return value from the task function (if the job ran to completion successfully)
- class apscheduler.RetrySettings(*, stop=<tenacity.stop.stop_after_delay object>, wait=<tenacity.wait.wait_exponential object>)¶
Settings for retrying an operation with Tenacity.
- Parameters:
stop (stop_base) – defines when to stop trying
wait (wait_base) – defines how long to wait between attempts
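For example, the retry behavior of a data store could be customized like this; a minimal sketch, assuming tenacity's stop_after_attempt and wait_fixed in place of the defaults, and a placeholder database URL:

```python
from tenacity import stop_after_attempt, wait_fixed

from apscheduler import RetrySettings
from apscheduler.datastores.sqlalchemy import SQLAlchemyDataStore

# Give up after 5 attempts, waiting 2 seconds between retries
retry_settings = RetrySettings(stop=stop_after_attempt(5), wait=wait_fixed(2))

# Placeholder URL; retry_settings is forwarded to the data store initializer
data_store = SQLAlchemyDataStore.from_url(
    "postgresql+asyncpg://user:pass@localhost/testdb",
    retry_settings=retry_settings,
)
```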
Schedulers¶
- class apscheduler.schedulers.sync.Scheduler(data_store=None, event_broker=None, *, identity=None, role=SchedulerRole.both, job_executors=None, default_job_executor=None, logger=None)¶
A synchronous scheduler implementation.
- start_in_background()¶
Launch the scheduler in a new thread.
This method registers atexit hooks to shut down the scheduler and wait for the thread to finish.
- Raises:
RuntimeError – if the scheduler is not in the stopped state
- Return type:
- subscribe(callback, event_types=None, *, one_shot=False)¶
Subscribe to events.
To unsubscribe, call the Subscription.unsubscribe() method on the returned object.
- Parameters:
callback – callable to be called with the event object when an event is published
event_types – an iterable of concrete Event classes to subscribe to
one_shot – if True, automatically unsubscribe after the first matching event
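A minimal sketch of running the synchronous scheduler in a background thread and subscribing to its events; it assumes the default data store and event broker are used when no arguments are given, and the callback name is illustrative:

```python
from apscheduler import Event
from apscheduler.schedulers.sync import Scheduler

def on_event(event: Event) -> None:
    # Called for every published event, since no event_types filter is given
    print("received:", event)

scheduler = Scheduler()
subscription = scheduler.subscribe(on_event)
scheduler.start_in_background()  # registers atexit hooks that shut the scheduler down
# ... application code runs here; call subscription.unsubscribe() to stop receiving events
```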
- class apscheduler.schedulers.async_.AsyncScheduler(*, data_store=_Nothing.NOTHING, event_broker=_Nothing.NOTHING, identity=None, role=SchedulerRole.both, max_concurrent_jobs=100, job_executors=None, default_job_executor=None, logger=<Logger apscheduler.schedulers.async_ (WARNING)>)¶
An asynchronous (AnyIO based) scheduler implementation.
- Parameters:
data_store – the data store for tasks, schedules and jobs
event_broker – the event broker to use for publishing and subscribing to events
max_concurrent_jobs – Maximum number of jobs the worker will run at once
role – specifies what the scheduler should be doing when running
process_schedules – True to process due schedules in this scheduler
- async add_job(func_or_task_id, *, args=None, kwargs=None, job_executor=None, tags=None, result_expiration_time=0)¶
Add a job to the data store.
- Parameters:
func_or_task_id (str | Callable) – either a callable or an ID of an existing task definition
args (Iterable | None) – positional arguments to call the target callable with
kwargs (Mapping[str, Any] | None) – keyword arguments to call the target callable with
job_executor (str | None) – name of the job executor to run the task with
tags (Iterable[str] | None) – strings that can be used to categorize and filter the job
result_expiration_time (timedelta | float) – the minimum time (as seconds, or timedelta) to keep the result of the job available for fetching (the result won’t be saved at all if that time is 0)
- Return type:
UUID
- Returns:
the ID of the newly created job
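A sketch of adding a job and fetching its result afterwards; it assumes an already started AsyncScheduler is passed in, uses a non-zero result_expiration_time so the result is actually stored, and the function names are illustrative:

```python
from datetime import timedelta

from apscheduler.schedulers.async_ import AsyncScheduler

async def generate_report() -> str:
    return "report ready"

async def add_and_fetch(scheduler: AsyncScheduler) -> None:
    # `scheduler` is assumed to already be started
    job_id = await scheduler.add_job(
        generate_report,
        result_expiration_time=timedelta(minutes=10),  # keep the result around for fetching
    )
    result = await scheduler.get_job_result(job_id)  # waits for the job to finish by default
    print(result.outcome, result.return_value)
```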
- async add_schedule(func_or_task_id, trigger, *, id=None, args=None, kwargs=None, job_executor=None, coalesce=CoalescePolicy.latest, misfire_grace_time=None, max_jitter=None, tags=None, conflict_policy=ConflictPolicy.do_nothing)¶
Schedule a task to be run one or more times in the future.
- Parameters:
func_or_task_id (str | Callable) – either a callable or an ID of an existing task definition
trigger (Trigger) – determines the times when the task should be run
id (str | None) – an explicit identifier for the schedule (if omitted, a random, UUID based ID will be assigned)
args (Iterable | None) – positional arguments to be passed to the task function
kwargs (Mapping[str, Any] | None) – keyword arguments to be passed to the task function
job_executor (str | None) – name of the job executor to run the task with
coalesce (CoalescePolicy) – determines what to do when processing the schedule if multiple fire times have become due for this schedule since the last processing
misfire_grace_time (float | timedelta | None) – maximum number of seconds the scheduled job’s actual run time is allowed to be late, compared to the scheduled run time
max_jitter (float | timedelta | None) – maximum number of seconds to randomly add to the scheduled time for each job created from this schedule
tags (Iterable[str] | None) – strings that can be used to categorize and filter the schedule and its derivative jobs
conflict_policy (ConflictPolicy) – determines what to do if a schedule with the same ID already exists in the data store
- Return type:
- Returns:
the ID of the newly added schedule
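A sketch of scheduling a coroutine function with an interval trigger; it assumes AsyncScheduler can be used as an async context manager that starts and stops it, which is not shown in this reference:

```python
import asyncio

from apscheduler.schedulers.async_ import AsyncScheduler
from apscheduler.triggers.interval import IntervalTrigger

async def tick() -> None:
    print("tick")

async def main() -> None:
    # Assumption: entering the context manager starts the scheduler's data store and event broker
    async with AsyncScheduler() as scheduler:
        schedule_id = await scheduler.add_schedule(tick, IntervalTrigger(seconds=10))
        print("added schedule", schedule_id)
        await scheduler.run_until_stopped()

asyncio.run(main())
```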
- async get_job_result(job_id, *, wait=True)¶
Retrieve the result of a job.
- Parameters:
- Raises:
JobLookupError – if wait=False and the job result does not exist in the data store
- Return type:
- async get_next_event(event_types)¶
Wait until the next event matching one of the given types arrives.
- Parameters:
event_types – an event class or an iterable of event classes to subscribe to
- async get_schedule(id)¶
Retrieve a schedule from the data store.
- Parameters:
id (str) – the unique identifier of the schedule
- Raises:
ScheduleLookupError – if the schedule could not be found
- Return type:
- async get_schedules()¶
Retrieve all schedules from the data store.
- Returns:
a list of schedules, in an unspecified order
- async remove_schedule(id)¶
Remove the given schedule from the data store.
- async run_job(func_or_task_id, *, args=None, kwargs=None, job_executor=None, tags=())¶
Convenience method to add a job and then return its result.
If the job raised an exception, that exception will be reraised here.
- Parameters:
func_or_task_id (str | Callable) – either a callable or an ID of an existing task definition
args (Iterable | None) – positional arguments to be passed to the task function
kwargs (Mapping[str, Any] | None) – keyword arguments to be passed to the task function
job_executor (str | None) – name of the job executor to run the task with
tags (Iterable[str] | None) – strings that can be used to categorize and filter the job
- Return type:
Any
- Returns:
the return value of the task function
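A sketch of the run_job() convenience method, again assuming an already started AsyncScheduler is passed in; the function names are illustrative:

```python
from apscheduler.schedulers.async_ import AsyncScheduler

def add_numbers(x: int, y: int) -> int:
    return x + y

async def compute(scheduler: AsyncScheduler) -> int:
    # Adds the job, waits for it to finish, and returns its return value (3 here)
    return await scheduler.run_job(add_numbers, args=(1, 2))
```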
- async run_until_stopped(*, task_status=<anyio._core._tasks._IgnoredTaskStatus object>)¶
Run the scheduler until explicitly stopped.
- Return type:
- async stop()¶
Signal the scheduler that it should stop processing schedules.
This method does not wait for the scheduler to actually stop. For that, see wait_until_stopped().
- Return type:
- subscribe(callback, event_types=None, *, one_shot=False, is_async=True)¶
Subscribe to events.
To unsubscribe, call the Subscription.unsubscribe() method on the returned object.
- Parameters:
callback – callable to be called with the event object when an event is published
event_types – an event class or an iterable of event classes to subscribe to
one_shot – if True, automatically unsubscribe after the first matching event
is_async – True if the (synchronous) callback should be called on the event loop thread, False if it should be called in a worker thread. If callback is a coroutine function, this flag is ignored.
Workers¶
Data stores¶
- class apscheduler.abc.DataStore¶
Asynchronous data store interface. Expected to work on asyncio.
- abstract async acquire_jobs(worker_id, limit=None)¶
Acquire unclaimed jobs for execution.
This method claims up to the requested number of jobs for the given worker and returns them.
- abstract async acquire_schedules(scheduler_id, limit)¶
Acquire unclaimed due schedules for processing.
This method claims up to the requested number of schedules for the given scheduler and returns them.
- Parameters:
scheduler_id – unique identifier of the scheduler
limit – maximum number of schedules to claim
- Returns:
the list of claimed schedules
- abstract async add_job(job)¶
Add a job to be executed by an eligible worker.
- abstract async add_schedule(schedule, conflict_policy)¶
Add or update the given schedule in the data store.
- Parameters:
schedule (Schedule) – schedule to be added
conflict_policy (ConflictPolicy) – policy that determines what to do if there is an existing schedule with the same ID
- Return type:
- abstract async add_task(task)¶
Add the given task to the store.
If a task with the same ID already exists, it replaces the old one but does NOT affect task accounting (# of running jobs).
- abstract async get_job_result(job_id)¶
Retrieve the result of a job.
The result is removed from the store after retrieval.
- Parameters:
job_id (UUID) – the identifier of the job
- Return type:
JobResult | None
- Returns:
the result, or None if the result was not found
- abstract async get_jobs(ids=None)¶
Get the list of pending jobs.
- abstract async get_next_schedule_run_time()¶
Return the earliest upcoming run time of all the schedules in the store, or None if there are no active schedules.
- Return type:
datetime | None
- abstract async get_schedules(ids=None)¶
Get schedules from the data store.
- Parameters:
ids – a specific set of schedule IDs to return, or None to return all schedules
- Returns:
the list of matching schedules, in unspecified order
- abstract async get_task(task_id)¶
Get an existing task definition.
- Parameters:
task_id (str) – ID of the task to be returned
- Return type:
- Returns:
the matching task
- Raises:
TaskLookupError – if no matching task was found
- abstract async get_tasks()¶
Get all the tasks in this store.
- Returns:
a list of tasks, sorted by ID
- abstract async release_job(worker_id, task_id, result)¶
Release the claim on the given job and record the result.
- abstract async release_schedules(scheduler_id, schedules)¶
Release the claims on the given schedules and update them on the store.
- Parameters:
scheduler_id – unique identifier of the scheduler
schedules – the previously claimed schedules
- abstract async remove_schedules(ids)¶
Remove schedules from the data store.
- abstract async remove_task(task_id)¶
Remove the task with the given ID.
- Parameters:
task_id (str) – ID of the task to be removed
- Raises:
TaskLookupError – if no matching task was found
- Return type:
- abstract async start(exit_stack, event_broker)¶
Start the data store.
- Parameters:
exit_stack (AsyncExitStack) – an asynchronous exit stack which will be processed when the scheduler is shut down
event_broker (EventBroker) – the event broker shared between the scheduler, worker (if any) and this data store
- Return type:
- class apscheduler.datastores.memory.MemoryDataStore(tasks=_Nothing.NOTHING, schedules=_Nothing.NOTHING, schedules_by_id=_Nothing.NOTHING, schedules_by_task_id=_Nothing.NOTHING, jobs=_Nothing.NOTHING, jobs_by_id=_Nothing.NOTHING, jobs_by_task_id=_Nothing.NOTHING, job_results=_Nothing.NOTHING, *, lock_expiration_delay=30)¶
Stores scheduler data in memory, without serializing it.
Can be shared between multiple schedulers within the same event loop.
- async acquire_jobs(worker_id, limit=None)¶
Acquire unclaimed jobs for execution.
This method claims up to the requested number of jobs for the given worker and returns them.
- async acquire_schedules(scheduler_id, limit)¶
Acquire unclaimed due schedules for processing.
This method claims up to the requested number of schedules for the given scheduler and returns them.
- Parameters:
scheduler_id – unique identifier of the scheduler
limit – maximum number of schedules to claim
- Returns:
the list of claimed schedules
- async add_job(job)¶
Add a job to be executed by an eligible worker.
- async add_schedule(schedule, conflict_policy)¶
Add or update the given schedule in the data store.
- Parameters:
schedule (Schedule) – schedule to be added
conflict_policy (ConflictPolicy) – policy that determines what to do if there is an existing schedule with the same ID
- Return type:
- async add_task(task)¶
Add the given task to the store.
If a task with the same ID already exists, it replaces the old one but does NOT affect task accounting (# of running jobs).
- async get_job_result(job_id)¶
Retrieve the result of a job.
The result is removed from the store after retrieval.
- Parameters:
job_id (UUID) – the identifier of the job
- Return type:
JobResult | None
- Returns:
the result, or None if the result was not found
- async get_jobs(ids=None)¶
Get the list of pending jobs.
- async get_next_schedule_run_time()¶
Return the earliest upcoming run time of all the schedules in the store, or None if there are no active schedules.
- Return type:
datetime | None
- async get_schedules(ids=None)¶
Get schedules from the data store.
- Parameters:
ids – a specific set of schedule IDs to return, or None to return all schedules
- Returns:
the list of matching schedules, in unspecified order
- async get_task(task_id)¶
Get an existing task definition.
- Parameters:
task_id (str) – ID of the task to be returned
- Return type:
- Returns:
the matching task
- Raises:
TaskLookupError – if no matching task was found
- async get_tasks()¶
Get all the tasks in this store.
- Returns:
a list of tasks, sorted by ID
- async release_job(worker_id, task_id, result)¶
Release the claim on the given job and record the result.
- async release_schedules(scheduler_id, schedules)¶
Release the claims on the given schedules and update them on the store.
- Parameters:
scheduler_id – unique identifier of the scheduler
schedules – the previously claimed schedules
- async remove_schedules(ids)¶
Remove schedules from the data store.
- async remove_task(task_id)¶
Remove the task with the given ID.
- Parameters:
task_id (str) – ID of the task to be removed
- Raises:
TaskLookupError – if no matching task was found
- Return type:
- class apscheduler.datastores.sqlalchemy.SQLAlchemyDataStore(engine, schema=None, max_poll_time=1, max_idle_time=60, *, retry_settings=RetrySettings(stop=<tenacity.stop.stop_after_delay object>, wait=<tenacity.wait.wait_exponential object>), lock_expiration_delay=30, serializer=_Nothing.NOTHING, start_from_scratch=False)¶
Uses a relational database to store data.
When started, this data store creates the appropriate tables on the given database if they’re not already present.
Operations are retried (in accordance with retry_settings) when an operation raises sqlalchemy.OperationalError.
This store has been tested to work with PostgreSQL (asyncpg driver) and MySQL (asyncmy driver).
- Parameters:
engine (Engine | AsyncEngine) – an asynchronous SQLAlchemy engine
schema (str | None) – a database schema name to use, if not the default
- async acquire_jobs(worker_id, limit=None)¶
Acquire unclaimed jobs for execution.
This method claims up to the requested number of jobs for the given worker and returns them.
- async acquire_schedules(scheduler_id, limit)¶
Acquire unclaimed due schedules for processing.
This method claims up to the requested number of schedules for the given scheduler and returns them.
- Parameters:
scheduler_id – unique identifier of the scheduler
limit – maximum number of schedules to claim
- Returns:
the list of claimed schedules
- async add_job(job)¶
Add a job to be executed by an eligible worker.
- async add_schedule(schedule, conflict_policy)¶
Add or update the given schedule in the data store.
- Parameters:
schedule (Schedule) – schedule to be added
conflict_policy (ConflictPolicy) – policy that determines what to do if there is an existing schedule with the same ID
- Return type:
- async add_task(task)¶
Add the given task to the store.
If a task with the same ID already exists, it replaces the old one but does NOT affect task accounting (# of running jobs).
- classmethod from_url(url, **options)¶
Create a new asynchronous SQLAlchemy data store.
- Parameters:
url – an SQLAlchemy URL to pass to create_engine() (must use an async dialect like asyncpg or asyncmy)
kwargs – keyword arguments to pass to the initializer of this class
- Returns:
the newly created data store
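A sketch of creating the data store from a URL and handing it to a scheduler; the connection URL is a placeholder, and an async dialect is required as noted above:

```python
from apscheduler.datastores.sqlalchemy import SQLAlchemyDataStore
from apscheduler.schedulers.async_ import AsyncScheduler

# Placeholder URL; must use an async dialect such as asyncpg or asyncmy
data_store = SQLAlchemyDataStore.from_url(
    "postgresql+asyncpg://user:pass@localhost/apscheduler"
)
scheduler = AsyncScheduler(data_store=data_store)
```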
- async get_job_result(job_id)¶
Retrieve the result of a job.
The result is removed from the store after retrieval.
- Parameters:
job_id (UUID) – the identifier of the job
- Return type:
JobResult | None
- Returns:
the result, or None if the result was not found
- async get_jobs(ids=None)¶
Get the list of pending jobs.
- async get_next_schedule_run_time()¶
Return the earliest upcoming run time of all the schedules in the store, or None if there are no active schedules.
- Return type:
datetime | None
- async get_schedules(ids=None)¶
Get schedules from the data store.
- Parameters:
ids – a specific set of schedule IDs to return, or None to return all schedules
- Returns:
the list of matching schedules, in unspecified order
- async get_task(task_id)¶
Get an existing task definition.
- Parameters:
task_id (str) – ID of the task to be returned
- Return type:
- Returns:
the matching task
- Raises:
TaskLookupError – if no matching task was found
- async get_tasks()¶
Get all the tasks in this store.
- Returns:
a list of tasks, sorted by ID
- async release_job(worker_id, task_id, result)¶
Release the claim on the given job and record the result.
- async release_schedules(scheduler_id, schedules)¶
Release the claims on the given schedules and update them on the store.
- Parameters:
scheduler_id – unique identifier of the scheduler
schedules – the previously claimed schedules
- async remove_schedules(ids)¶
Remove schedules from the data store.
- async remove_task(task_id)¶
Remove the task with the given ID.
- Parameters:
task_id (str) – ID of the task to be removed
- Raises:
TaskLookupError – if no matching task was found
- Return type:
- async start(exit_stack, event_broker)¶
Start the data store.
- Parameters:
exit_stack (AsyncExitStack) – an asynchronous exit stack which will be processed when the scheduler is shut down
event_broker (EventBroker) – the event broker shared between the scheduler, worker (if any) and this data store
- Return type:
- class apscheduler.datastores.mongodb.MongoDBDataStore(client, *, retry_settings=RetrySettings(stop=<tenacity.stop.stop_after_delay object>, wait=<tenacity.wait.wait_exponential object>), lock_expiration_delay=30, serializer=_Nothing.NOTHING, start_from_scratch=False, database='apscheduler')¶
Uses a MongoDB server to store data.
When started, this data store creates the appropriate indexes on the given database if they’re not already present.
Operations are retried (in accordance with retry_settings) when an operation raises pymongo.errors.ConnectionFailure.
- Parameters:
client (MongoClient) – a PyMongo client
database (str) – name of the database to use
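A sketch of constructing the store from a PyMongo client; the connection string and database name are placeholders:

```python
from pymongo import MongoClient

from apscheduler.datastores.mongodb import MongoDBDataStore

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
data_store = MongoDBDataStore(client, database="scheduling")  # placeholder database name
```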
- async acquire_jobs(worker_id, limit=None)¶
Acquire unclaimed jobs for execution.
This method claims up to the requested number of jobs for the given worker and returns them.
- async acquire_schedules(scheduler_id, limit)¶
Acquire unclaimed due schedules for processing.
This method claims up to the requested number of schedules for the given scheduler and returns them.
- Parameters:
scheduler_id – unique identifier of the scheduler
limit – maximum number of schedules to claim
- Returns:
the list of claimed schedules
- async add_job(job)¶
Add a job to be executed by an eligible worker.
- async add_schedule(schedule, conflict_policy)¶
Add or update the given schedule in the data store.
- Parameters:
schedule (Schedule) – schedule to be added
conflict_policy (ConflictPolicy) – policy that determines what to do if there is an existing schedule with the same ID
- Return type:
- async add_task(task)¶
Add the given task to the store.
If a task with the same ID already exists, it replaces the old one but does NOT affect task accounting (# of running jobs).
- async get_job_result(job_id)¶
Retrieve the result of a job.
The result is removed from the store after retrieval.
- Parameters:
job_id (UUID) – the identifier of the job
- Return type:
JobResult | None
- Returns:
the result, or None if the result was not found
- async get_jobs(ids=None)¶
Get the list of pending jobs.
- async get_next_schedule_run_time()¶
Return the earliest upcoming run time of all the schedules in the store, or None if there are no active schedules.
- Return type:
datetime | None
- async get_schedules(ids=None)¶
Get schedules from the data store.
- Parameters:
ids – a specific set of schedule IDs to return, or None to return all schedules
- Returns:
the list of matching schedules, in unspecified order
- async get_task(task_id)¶
Get an existing task definition.
- Parameters:
task_id (str) – ID of the task to be returned
- Return type:
- Returns:
the matching task
- Raises:
TaskLookupError – if no matching task was found
- async get_tasks()¶
Get all the tasks in this store.
- Returns:
a list of tasks, sorted by ID
- async release_job(worker_id, task_id, result)¶
Release the claim on the given job and record the result.
- async release_schedules(scheduler_id, schedules)¶
Release the claims on the given schedules and update them on the store.
- Parameters:
scheduler_id – unique identifier of the scheduler
schedules – the previously claimed schedules
- async remove_schedules(ids)¶
Remove schedules from the data store.
- async remove_task(task_id)¶
Remove the task with the given ID.
- Parameters:
task_id (str) – ID of the task to be removed
- Raises:
TaskLookupError – if no matching task was found
- Return type:
- async start(exit_stack, event_broker)¶
Start the data store.
- Parameters:
exit_stack (AsyncExitStack) – an asynchronous exit stack which will be processed when the scheduler is shut down
event_broker (EventBroker) – the event broker shared between the scheduler, worker (if any) and this data store
- Return type:
Event brokers¶
- class apscheduler.abc.EventBroker¶
Interface for objects that can be used to publish notifications to interested subscribers.
- abstract async publish_local(event)¶
Publish an event, but only to local subscribers.
- Return type:
- abstract async start(exit_stack)¶
Start the event broker.
- Parameters:
exit_stack (AsyncExitStack) – an asynchronous exit stack which will be processed when the scheduler is shut down
- Return type:
- abstract subscribe(callback, event_types=None, *, is_async=True, one_shot=False)¶
Subscribe to events from this event broker.
- Parameters:
callback – callable to be called with the event object when an event is published
event_types – an iterable of concrete Event classes to subscribe to
is_async – True if the (synchronous) callback should be called on the event loop thread, False if it should be called in a worker thread. If the callback is a coroutine function, this flag is ignored.
one_shot – if True, automatically unsubscribe after the first matching event
- class apscheduler.eventbrokers.local.LocalEventBroker¶
Asynchronous, local event broker.
This event broker only broadcasts within the process it runs in, and is therefore not suitable for multi-node or multiprocess use cases.
Does not serialize events.
- class apscheduler.eventbrokers.asyncpg.AsyncpgEventBroker(connection_factory, *, retry_settings=RetrySettings(stop=<tenacity.stop.stop_after_delay object>, wait=<tenacity.wait.wait_exponential object>), serializer=_Nothing.NOTHING, channel='apscheduler', max_idle_time=10)¶
An asynchronous, asyncpg-based event broker that uses a PostgreSQL server to broadcast events using its NOTIFY mechanism.
- Parameters:
- classmethod from_async_sqla_engine(engine, options=None, **kwargs)¶
Create a new asyncpg event broker from an SQLAlchemy engine.
The engine will only be used to create the appropriate options for asyncpg.connect().
- Parameters:
engine (AsyncEngine) – an asynchronous SQLAlchemy engine using asyncpg as the driver
options (Mapping[str, Any] | None) – extra keyword arguments passed to asyncpg.connect() (will override any automatically generated arguments based on the engine)
kwargs (Any) – keyword arguments to pass to the initializer of this class
- Return type:
- Returns:
the newly created event broker
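A sketch of deriving the event broker from an existing asynchronous SQLAlchemy engine; the connection URL is a placeholder:

```python
from sqlalchemy.ext.asyncio import create_async_engine

from apscheduler.eventbrokers.asyncpg import AsyncpgEventBroker

# The engine is only used to derive connection options for asyncpg.connect()
engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/apscheduler")
event_broker = AsyncpgEventBroker.from_async_sqla_engine(engine)
```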
- classmethod from_dsn(dsn, options=None, **kwargs)¶
Create a new asyncpg event broker from a data source name (DSN).
- Parameters:
dsn – data source name, passed as first positional argument to asyncpg.connect()
options – keyword arguments passed to asyncpg.connect()
kwargs – keyword arguments to pass to the initializer of this class
- Returns:
the newly created event broker
- async start(exit_stack)¶
Start the event broker.
- Parameters:
exit_stack (AsyncExitStack) – an asynchronous exit stack which will be processed when the scheduler is shut down
- Return type:
- class apscheduler.eventbrokers.mqtt.MQTTEventBroker(client=_Nothing.NOTHING, *, retry_settings=RetrySettings(stop=<tenacity.stop.stop_after_delay object>, wait=<tenacity.wait.wait_exponential object>), serializer=_Nothing.NOTHING, host='localhost', port=1883, topic='apscheduler', subscribe_qos=0, publish_qos=0)¶
An event broker that uses an MQTT (v3.1 or v5) broker to broadcast events.
Requires the paho-mqtt library to be installed.
- Parameters:
client (Client) – a paho-mqtt client
host (str) – host name or IP address to connect to
port (int) – TCP port number to connect to
topic (str) – topic on which to send the messages
subscribe_qos (int) – MQTT QoS to use for subscribing to messages
publish_qos (int) – MQTT QoS to use for publishing messages
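A sketch of constructing the broker against a local MQTT server; the values shown simply repeat the documented defaults:

```python
from apscheduler.eventbrokers.mqtt import MQTTEventBroker

# Requires the paho-mqtt library; host, port and topic below match the documented defaults
event_broker = MQTTEventBroker(host="localhost", port=1883, topic="apscheduler")
```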
- async start(exit_stack)¶
Start the event broker.
- Parameters:
exit_stack (AsyncExitStack) – an asynchronous exit stack which will be processed when the scheduler is shut down
- Return type:
- class apscheduler.eventbrokers.redis.RedisEventBroker(client, *, retry_settings=RetrySettings(stop=<tenacity.stop.stop_after_delay object>, wait=<tenacity.wait.wait_exponential object>), serializer=_Nothing.NOTHING, channel='apscheduler', stop_check_interval=1)¶
An event broker that uses a Redis server to broadcast events.
Requires the redis library to be installed.
- Parameters:
- classmethod from_url(url, **kwargs)¶
Create a new event broker from a URL.
- Parameters:
url (str) – a Redis URL (redis://...)
kwargs – keyword arguments to pass to the initializer of this class
- Return type:
- Returns:
the newly created event broker
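A sketch of creating the broker from a Redis URL; the URL is a placeholder:

```python
from apscheduler.eventbrokers.redis import RedisEventBroker

# Placeholder URL; requires the redis library to be installed
event_broker = RedisEventBroker.from_url("redis://localhost:6379")
```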
- async start(exit_stack)¶
Start the event broker.
- Parameters:
exit_stack (AsyncExitStack) – an asynchronous exit stack which will be processed when the scheduler is shut down
- Return type:
Serializers¶
- class apscheduler.abc.Serializer¶
Interface for classes that implement (de)serialization.
- abstract deserialize(serialized)¶
Restore a previously serialized object from a bytestring.
- Parameters:
serialized (bytes) – a bytestring previously received from serialize()
- Return type:
- Returns:
a copy of the original object
- abstract serialize(obj)¶
Turn the given object into a bytestring.
- Return type:
- Returns:
a bytestring that can be later restored using deserialize()
- class apscheduler.serializers.cbor.CBORSerializer(*, type_tag=4664, dump_options=_Nothing.NOTHING, load_options=_Nothing.NOTHING)¶
Serializes objects using CBOR (RFC 8949).
Can serialize types not normally CBOR serializable, if they implement __getstate__() and __setstate__().
- Parameters:
type_tag – CBOR tag number for indicating arbitrary serialized object
dump_options – keyword arguments passed to cbor2.dumps()
load_options – keyword arguments passed to cbor2.loads()
- deserialize(serialized)¶
Restore a previously serialized object from a bytestring.
- Parameters:
serialized (bytes) – a bytestring previously received from serialize()
- Returns:
a copy of the original object
- serialize(obj)¶
Turn the given object into a bytestring.
- Return type:
- Returns:
a bytestring that can be later restored using deserialize()
- class apscheduler.serializers.json.JSONSerializer(*, magic_key='_apscheduler_json', dump_options=_Nothing.NOTHING, load_options=_Nothing.NOTHING)¶
Serializes objects using JSON.
Can serialize types not normally JSON serializable, if they implement __getstate__() and __setstate__(). These objects are serialized into dicts that contain the necessary information for deserialization in magic_key.
- Parameters:
magic_key – name of a specially handled dict key that indicates that a dict contains a serialized instance of an arbitrary type
dump_options – keyword arguments passed to json.dumps()
load_options – keyword arguments passed to json.loads()
- deserialize(serialized)¶
Restore a previously serialized object from a bytestring.
- Parameters:
serialized (bytes) – a bytestring previously received from serialize()
- Returns:
a copy of the original object
- serialize(obj)¶
Turn the given object into a bytestring.
- Return type:
- Returns:
a bytestring that can be later restored using deserialize()
- class apscheduler.serializers.pickle.PickleSerializer(*, protocol=4)¶
Uses the pickle module to (de)serialize objects.
As this serialization method is native to Python, it is able to serialize a wide range of types, at the expense of being insecure. Do not use this serializer unless you can fully trust the entire system to not have maliciously injected data. Such data can be made to call arbitrary functions with arbitrary arguments on unpickling.
- Parameters:
protocol (int) – the pickle protocol number to use
- deserialize(serialized)¶
Restore a previously serialized object from a bytestring.
- Parameters:
serialized (bytes) – a bytestring previously received from serialize()
- Returns:
a copy of the original object
- serialize(obj)¶
Turn the given object into a bytestring.
- Return type:
- Returns:
a bytestring that can be later restored using deserialize()
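All serializers share the serialize()/deserialize() pair documented above; a minimal round-trip sketch with the JSON serializer (the sample data is arbitrary):

```python
from apscheduler.serializers.json import JSONSerializer

serializer = JSONSerializer()
payload = serializer.serialize({"city": "Helsinki", "temperature": 21.5})
assert isinstance(payload, bytes)
assert serializer.deserialize(payload) == {"city": "Helsinki", "temperature": 21.5}
```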
Triggers¶
- class apscheduler.abc.Trigger(*args, **kwds)¶
Abstract base class that defines the interface that every trigger must implement.
- abstract next()¶
Return the next datetime to fire on.
If no such datetime can be calculated, None is returned.
- Raises:
apscheduler.exceptions.MaxIterationsReached
- Return type:
datetime | None
- class apscheduler.triggers.date.DateTrigger(run_time)¶
Triggers once on the given date/time.
- Parameters:
run_time (datetime | str | None) – the date/time to run the job at
- next()¶
Return the next datetime to fire on.
If no such datetime can be calculated, None is returned.
- Raises:
apscheduler.exceptions.MaxIterationsReached
- Return type:
datetime | None
- class apscheduler.triggers.interval.IntervalTrigger(*, weeks=0, days=0, hours=0, minutes=0, seconds=0, microseconds=0, start_time=_Nothing.NOTHING, end_time=None)¶
Triggers on specified intervals.
The first trigger time is on start_time, which is the moment the trigger was created unless specifically overridden. If end_time is specified, the last trigger time will be at or before that time. If no end_time has been given, the trigger will produce new trigger times as long as the resulting datetimes are valid datetimes in Python.
- Parameters:
weeks (float) – number of weeks to wait
days (float) – number of days to wait
hours (float) – number of hours to wait
minutes (float) – number of minutes to wait
seconds (float) – number of seconds to wait
microseconds (float) – number of microseconds to wait
start_time (datetime | str | None) – first trigger date/time (defaults to current date/time if omitted)
end_time (datetime | str | None) – latest possible date/time to trigger on
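A sketch of an interval trigger constrained by a start and end time; the dates are placeholders, and per the signature above they may also be given as strings:

```python
from datetime import datetime

from apscheduler.triggers.interval import IntervalTrigger

# Fire every 15 minutes between the given start and end times (placeholder dates)
trigger = IntervalTrigger(
    minutes=15,
    start_time=datetime(2023, 1, 2, 9, 0),
    end_time=datetime(2023, 12, 29, 17, 0),
)
```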
- next()¶
Return the next datetime to fire on.
If no such datetime can be calculated, None is returned.
- Raises:
apscheduler.exceptions.MaxIterationsReached
- Return type:
datetime | None
- class apscheduler.triggers.calendarinterval.CalendarIntervalTrigger(*, years=0, months=0, weeks=0, days=0, hour=0, minute=0, second=0, start_date=_Nothing.NOTHING, end_date=None, timezone='local')¶
Runs the task on specified calendar-based intervals always at the same exact time of day.
When calculating the next date, the years and months parameters are first added to the previous date while keeping the day of the month constant. This is repeated until the resulting date is valid. After that, the weeks and days parameters are added to that date. Finally, the date is combined with the given time (hour, minute, second) to form the final datetime.
This means that if the days or weeks parameters are not used, the task will always be executed on the same day of the month at the same wall clock time, assuming the date and time are valid.
If the resulting datetime is invalid due to a daylight saving forward shift, the date is discarded and the process moves on to the next date. If instead the datetime is ambiguous due to a backward DST shift, the earlier of the two resulting datetimes is used.
If no previous run time is specified when requesting a new run time (like when starting for the first time or resuming after being paused), start_date is used as a reference and the next valid datetime equal to or later than the current time will be returned. Otherwise, the next valid datetime starting from the previous run time is returned, even if it’s in the past.
Warning
Be wary of setting a start date near the end of the month (the 29th to the 31st) if you have months specified in your interval, as this will skip the months when those days do not exist. Likewise, setting the start date on the leap day (February 29th) and having years defined may cause some years to be skipped.
Users are also discouraged from using a time inside the target timezone’s DST switching period (typically around 2 am), since a date could either be skipped or repeated due to the specified wall clock time either occurring twice or not at all.
- Parameters:
years (int) – number of years to wait
months (int) – number of months to wait
weeks (int) – number of weeks to wait
days (int) – number of days to wait
hour (int) – hour to run the task at
minute (int) – minute to run the task at
second (int) – second to run the task at
start_date (date | str | None) – first date to trigger on (defaults to current date if omitted)
end_date (date | str | None) – latest possible date to trigger on
timezone (str | tzinfo | None) – time zone to use for calculating the next fire time
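A sketch of a trigger that runs on the first day of every month at a fixed wall-clock time; the start date and time zone are placeholder values:

```python
from apscheduler.triggers.calendarinterval import CalendarIntervalTrigger

# Runs on the 1st of each month at 03:30 in the given time zone (placeholder values)
trigger = CalendarIntervalTrigger(
    months=1,
    hour=3,
    minute=30,
    start_date="2023-01-01",
    timezone="Europe/London",
)
```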
- next()¶
Return the next datetime to fire on.
If no such datetime can be calculated, None is returned.
- Raises:
apscheduler.exceptions.MaxIterationsReached
- Return type:
datetime | None
- class apscheduler.triggers.combining.AndTrigger(triggers, threshold=1, max_iterations=10000)¶
Fires on times produced by the enclosed triggers whenever the fire times are within the given threshold.
If the produced fire times are not within the given threshold of each other, the trigger(s) that produced the earliest fire time will be asked for their next fire time and the iteration is restarted. If instead all the triggers agree on a fire time, all the triggers are asked for their next fire times and the earliest of the previously produced fire times will be returned.
This trigger will be finished when any of the enclosed triggers has finished.
- Parameters:
triggers – triggers to combine
threshold – maximum time difference between the next fire times of the triggers in order for the earliest of them to be returned from next() (in seconds, or as a timedelta)
max_iterations – maximum number of iterations of fire time calculations before giving up
- next()¶
Return the next datetime to fire on.
If no such datetime can be calculated, None is returned.
- Raises:
apscheduler.exceptions.MaxIterationsReached
- Return type:
datetime | None
- class apscheduler.triggers.combining.OrTrigger(triggers)¶
Fires on every fire time of every trigger in chronological order. If two or more triggers produce the same fire time, it will only be used once.
This trigger will be finished when none of the enclosed triggers can produce any new fire times.
- Parameters:
triggers – triggers to combine
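A sketch combining two cron triggers with OrTrigger so a task fires on either pattern; the crontab expressions are placeholders (CronTrigger.from_crontab() is documented below):

```python
from apscheduler.triggers.combining import OrTrigger
from apscheduler.triggers.cron import CronTrigger

# Fires at 07:00 on weekdays and at 10:00 on weekends
trigger = OrTrigger(
    [
        CronTrigger.from_crontab("0 7 * * mon-fri"),
        CronTrigger.from_crontab("0 10 * * sat,sun"),
    ]
)
```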
- next()¶
Return the next datetime to fire on.
If no such datetime can be calculated, None is returned.
- Raises:
apscheduler.exceptions.MaxIterationsReached
- Return type:
datetime | None
- class apscheduler.triggers.cron.CronTrigger(*, year=None, month=None, day=None, week=None, day_of_week=None, hour=None, minute=None, second=None, start_time=_Nothing.NOTHING, end_time=None, timezone=_Nothing.NOTHING)¶
Triggers when current time matches all specified time constraints, similarly to how the UNIX cron scheduler works.
- Parameters:
day_of_week (int | str | None) – number or name of weekday (0-7 or sun,mon,tue,wed,thu,fri,sat,sun)
start_time (datetime | str | None) – earliest possible date/time to trigger on (defaults to current time)
end_time (datetime | None) – latest possible date/time to trigger on
timezone (str | tzinfo | None) – time zone to use for the date/time calculations (defaults to the local timezone)
Note
The first weekday is always Monday.
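A sketch of a cron trigger built with keyword arguments rather than a crontab expression (see from_crontab() below); the field values are placeholders:

```python
from apscheduler.triggers.cron import CronTrigger

# Fires at 08:30 on Monday through Friday, in the local time zone by default
trigger = CronTrigger(day_of_week="mon-fri", hour=8, minute=30)
```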
- classmethod from_crontab(expr, timezone='local')¶
Create a CronTrigger from a standard crontab expression.
See https://en.wikipedia.org/wiki/Cron for more information on the format accepted here.
- Parameters:
- Return type:
- next()¶
Return the next datetime to fire on.
If no such datetime can be calculated, None is returned.
- Raises:
apscheduler.exceptions.MaxIterationsReached
- Return type:
datetime | None
Events¶
- class apscheduler.Event(*, timestamp=_Nothing.NOTHING)¶
Base class for all events.
- Variables:
timestamp – the time when the event occurred
- class apscheduler.DataStoreEvent(*, timestamp=_Nothing.NOTHING)¶
Base class for events originating from a data store.
- class apscheduler.TaskAdded(*, timestamp=_Nothing.NOTHING, task_id)¶
Signals that a new task was added to the store.
- Variables:
task_id – ID of the task that was added
- class apscheduler.TaskUpdated(*, timestamp=_Nothing.NOTHING, task_id)¶
Signals that a task was updated in a data store.
- Variables:
task_id – ID of the task that was updated
- class apscheduler.TaskRemoved(*, timestamp=_Nothing.NOTHING, task_id)¶
Signals that a task was removed from the store.
- Variables:
task_id – ID of the task that was removed
- class apscheduler.ScheduleAdded(*, timestamp=_Nothing.NOTHING, schedule_id, next_fire_time)¶
Signals that a new schedule was added to the store.
- Variables:
schedule_id – ID of the schedule that was added
next_fire_time – the first run time calculated for the schedule
- class apscheduler.ScheduleUpdated(*, timestamp=_Nothing.NOTHING, schedule_id, next_fire_time)¶
Signals that a schedule has been updated in the store.
- Variables:
schedule_id – ID of the schedule that was updated
next_fire_time – the next time the schedule will run
- class apscheduler.ScheduleRemoved(*, timestamp=_Nothing.NOTHING, schedule_id)¶
Signals that a schedule was removed from the store.
- Variables:
schedule_id – ID of the schedule that was removed
- class apscheduler.JobAdded(*, timestamp=_Nothing.NOTHING, job_id, task_id, schedule_id, tags)¶
Signals that a new job was added to the store.
- Variables:
job_id – ID of the job that was added
task_id – ID of the task the job would run
schedule_id – ID of the schedule the job was created from
tags – the set of tags collected from the associated task and schedule
- class apscheduler.JobRemoved(*, timestamp=_Nothing.NOTHING, job_id)¶
Signals that a job was removed from the store.
- Variables:
job_id – ID of the job that was removed
- class apscheduler.ScheduleDeserializationFailed(*, timestamp=_Nothing.NOTHING, schedule_id, exception)¶
Signals that the deserialization of a schedule has failed.
- Variables:
schedule_id – ID of the schedule that failed to deserialize
exception – the exception that was raised during deserialization
- class apscheduler.JobDeserializationFailed(*, timestamp=_Nothing.NOTHING, job_id, exception)¶
Signals that the deserialization of a job has failed.
- Variables:
job_id – ID of the job that failed to deserialize
exception – the exception that was raised during deserialization
- class apscheduler.SchedulerEvent(*, timestamp=_Nothing.NOTHING)¶
Base class for events originating from a scheduler.
- class apscheduler.SchedulerStarted(*, timestamp=_Nothing.NOTHING)¶
Signals that a scheduler has started.
- class apscheduler.SchedulerStopped(*, timestamp=_Nothing.NOTHING, exception=None)¶
Signals that a scheduler has stopped.
- Variables:
exception – the exception that caused the scheduler to stop, if any
- class apscheduler.JobAcquired(*, timestamp=_Nothing.NOTHING, job_id, worker_id)¶
Signals that a worker has acquired a job for processing.
- class apscheduler.JobReleased(*, timestamp=_Nothing.NOTHING, job_id, worker_id, outcome, exception_type=None, exception_message=None, exception_traceback=None)¶
Signals that a worker has finished processing of a job.
- Parameters:
job_id (UUID | str) – the ID of the job that was released
worker_id (str) – the ID of the worker that released the job
outcome (TEnum | str) – the outcome of the job
exception_type (str | None) – the fully qualified name of the exception if outcome is JobOutcome.error
exception_message (str | None) – the result of str(exception) if outcome is JobOutcome.error
exception_traceback (list[str] | None) – the traceback lines from the exception if outcome is JobOutcome.error
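Events like these can be received through the subscribe() methods documented earlier; a sketch that reports each finished job, assuming the constructor parameters above are exposed as attributes of the same names (the callback name is illustrative):

```python
from apscheduler import JobOutcome, JobReleased
from apscheduler.schedulers.sync import Scheduler

def on_job_released(event: JobReleased) -> None:
    # Report jobs that ended with an error outcome separately
    if event.outcome == JobOutcome.error:
        print(f"job {event.job_id} failed: {event.exception_type}: {event.exception_message}")
    else:
        print(f"job {event.job_id} finished with outcome {event.outcome}")

scheduler = Scheduler()
scheduler.subscribe(on_job_released, {JobReleased})
```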
Enumerated types¶
- class apscheduler.RunState(value)¶
Used to track the running state of schedulers and workers.
Values:
starting: not running yet, but in the process of starting
started: running
stopping: still running, but in the process of shutting down
stopped: not running
- class apscheduler.JobOutcome(value)¶
Used to indicate how the execution of a job ended.
Values:
success: the job completed successfully
error: the job raised an exception
missed_start_deadline: the job’s execution was delayed enough for it to miss its deadline
cancelled: the job’s execution was cancelled
- class apscheduler.ConflictPolicy(value)¶
Used to indicate what to do when trying to add a schedule whose ID conflicts with an existing schedule.
Values:
replace: replace the existing schedule with the new one
do_nothing: keep the existing schedule as-is and drop the new schedule
exception: raise an exception if a conflict is detected
- class apscheduler.CoalescePolicy(value)¶
Used to indicate how to queue jobs for a schedule that has accumulated multiple run times since the last scheduler iteration.
Values:
earliest: run once, with the earliest fire time
latest: run once, with the latest fire time
all: submit one job for every accumulated fire time
Context variables¶
See the contextvars module for information on how to work with context variables.
- apscheduler.current_scheduler: ContextVar[Union[Scheduler, AsyncScheduler]] – the current scheduler¶
- apscheduler.current_worker: ContextVar[Union[Worker, AsyncWorker]] – the current worker¶
- apscheduler.current_job: ContextVar[JobInfo] – information on the job being currently run¶
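A sketch of reading the current job’s information from inside a task function; the function name is illustrative:

```python
from apscheduler import current_job

def task_func() -> None:
    # current_job is only set in the thread or task where a job is currently being run
    info = current_job.get()
    print(f"running job {info.job_id} of task {info.task_id}")
```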
Exceptions¶
- exception apscheduler.TaskLookupError(task_id)¶
Raised by a data store when it cannot find the requested task.
- exception apscheduler.ScheduleLookupError(schedule_id)¶
Raised by a scheduler when it cannot find the requested schedule.
- exception apscheduler.JobLookupError(job_id)¶
Raised when the job store cannot find a job for update or removal.
- exception apscheduler.JobResultNotReady(job_id)¶
Raised by get_job_result() if the job result is not ready.
- exception apscheduler.JobCancelled¶
Raised by get_job_result() if the job was cancelled.
- exception apscheduler.JobDeadlineMissed¶
Raised by get_job_result() if the job failed to start within the allotted time.
- exception apscheduler.ConflictingIdError(schedule_id)¶
Raised when trying to add a schedule to a store that already contains a schedule by that ID, and the conflict policy of exception is used.
- exception apscheduler.SerializationError¶
Raised when a serializer fails to serialize the given object.
- exception apscheduler.DeserializationError¶
Raised when a serializer fails to deserialize the given object.
- exception apscheduler.MaxIterationsReached¶
Raised when a trigger has reached its maximum number of allowed computation iterations when trying to calculate the next fire time.