Prepare v2 docs (#1998)

* Prepare docs for release

* Blackify executions.py

* Added a high level overview of RQ concepts
This commit is contained in:
Selwin Ong 2024-10-28 21:55:42 +07:00 committed by GitHub
parent 899e2ec196
commit b145f12fff
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
5 changed files with 66 additions and 20 deletions

View File

@ -404,7 +404,7 @@ q.delete(delete_jobs=True) # Passing in `True` will remove all jobs in the queue
### On the Design
With RQ, you don't have to set up any queues upfront, and you don't have to
specify any channels, exchanges, routing rules, or whatnot. You can just put
jobs onto any queue you want. As soon as you enqueue a job to a queue that
does not exist yet, it is created on the fly.
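This on-the-fly behavior can be sketched with a plain Python dict, with no assumptions about RQ internals (names here are illustrative, not RQ's API):

```python
from collections import defaultdict, deque

# A toy broker: queues spring into existence on first use, mirroring how
# RQ creates the backing Redis structure the first time a job is pushed
# to a queue name that did not exist before.
queues = defaultdict(deque)

def enqueue(queue_name, job):
    queues[queue_name].append(job)  # creates the queue if it is absent

enqueue("emails", "send_welcome")
print(sorted(queues))  # ['emails']
```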

View File

@ -10,6 +10,35 @@ jobs.
## RQ's Job Object
### The Job Lifecycle
The lifecycle of a job consists of a few phases:
1. _Queued_. When `queue.enqueue(foo)` is called, a `Job` will be created and its ID
pushed into the queue. `job.get_status()` will return `queued`.
2. _Started_. When a worker picks up a job from the queue, the job status will be set to `started`.
In this phase, an `Execution` object will be created and its `composite_key` put in `StartedJobRegistry`.
3. _Finished_. After an execution has ended, the `Execution` will be removed from `StartedJobRegistry`.
A `Result` object that holds the outcome of the execution will be created. Both the `Job` and `Result` keys
will persist in Redis until `result_ttl` expires. More details [here](/docs/results/).
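The three phases above can be sketched as a toy state machine in plain Python, with no Redis involved (all names are illustrative, not RQ's internals):

```python
from collections import deque

queue = deque()            # the job queue
started_registry = set()   # stands in for StartedJobRegistry
results = {}               # stands in for stored Result objects

def enqueue(job_id):
    queue.append(job_id)                  # 1. Queued
    return "queued"

def start(job_id):
    queue.remove(job_id)
    key = f"{job_id}:exec-1"              # composite_key-style entry
    started_registry.add(key)             # 2. Started
    return key

def finish(key, value):
    started_registry.remove(key)          # 3. Finished
    job_id = key.split(":")[0]
    results[job_id] = value               # Result persists until result_ttl

enqueue("job-1")
key = start("job-1")
finish(key, 42)
print(results)  # {'job-1': 42}
```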
#### Job Status
The status of a job can be one of the following:
* `queued`: The default status for created jobs, except for those that have dependencies, which will be created as `deferred`. These jobs have been placed in a queue and are ready to be executed.
* `finished`: The job has finished execution and is available through the finished job registry.
* `failed`: Jobs that encountered errors during execution or expired before being executed.
* `started`: The job has started execution. This status includes the job execution support mechanisms, such as setting the worker name and setting up heartbeat information.
* `deferred`: The job is not ready for execution because its dependencies have not finished successfully yet.
* `scheduled`: Jobs created to run at a future date or jobs that are retried after a retry interval.
* `stopped`: The job was stopped because the worker was stopped.
* `canceled`: The job has been manually canceled and will not be executed, even if it is part of a dependency chain.
These statuses can also be accessed from the job object using boolean properties, such as `job.is_finished`.
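A minimal sketch of how a status string can back boolean properties like `job.is_finished` (a toy class, not RQ's actual `Job` implementation):

```python
class ToyJob:
    """Illustrative only: maps a status string to boolean properties."""

    def __init__(self, status):
        self.status = status

    @property
    def is_finished(self):
        return self.status == "finished"

    @property
    def is_queued(self):
        return self.status == "queued"

job = ToyJob("finished")
print(job.is_finished, job.is_queued)  # True False
```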
### Job Creation
When you enqueue a function, a job will be returned. You may then access the
@ -125,20 +154,33 @@ for job in jobs:
print('Job %s: %s' % (job.id, job.func_name))
```
## Job Executions
_New in 2.0_
When a job is being executed, RQ stores its execution data in Redis. You can access this data
via `Execution` objects.
```python
from redis import Redis
from rq.job import Job
redis = Redis()
job = Job.fetch('my_job_id', connection=redis)
executions = job.get_executions() # Returns all current executions
execution = job.get_executions()[0] # Retrieves a single execution
print(execution.created_at) # When did this execution start?
print(execution.last_heartbeat) # Worker's last heartbeat
```
`Execution` objects have a few properties:
* `id`: ID of an execution.
* `job`: the `Job` object that owns this execution instance
* `composite_key`: a combination of `job.id` and `execution.id`, formatted as `<job_id>:<execution_id>`
* `created_at`: returns a datetime object representing the start of this execution
* `last_heartbeat`: worker's last heartbeat
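Since `composite_key` is just `<job_id>:<execution_id>`, building and parsing one is straightforward. The helper names below are hypothetical, for illustration only:

```python
def compose_key(job_id, execution_id):
    # <job_id>:<execution_id>, as described above
    return f"{job_id}:{execution_id}"

def split_key(composite_key):
    # Split on the first colon only, in case the execution ID contains one
    job_id, execution_id = composite_key.split(":", 1)
    return job_id, execution_id

key = compose_key("my_job_id", "abc123")
print(key)             # my_job_id:abc123
print(split_key(key))  # ('my_job_id', 'abc123')
```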
## Stopping a Currently Executing Job
_New in version 1.7.0_

View File

@ -79,7 +79,7 @@ _New in version 1.14.0._
* `--maintenance-interval`: defaults to 600 seconds. Runs maintenance tasks every X seconds.
## Inside the Worker
### The Worker Lifecycle
@ -348,8 +348,6 @@ To implement this strategy use `-ds round_robin` argument.
To dequeue jobs from the different queues randomly, use `-ds random` argument.
Deprecation Warning: these strategies were formerly implemented using the custom classes `rq.worker.RoundRobinWorker`
and `rq.worker.RandomWorker`. Since the `--dequeue-strategy` argument makes this behavior available to any worker, those worker classes are deprecated and will be removed in future versions.
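The round-robin idea can be sketched in plain Python: visit the queues in turn, taking one job at a time. This is an illustrative sketch of the dequeue order, not RQ's implementation:

```python
from collections import deque
from itertools import cycle

def round_robin_dequeue(queues):
    """Drain the given {name: deque} mapping one job per queue per turn."""
    order = []
    names = cycle(list(queues))
    remaining = sum(len(q) for q in queues.values())
    while remaining:
        name = next(names)
        if queues[name]:                       # skip queues that are empty
            order.append(queues[name].popleft())
            remaining -= 1
    return order

qs = {"high": deque(["h1", "h2"]), "low": deque(["l1", "l2"])}
print(round_robin_dequeue(qs))  # ['h1', 'l1', 'h2', 'l2']
```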
## Custom Job and Queue Classes

View File

@ -9,7 +9,7 @@ to have a low barrier to entry. It can be integrated in your web stack easily.
RQ requires Redis >= 3.0.0.
## Getting Started
First, run a Redis server. You can use an existing one. To put jobs on
queues, you don't have to do anything special, just define your typically
@ -61,7 +61,7 @@ queue.enqueue(say_hello, retry=Retry(max=3))
queue.enqueue(say_hello, retry=Retry(max=3, interval=[10, 30, 60]))
```
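The `interval=[10, 30, 60]` list above reads as a backoff schedule. One plausible way to pick a delay per attempt, assuming later retries reuse the last listed value (a sketch of the idea, not RQ's exact implementation):

```python
def retry_delay(intervals, attempt):
    # attempt 0 -> first retry. Clamp to the last interval once the
    # attempt count exceeds the list length. (Illustrative helper.)
    return intervals[min(attempt, len(intervals) - 1)]

delays = [retry_delay([10, 30, 60], a) for a in range(4)]
print(delays)  # [10, 30, 60, 60]
```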
### The Worker
To start executing enqueued function calls in the background, start a worker
from your project's directory:
@ -83,12 +83,17 @@ Simply use the following command to install the latest released version:
pip install rq
If you want the cutting edge version (that may well be broken), use this:
pip install git+https://github.com/nvie/rq.git@master#egg=rq
## High Level Overview
There are several important concepts in RQ:
1. `Queue`: contains a list of `Job` instances to be executed in a FIFO manner.
2. `Job`: contains the function to be executed by the worker.
3. `Worker`: responsible for getting `Job` instances from a `Queue` and executing them.
4. `Execution`: contains runtime data of a `Job`, created by a `Worker` when it executes a `Job`.
5. `Result`: stores the outcome of an `Execution`, whether it succeeded or failed.
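How the five concepts fit together can be sketched with toy classes, without Redis (illustrative structure only, not RQ's API):

```python
from collections import deque

class Job:
    """Wraps the function to be executed by the worker."""
    def __init__(self, func):
        self.func = func

class Queue:
    """Holds Job instances, executed in FIFO order."""
    def __init__(self):
        self._jobs = deque()

    def enqueue(self, func):
        self._jobs.append(Job(func))

    def dequeue(self):
        return self._jobs.popleft()

class Worker:
    """Pulls a Job from a Queue and runs it."""
    def work(self, queue):
        job = queue.dequeue()
        execution = {"job": job, "status": "started"}  # Execution: runtime data
        try:
            value = job.func()
            execution["status"] = "finished"
            return {"ok": True, "value": value}        # Result: success
        except Exception as exc:
            execution["status"] = "failed"
            return {"ok": False, "error": str(exc)}    # Result: failure

q = Queue()
q.enqueue(lambda: 2 + 2)
result = Worker().work(q)
print(result)  # {'ok': True, 'value': 4}
```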
## Project History
This project has been inspired by the good parts of [Celery][1], [Resque][2]
and [this snippet][3], and has been created as a lightweight alternative to

View File

@ -12,6 +12,7 @@ from .registry import BaseRegistry, StartedJobRegistry
from .utils import as_text, current_timestamp, now
# TODO: add execution.worker
class Execution:
"""Class to represent an execution of a job."""