Merge branch 'master' of github.com:rq/rq

Selwin Ong 2024-05-01 14:20:44 +07:00
commit 4efd81e648
2 changed files with 27 additions and 2 deletions

View File

@@ -471,6 +471,31 @@ To learn about workers, see the [workers][w] documentation.
 [w]: {{site.baseurl}}workers/
+
+## Suspending and Resuming
+
+Sometimes you may want to suspend RQ to prevent it from processing new jobs.
+A classic example is during the initial phase of a deployment script, or in
+advance of putting your site into maintenance mode. This is particularly
+helpful when you have jobs that are relatively long-running and might
+otherwise be forcibly killed during the deploy.
+
+The `suspend` command stops workers on _all_ queues (in a single Redis
+database) from picking up new jobs. However, currently running jobs will
+continue until completion.
+
+```bash
+# Suspend indefinitely
+rq suspend
+
+# Suspend for a specific duration (in seconds), then automatically
+# resume work again
+rq suspend --duration 300
+
+# Resume work again
+rq resume
+```
+
 ## Considerations for jobs
 
 Technically, you can put any Python function call on a queue, but that does not
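Beyond the CLI shown in the added docs, suspension can also be driven programmatically. A minimal sketch, assuming the `suspend`, `resume`, and `is_suspended` helpers in `rq.suspension` behave as in current RQ releases (check your installed version before relying on this):

```python
from redis import Redis

# Assumed API: rq.suspension exposes suspend(connection, ttl=None),
# resume(connection), and is_suspended(connection).
from rq.suspension import is_suspended, resume, suspend

redis = Redis()

suspend(redis)              # workers stop picking up new jobs
print(is_suspended(redis))  # truthy while suspended

suspend(redis, ttl=300)     # or suspend for 300 seconds, then auto-resume

resume(redis)               # lift the suspension immediately
print(is_suspended(redis))  # falsy again
```

This mirrors what `rq suspend` / `rq resume` do under the hood: a flag in the same Redis database that all workers consult before dequeueing.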

View File

@@ -601,7 +601,7 @@ class Job:
         return job
 
     @classmethod
-    def fetch_many(cls, job_ids: Iterable[str], connection: 'Redis', serializer=None) -> List['Job']:
+    def fetch_many(cls, job_ids: Iterable[str], connection: 'Redis', serializer=None) -> List[Optional['Job']]:
         """
         Bulk version of Job.fetch
@@ -614,7 +614,7 @@ class Job:
             serializer (Callable): A serializer
 
         Returns:
-            jobs (list[Job]): A list of Jobs instances.
+            jobs (list[Optional[Job]]): A list of Job instances; elements are None if a job_id does not exist.
         """
         parsed_ids = [parse_job_id(job_id) for job_id in job_ids]
         with connection.pipeline() as pipeline:
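The annotation change documents that missing IDs yield `None` elements in the returned list, positionally aligned with the input, so callers should handle them. A usage sketch based on the signature shown in the diff; the job IDs below are hypothetical:

```python
from redis import Redis
from rq.job import Job

redis = Redis()

# Hypothetical IDs; assume the second one does not exist in Redis.
job_ids = ['d84a3f2a', 'no-such-job']
jobs = Job.fetch_many(job_ids, connection=redis)  # -> List[Optional[Job]]

for job_id, job in zip(job_ids, jobs):
    if job is None:
        print(f'{job_id}: not found')
    else:
        print(f'{job_id}: {job.get_status()}')
```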