* Jobs can return a Retry object
* Fix test on Redis < 5
* Fix ruff warnings
* Minor housekeeping
* More minor fixes
* More typing improvements
* More fixes
* Fix typing
* Add an exception handler for the pubsub thread
After a connection error to Redis, the pubsub thread was stopped and the
worker could no longer receive commands.
This commit adds an exception handler for the thread that adds a log
message and ignores `redis.exceptions.ConnectionError`. Any other
exception is re-raised.
redis-py's internal mechanism allows the pubsub thread to recover its
connection and reinstall the pubsub channel subscription, so the worker
can receive commands again after connection errors.
It tries to behave the same as the main worker loop retry mechanism but
without the backoff wait factor.
Fixes #1836, fixes #2070
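The handler described above can be sketched as follows. This is illustrative, not RQ's exact implementation; the factory function and its wiring are assumptions, but the callback signature matches redis-py's `PubSub.run_in_thread(exception_handler=...)`:

```python
import logging

logger = logging.getLogger(__name__)

def make_pubsub_exception_handler(ignored_exceptions):
    # Matches redis-py's exception_handler signature for
    # PubSub.run_in_thread(): handler(exception, pubsub, thread).
    # Connection errors are logged and swallowed so redis-py can
    # reconnect and re-subscribe; any other exception is re-raised.
    def handler(exc, pubsub, pubsub_thread):
        if isinstance(exc, ignored_exceptions):
            logger.warning('Connection error in pubsub thread: %s', exc)
        else:
            raise exc
    return handler
```

In the worker this would be installed roughly as `pubsub.run_in_thread(exception_handler=make_pubsub_exception_handler((redis.exceptions.ConnectionError,)))`.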
* Add test for untested line, improve logging and comments
Add *args & **kwargs to tests.fixtures.raise_exc to pass tests with Python 3.7
Prior to this commit it was not possible to get the number of jobs in a
registry or list job IDs without triggering a cleanup.
Calling cleanup caused issues for:
* Monitoring tools (see https://github.com/rq/rq/pull/2104) and
* Cases with high number of jobs in the registries (see
https://github.com/rq/rq/pull/2003).
* Cases where a failure callback is registered but `get_job_ids` is
  called from outside the main thread.
In this commit:
1. A new `BaseRegistry.get_job_count` method is added. It is a side
effect free version of `BaseRegistry.count` and runs in O(1).
2. A `cleanup` parameter was added to `get_job_ids` that skips the
cleanup when set to `False`.
Resolves https://github.com/rq/rq/pull/2104.
Resolves https://github.com/rq/rq/pull/2003.
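A minimal sketch of the two changes, assuming registries store job ids in a sorted set scored by expiry time (the registry key name and the cleanup body here are illustrative, not RQ's exact code):

```python
import time

def get_job_count(connection, registry_key):
    # Side-effect-free count: ZCARD runs in O(1) and, unlike count(),
    # never triggers a cleanup pass.
    return connection.zcard(registry_key)

def get_job_ids(connection, registry_key, cleanup=True):
    if cleanup:
        # Drop entries whose expiry score is in the past (side effect).
        connection.zremrangebyscore(registry_key, 0, time.time())
    # With cleanup=False, expired entries may still be listed,
    # but nothing is mutated.
    return [jid.decode() for jid in connection.zrange(registry_key, 0, -1)]
```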
* Execution objects now return UTC timestamps
* execution.job should return a full Job instance
* worker.maintain_heartbeats() should extend execution TTL
* Don't use cached_property since it's not supported on Python 3.7
* Fixed type hint of execution._job
* add info about running docs in README
* update jekyll config to fix error
* update docs to clarify misc timeout values
* update readme
* update argument from default_worker_ttl -> worker_ttl
* Fix black formatting
* argument should still be respected until it's deprecated
* Updated CHANGES.md
* Updated CHANGES.md
---------
Co-authored-by: Sean Villars <sean@setpoint.io>
This change checks for Redis connection socket timeouts that are too
short for blocking operations such as BLPOP, and raises them so they are
at least as long as the expected blocking timeout.
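The adjustment can be sketched as a pure function; the function name and the margin value are illustrative assumptions, not RQ's exact numbers:

```python
def adjusted_socket_timeout(socket_timeout, blocking_timeout, margin=10):
    # If the socket timeout is shorter than the timeout passed to a
    # blocking command such as BLPOP, the socket times out before the
    # command can return. Pad it so the command always wins the race.
    if socket_timeout is not None and socket_timeout <= blocking_timeout:
        return blocking_timeout + margin
    return socket_timeout
```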
* Add new deferred_ttl attribute to jobs
* Add DeferredJobRegistry to cleanup
* Use normal ttl for deferred jobs as well
* Test that jobs landing in deferred queue get a TTL
* Pass pipeline in job cleanup
* Remove cleanup call
We pass a ttl of -1, which does nothing
* Pass exc_info to add
The add implementation overwrites it so it won't get lost this way
* Remove extraneous save call
The add function already saves the job for us. So no need to save it
twice.
* Tune cleanup function description
* Test cleanup also works for deleted jobs
* Replace testconn with connection
---------
Co-authored-by: Raymond Guo <raymond.guo@databricks.com>
* Add test to list queue with only deferred jobs
Signed-off-by: Simó Albert i Beltran <sim6@probeta.net>
* Register queue also for deferred job
Without this change queues initialized with only deferred jobs are not
listed by `rq.Queue.all()`.
Signed-off-by: Simó Albert i Beltran <sim6@probeta.net>
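The fix amounts to registering the queue key even when the job goes straight to the deferred registry. A sketch, where the key names follow RQ's conventions but are assumptions here:

```python
def enqueue_deferred(connection, queue_name, job_id):
    # Adding the queue to the global 'rq:queues' set is what makes
    # Queue.all() discover it; without this SADD, a queue holding only
    # deferred jobs is invisible.
    pipe = connection.pipeline()
    pipe.sadd('rq:queues', 'rq:queue:' + queue_name)
    pipe.zadd('rq:deferred:' + queue_name, {job_id: 0})
    pipe.execute()
```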
---------
Signed-off-by: Simó Albert i Beltran <sim6@probeta.net>
* Execute custom handlers when abandoned job is found
* Make exception_handlers optional
* Run exception handlers before checking retry
* Test abandoned job handler is called
* Improve test coverage
* Fix handler order
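The ordering fixed above (handlers first, retry check second) can be sketched like this; the job shape and retry check are hypothetical stand-ins:

```python
def handle_abandoned_job(job, exception_handlers=None):
    """Run custom handlers (if any) before deciding whether to retry.

    A handler returning False stops the chain, mirroring how RQ's
    regular exception handlers behave.
    """
    for handler in exception_handlers or []:
        if handler(job) is False:
            break
    # Retry is only evaluated after all handlers have seen the failure.
    return job.get('retries_left', 0) > 0  # hypothetical retry check
```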
* Batch.py initial commit
* Add method to refresh batch status
* Clean up Redis key properties
* Remove persist_job method
* Execute refresh method when fetching batch
* Add batch_id to job
* Add option to batch jobs with enqueue_many
* Add set_batch_id method
* Handle batch_ttl when job is started
* Renew ttl on all jobs in batch when a job finishes
* Use fetch_jobs method in refresh()
* Remove batch TTL
During the worker maintenance task, the worker loops through batches and
deletes expired jobs.
When all jobs have expired, the batch is deleted.
* Fix buggy connection
* Raise batch error in fetch method
* Fix fetch connection arg
* Add testing for batch
* Remove unused import
* Update batch documentation
* Add batch ID to job persistence test
* Test that empty batch is removed from batch list
* Make default batch ID full uuid4 to match Job
* Add method to return batch key
* Add all method to return iterable of batches
* Revert changes to queue.py
* Separate batch creating from job enqueueing.
Batches can be created by passing an array of jobs.
* Fix bug with stopped callback in enqueue_many
* Rename delete_expired_jobs batch method
* Use exists to identify expired jobs
* Remove jobs attribute from batch
Jobs have to be explicitly fetched using the get_jobs method
* Don't add batch_id to job in _add_jobs method
This should be done when the job is created, before it is saved to Redis.
* Use cleanup method to determine if batch should be deleted
* Add get_jobs method
This method will return an array of jobs belonging to the batch.
* Update create method to not allow jobs arg
Jobs should be added to batches in the Queue.enqueue_many
function
* Add batch_id to job create method
* Delete set_batch_id method
The batch ID should be set atomically when the job is created
* Add batch argument to queue.enqueue_many
This will be how jobs can be batched. The batch ID is added to each job
created by enqueue_many, and then those jobs are all added to the batch
in Redis using the batch's _add_jobs method
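A sketch of that flow, with jobs as plain dicts and illustrative names (the real `_add_jobs` works on Job objects):

```python
def add_jobs_to_group(pipeline, group_key, jobs, group_id):
    # Stamp each job with the group id before it is saved, then add all
    # job ids to the group's Redis set with a single SADD on the shared
    # pipeline, so the whole operation commits atomically.
    for job in jobs:
        job['group_id'] = group_id
    pipeline.sadd(group_key, *(job['id'] for job in jobs))
```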
* Update batch tests to use new enqueue_many arg
* Fix batch_id for non_batched jobs
* Use different pipeline name
* Don't use fetch method when enqueuing jobs
If the batch is empty, fetch will delete it from Redis
* Test enqueuing batched jobs with string ID
* Update batch documentation
* Remove unused variables
* Fix expired job tracking bug
* Add delete_job method
* Use delete_job method on batch
* Pass multiple jobs into srem command
* Add multiple keys to exists call
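Putting the last few commits together, expired-job cleanup might look like this; key names are assumptions modelled on RQ's conventions:

```python
def delete_expired_jobs(connection, group_key, job_key_prefix='rq:job:'):
    # A job whose Redis hash TTL has elapsed no longer EXISTS, so it is
    # identified without storing extra state, and all expired ids are
    # removed from the group set with a single SREM.
    job_ids = [j.decode() if isinstance(j, bytes) else j
               for j in connection.smembers(group_key)]
    expired = [jid for jid in job_ids
               if not connection.exists(job_key_prefix + jid)]
    if expired:
        connection.srem(group_key, *expired)
    return expired
```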
* Pipeline _add_jobs call
* Fix missing job_id arg
* Move clean_batch_registry to Batch class
* Remove unused variables
* Fix missing dependency deletion
I accidentally deleted this earlier
* Move batch enqueueing to batch class
* Update batch docs
* Add test for batched jobs with str queue arg
* Don't delete batch in cleanup
* Change Batch to Group
* Import EnqueueData for type hint
* Sort imports
* Remove local config file
* Rename clean_group_registries method
* Update feature version
* Fix group constant names
* Only allow Queue in enqueue many
* Move cleanup logic to classmethod
* Remove str group test
* Remove unused import
* Make pipeline mandatory in _add_jobs method
* Rename expired job ids array
* Add slow decorator
* Assert job 1 data in group
* Rename id arg to name
* Only run cleanup when jobs are retrieved
* Remove job_ids variable
* Fix group name arg
* Fix group name arg
* Clean groups before fetching
* Use uuid4().hex for shorter id
* Move cleanup logic to instance method
* Use all method when cleaning group registry
* Check if group exists instead of catching error
* Cleanup expired jobs
* Apply black formatting
* Fix black formatting
* Pipeline group cleanup
* Fix pipeline call to exists
* Add pyenv version file
* Remove group cleanup after job completes
* Use existing pipeline
* Remove unneeded pipeline check
* Fix empty call
* Test group __repr__ method
* Remove unnecessary pipeline assignment
* Remove unused delete method
* Fix pipeline name
* Remove unnecessary conditional block
* Add result blocking
* Dev tidyup
* Skip XREAD test on old redis versions
* Lint
* Clarify that the latest result is returned.
* Fix job test ordering
* Remove test latency hack
* Readability improvements
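Result blocking via XREAD could be sketched as below; the stream key and return shape are assumptions for illustration:

```python
def wait_for_result(connection, stream_key, timeout_ms=5000):
    # Block until a new entry arrives on the job's result stream; the
    # '$' id means "only entries added after this call". Returns the
    # latest entry's field dict, or None on timeout.
    response = connection.xread({stream_key: '$'}, count=1, block=timeout_ms)
    if not response:
        return None
    _stream, entries = response[0]
    return entries[-1][1]
```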
* Removed pop_connection from Worker.find_by_key()
* Removed push_connection from worker.work()
* Remove push_connection from workers.py and cli
* Remove get_current_connection() from worker._set_connection()
* Fix failing tests
* Removed the use of get_current_connection()
* WIP removing get_connection()
* Fixed tests in test_dependencies.py
* Fixed a few test files
* test_worker.py now passes
* Fix HerokuWorker
* Fix schedule_access_self test
* Fix ruff errors
* Initial work on Execution class
* Executions are now created and deleted when jobs are performed
* Added execution.heartbeat()
* Added a way to get execution IDs from execution registry
* Job.fetch should also support execution composite key
* Added ExecutionRegistry.get_executions()
* execution.heartbeat() now also updates StartedJobRegistry
* Added job.get_executions()
* Added worker.prepare_execution()
* Simplified start_worker function in fixtures.py
* Minor test fixes
* Black
* Fixed a failing shutdown test
* Removed Execution.create from worker.prepare_job_execution
* Fix Sentry test
* Minor fixes
* Better test coverage
* Added back worker.set_current_job_working_time()
* Reverse the order of handle_exception and handle_job_failure
* Fix SSL test
* job.delete() also deletes executions.
* Set job._status to FAILED as soon as job raises an exception
* Exclusively use execution.composite_key in StartedJobRegistry
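The composite key lets the registry track each execution of a job independently. A minimal sketch, assuming a `:`-separated format (the exact separator is an assumption):

```python
def composite_key(job_id, execution_id):
    # StartedJobRegistry stores 'job_id:execution_id' so multiple
    # executions of the same job can coexist in the registry.
    return f'{job_id}:{execution_id}'

def parse_composite_key(key):
    # Split on the first ':' to recover both components.
    job_id, _, execution_id = key.partition(':')
    return job_id, execution_id
```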
* Use codecov v3
* Format with black
* Remove print statement
* Remove Redis server 3 from tests
* Remove support for Redis server < 4
* Fixed ruff warnings
* Added tests and remove unused code
* Linting fixes
In 9adcd7e50c a change was made to make
the workhorse process into a session leader. This appears to have been
done in order to set the process group ID of the workhorse so that it
can be killed easily along with its descendants when
`Worker.kill_horse()` is called.
However, `setsid` is overkill for this purpose; it sets not only the
process group ID but also the session ID. This can cause issues for user
jobs which may rely on session IDs being less-than-unique for arbitrary
reasons.
This change switches to `setpgrp`; this sets the process group ID so
that the workhorse and its descendants can be killed *without* changing
the session ID.
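The difference can be sketched as follows; the helper names are illustrative, not RQ's actual functions:

```python
import os
import signal
import subprocess

def spawn_workhorse(args):
    # setpgrp() puts the workhorse in its own process group *without*
    # starting a new session, unlike setsid(), so user jobs that depend
    # on the session id are unaffected.
    return subprocess.Popen(args, preexec_fn=os.setpgrp)

def kill_horse(horse_pid):
    # Signalling the process group takes the workhorse's descendants
    # down along with it.
    os.killpg(os.getpgid(horse_pid), signal.SIGKILL)
```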