* Execute custom handlers when abandoned job is found
* Make exception_handlers optional
* Run exception handlers before checking retry
* Test abandoned job handler is called
* Improve test coverage
* Fix handler order
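These handlers use RQ's standard exception-handler signature; a minimal sketch of registering a custom handler (the handler body is illustrative):
```python
from redis import Redis
from rq import Queue, Worker

def report_abandoned(job, exc_type, exc_value, traceback):
    # Illustrative handler: log the failure. Returning True (or None)
    # falls through to the next handler; returning False stops the chain.
    print(f"job {job.id} failed: {exc_value!r}")
    return True

redis = Redis()
# exception_handlers is now optional, per the commits above.
worker = Worker([Queue(connection=redis)], connection=redis,
                exception_handlers=[report_abandoned])
```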
* Batch.py initial commit
* Add method to refresh batch status
* Clean up Redis key properties
* Remove persist_job method
* Execute refresh method when fetching batch
* Add batch_id to job
* Add option to batch jobs with enqueue_many
* Add set_batch_id method
* Handle batch_ttl when job is started
* Renew TTL on all jobs in batch when a job finishes
* Use fetch_jobs method in refresh()
* Remove batch TTL
During the worker maintenance task, the worker loops through batches and
deletes expired jobs. When all of a batch's jobs have expired, the batch
itself is deleted.
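A rough sketch of that maintenance pass; `Batch` and its attributes are the ones being built up in these commits, so the exact names here are assumptions rather than RQ's implementation:
```python
from rq.job import Job

def clean_batches(connection):
    # Hypothetical maintenance pass: walk every batch, drop jobs whose
    # Redis hash has expired, and delete batches that end up empty.
    for batch in Batch.all(connection):                     # assumed API
        job_ids = [jid.decode() for jid in connection.smembers(batch.key)]
        expired = [jid for jid in job_ids
                   if not connection.exists(Job.key_for(jid))]
        if expired:
            connection.srem(batch.key, *expired)
        if not connection.scard(batch.key):
            connection.delete(batch.key)                    # batch is empty
```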
* Fix buggy connection
* Raise batch error in fetch method
* Fix fetch connection arg
* Add testing for batch
* Remove unused import
* Update batch documentation
* Add batch ID to job persistence test
* Test that empty batch is removed from batch list
* Make default batch ID full uuid4 to match Job
* Add method to return batch key
* Add all method to return iterable of batches
* Revert changes to queue.py
* Separate batch creation from job enqueueing.
Batches can be created by passing an array of jobs.
* Fix bug with stopped callback in enqueue_many
* Rename delete_expired_jobs batch method
* Use exists to identify expired jobs
* Remove jobs attribute from batch
Jobs have to be explicitly fetched using the get_jobs method
* Don't add batch_id to job in _add_jobs method
This should be done when the job is created, before it is saved to Redis.
* Use cleanup method to determine if batch should be deleted
* Add get_jobs method
This method will return an array of jobs belonging to the batch.
* Update create method to not allow jobs arg
Jobs should be added to batches in the Queue.enqueue_many function.
* Add batch_id to job create method
* Delete set_batch_id method
The batch ID should be set atomically when the job is created
* Add batch argument to queue.enqueue_many
This will be how jobs can be batched. The batch ID is added to each job
created by enqueue_many, and then those jobs are all added to the batch
in Redis using the batch's _add_jobs method
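A sketch of the enqueue path this describes; `Batch.create` and the `batch=` keyword are inferred from the commit messages (later commits rename Batch to Group and move enqueueing onto the class itself):
```python
from redis import Redis
from rq import Queue

redis = Redis()
queue = Queue(connection=redis)
batch = Batch.create(connection=redis)   # hypothetical constructor

# Each job gets batch.id at creation time; the jobs are then added to the
# batch in Redis via the batch's _add_jobs method, inside one pipeline.
jobs = queue.enqueue_many(
    [Queue.prepare_data(print, args=("hello",)) for _ in range(3)],
    batch=batch,
)
```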
* Update batch tests to use new enqueue_many arg
* Fix batch_id for non-batched jobs
* Use different pipeline name
* Don't use fetch method when enqueuing jobs
If the batch is empty, fetch will delete it from Redis
* Test enqueuing batched jobs with string ID
* Update batch documentation
* Remove unused variables
* Fix expired job tracking bug
* Add delete_job method
* Use delete_job method on batch
* Pass multiple jobs into srem command
* Add multiple keys to exists call
* Pipeline _add_jobs call
* Fix missing job_id arg
* Move clean_batch_registry to Batch class
* Remove unused variables
* Fix missing dependency deletion
I accidentally deleted this earlier
* Move batch enqueueing to batch class
* Update batch docs
* Add test for batched jobs with str queue arg
* Don't delete batch in cleanup
* Change Batch to Group
* Import EnqueueData for type hint
* Sort imports
* Remove local config file
* Rename clean_group_registries method
* Update feature version
* Fix group constant names
* Only allow Queue in enqueue_many
* Move cleanup logic to classmethod
* Remove str group test
* Remove unused import
* Make pipeline mandatory in _add_jobs method
* Rename expired job ids array
* Add slow decorator
* Assert job 1 data in group
* Rename id arg to name
* Only run cleanup when jobs are retrieved
* Remove job_ids variable
* Fix group name arg
* Clean groups before fetching
* Use uuid4().hex for shorter id
* Move cleanup logic to instance method
* Use all method when cleaning group registry
* Check if group exists instead of catching error
* Cleanup expired jobs
* Apply black formatting
* Fix black formatting
* Pipeline group cleanup
* Fix pipeline call to exists
* Add pyenv version file
* Remove group cleanup after job completes
* Use existing pipeline
* Remove unneeded pipeline check
* Fix empty call
* Test group __repr__ method
* Remove unnecessary pipeline assignment
* Remove unused delete method
* Fix pipeline name
* Remove unnecessary conditional block
* Changed utcnow() to now()
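The underlying datetime change, for reference: `datetime.utcnow()` returns a naive datetime and is deprecated as of Python 3.12, so the timezone-aware form is used instead.
```python
from datetime import datetime, timezone

# Before: naive UTC timestamp, deprecated in Python 3.12.
# now = datetime.utcnow()
# After: timezone-aware equivalent.
now = datetime.now(timezone.utc)
```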
Currently, with Redis engine 7.1 and `redis` client 5.0.1, trying to enqueue a new job yields
```text
AttributeError: 'float' object has no attribute 'split'
```
This is caused by the fact that `redis_conn.info("server")["redis_version"]` returns a `float`, not a `str`.
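A minimal sketch of the shape of the fix, coercing the reported version to `str` before parsing; the helper name and tuple format here are illustrative assumptions.
```python
def get_redis_server_version(connection) -> tuple:
    # Some servers report redis_version as a float (e.g. 7.1); coerce to
    # str before splitting so .split() cannot raise AttributeError.
    version = str(connection.info("server")["redis_version"])
    return tuple(int(part) for part in version.split(".")[:3])
```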
* Add result blocking
* Dev tidyup
* Skip XREAD test on old redis versions
* Lint
* Clarify that the latest result is returned.
* Fix job test ordering
* Remove test latency hack
* Readability improvements
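The blocking behaviour added above is exposed through the `timeout` parameter of `Job.latest_result()`, which waits on the job's result stream (XREAD, hence the version-gated test). A short usage sketch:
```python
from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())
job = queue.enqueue(sum, [1, 2, 3])

# Blocks for up to 60 seconds waiting for a result to arrive; returns the
# latest Result for the job, or None if the timeout elapses first.
result = job.latest_result(timeout=60)
```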
* Removed pop_connection from Worker.find_by_key()
* Removed push_connection from worker.work()
* Remove push_connection from workers.py and cli
* Remove get_current_connection() from worker._set_connection()
* Fix failing tests
* Removed the use of get_current_connection()
* WIP removing get_connection()
* Fixed tests in test_dependencies.py
* Fixed a few test files
* test_worker.py now passes
* Fix HerokuWorker
* Fix schedule_access_self test
* Fix ruff errors
* Initial work on Execution class
* Executions are now created and deleted when jobs are performed
* Added execution.heartbeat()
* Added a way to get execution IDs from execution registry
* Job.fetch should also support execution composite key
* Added ExecutionRegistry.get_executions()
* execution.heartbeat() now also updates StartedJobRegistry
* Added job.get_executions()
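A hedged sketch of the Execution surface assembled so far; the exact signatures are inferred from the commit messages:
```python
from redis import Redis
from rq.job import Job

redis = Redis()
job = Job.fetch("my-job-id", connection=redis)

# Each run of the job is tracked as an Execution; the composite key
# combines the job ID and execution ID and is also accepted by Job.fetch.
for execution in job.get_executions():
    print(execution.composite_key)
```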
* Added worker.prepare_execution()
* Simplified start_worker function in fixtures.py
* Minor test fixes
* Black
* Fixed a failing shutdown test
* Removed Execution.create from worker.prepare_job_execution
* Fix Sentry test
* Minor fixes
* Better test coverage
* Re-added worker.set_current_job_working_time()
* Reverse the order of handle_exception and handle_job_failure
* Fix SSL test
* job.delete() also deletes executions.
* Set job._status to FAILED as soon as job raises an exception
* Exclusively use execution.composite_key in StartedJobRegistry
* Use codecov v3
* Format with black
* Remove print statement
* Remove Redis server 3 from tests
* Remove support for Redis server < 4
* Fixed ruff warnings
* Added tests and remove unused code
* Linting fixes
In 9adcd7e50c, a change was made to make
the workhorse process into a session leader. This appears to have been
done in order to set the process group ID of the workhorse so that it
can be killed easily along with its descendants when
`Worker.kill_horse()` is called.
However, `setsid` is overkill for this purpose; it sets not only the
process group ID but also the session ID. This can cause issues for user
jobs which may rely on session IDs being less-than-unique for arbitrary
reasons.
This change switches to `setpgrp`; this sets the process group ID so
that the workhorse and its descendants can be killed *without* changing
the session ID.
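A minimal sketch of the difference (hypothetical, not RQ's exact fork code):
```python
import os
import signal

def fork_horse() -> int:
    pid = os.fork()
    if pid == 0:
        # Workhorse: become a process-group leader *without* starting a
        # new session, so the session ID stays shared with the worker.
        os.setpgrp()  # previously os.setsid(), which also changed the session ID
        # ... perform the job, then exit ...
        os._exit(0)
    return pid

def kill_horse(pid: int) -> None:
    # Signal the whole process group so the horse's descendants die too.
    os.killpg(os.getpgid(pid), signal.SIGKILL)
```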