* Execution objects now return UTC timestamps
* execution.job should return a full Job instance
* worker.maintain_heartbeats() should extend execution TTL
* Don't use cached_property since it's not supported on Python 3.7
* Fixed type hint of execution._job
* add info about running docs in README
* update jekyll config to fix error
* update docs to clarify misc timeout values
* update readme
* update argument from default_worker_ttl -> worker_ttl
* Fix black formatting
* argument should still be respected until it's deprecated
* Updated CHANGES.md
---------
Co-authored-by: Sean Villars <sean@setpoint.io>
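The heartbeat bullet above can be sketched as follows (function and key names are illustrative, not RQ's actual internals): refreshing a worker's heartbeat should also extend the TTL of every execution that worker owns.

```python
def maintain_heartbeats(worker_key, execution_keys, set_ttl, heartbeat_ttl=90):
    """Refresh the worker's heartbeat and extend its executions' TTLs too."""
    set_ttl(worker_key, heartbeat_ttl)
    for key in execution_keys:
        # Without this step, an in-flight execution could expire mid-run
        # even though its worker is still alive and heartbeating.
        set_ttl(key, heartbeat_ttl)
```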
This change checks for Redis connection socket timeouts that are too
short for blocking operations such as BLPOP, and adjusts them to be at
least the expected operation timeout.
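A minimal sketch of that check (the helper name and buffer size are illustrative, not RQ's actual code): a socket timeout shorter than the timeout passed to BLPOP would make the client give up before the server replies, so it is extended.

```python
def ensure_socket_timeout(socket_timeout, blocking_timeout, buffer=10):
    """Return a socket timeout long enough for a blocking Redis command.

    If the configured socket timeout would fire before the blocking
    command's own timeout, extend it with a small buffer.
    """
    if socket_timeout is not None and socket_timeout <= blocking_timeout:
        return blocking_timeout + buffer
    return socket_timeout
```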
* Add new deferred_ttl attribute to jobs
* Add DeferredJobRegistry to cleanup
* Use normal ttl for deferred jobs as well
* Test that jobs landing in deferred queue get a TTL
* Pass pipeline in job cleanup
* Remove cleanup call
We pass a `ttl` of -1, which does not do anything.
* Pass exc_info to add
The add implementation overwrites it, so it won't get lost this way.
* Remove extraneous save call
The add function already saves the job for us, so there is no need to
save it twice.
* Tune cleanup function description
* Test cleanup also works for deleted jobs
* Replace testconn with connection
---------
Co-authored-by: Raymond Guo <raymond.guo@databricks.com>
* Add test to list queue with only deferred jobs
Signed-off-by: Simó Albert i Beltran <sim6@probeta.net>
* Register queue also for deferred job
Without this change, queues initialized with only deferred jobs are not
listed by `rq.Queue.all()`.
Signed-off-by: Simó Albert i Beltran <sim6@probeta.net>
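The fix can be illustrated with a toy registry (not RQ's real data model): the queue name has to be recorded even when the only job enqueued is deferred, otherwise `all()` cannot discover the queue.

```python
class QueueRegistry:
    """Toy stand-in for the set of queue names that Queue.all() scans."""

    def __init__(self):
        self.known_queues = set()

    def enqueue(self, queue_name, deferred=False):
        # Register the queue name unconditionally; before the fix,
        # deferred jobs skipped this step and left the queue invisible.
        self.known_queues.add(queue_name)

    def all(self):
        return sorted(self.known_queues)
```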
---------
Signed-off-by: Simó Albert i Beltran <sim6@probeta.net>
* fix docs: use delay to enqueue and use a function declared in a separate file
Signed-off-by: Mehdi Nassim KHODJA <18899702+naskio@users.noreply.github.com>
* fix docs: use delay only
Signed-off-by: Mehdi Nassim KHODJA <18899702+naskio@users.noreply.github.com>
---------
Signed-off-by: Mehdi Nassim KHODJA <18899702+naskio@users.noreply.github.com>
* Fix the "Fork me on GitHub" ribbon
The image URL is broken. Replace it with the "official" URL found in
<https://github.blog/2008-12-19-github-ribbons/>.
* Replace the orange ribbon with a red one
* Execute custom handlers when abandoned job is found
* Make exception_handlers optional
* Run exception handlers before checking retry
* Test abandoned job handler is called
* Improve test coverage
* Fix handler order
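The ordering fix above can be sketched like this (the handler signature and job shape are hypothetical): custom exception handlers run first, and only then is the retry decision made.

```python
def handle_abandoned_job(job, exception_handlers=None):
    """Run custom handlers for an abandoned job, then decide on retry."""
    exc_string = "Job was abandoned by its worker"
    for handler in exception_handlers or []:
        handler(job, exc_string)            # handlers run first...
    if job.get("retries_left", 0) > 0:      # ...retry is checked afterwards
        job["retries_left"] -= 1
        return "retried"
    return "failed"
```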
* Batch.py initial commit
* Add method to refresh batch status
* Clean up Redis key properties
* Remove persist_job method
* Execute refresh method when fetching batch
* Add batch_id to job
* Add option to batch jobs with enqueue_many
* Add set_batch_id method
* Handle batch_ttl when job is started
* Renew ttl on all jobs in batch when a job finishes
* Use fetch_jobs method in refresh()
* Remove batch TTL
During worker maintenance task, worker will loop through batches and
delete expired jobs.
When all jobs are expired, the batch is deleted.
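A sketch of that maintenance pass, under hypothetical names: partition the batch's job IDs by whether their Redis keys still exist, and flag the batch for deletion once nothing survives.

```python
def cleanup_batch(job_ids, job_exists):
    """Split a batch's jobs into surviving and expired; flag empty batches."""
    surviving = [jid for jid in job_ids if job_exists(jid)]
    expired = [jid for jid in job_ids if not job_exists(jid)]
    return surviving, expired, not surviving  # last value: delete batch?
```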
* Fix buggy connection
* Raise batch error in fetch method
* Fix fetch connection arg
* Add testing for batch
* Remove unused import
* Update batch documentation
* Add batch ID to job persistence test
* Test that empty batch is removed from batch list
* Make default batch ID full uuid4 to match Job
* Add method to return batch key
* Add all method to return iterable of batches
* Revert changes to queue.py
* Separate batch creation from job enqueueing.
Batches can be created by passing an array of jobs.
* Fix bug with stopped callback in enqueue_many
* Rename delete_expired_jobs batch method
* Use exists to identify expired jobs
* Remove jobs attribute from batch
Jobs have to be explicitly fetched using the get_jobs method
* Don't add batch_id to job in _add_jobs method
This should be done when the job is created, before it is saved to Redis.
* Use cleanup method to determine if batch should be deleted
* Add get_jobs method
This method will return an array of jobs belonging to the batch.
* Update create method to not allow jobs arg
Jobs should be added to batches in the Queue.enqueue_many
function
* Add batch_id to job create method
* Delete set_batch_id method
The batch ID should be set atomically when the job is created
* Add batch argument to queue.enqueue_many
This will be how jobs can be batched. The batch ID is added to each job
created by enqueue_many, and then those jobs are all added to the batch
in Redis using the batch's _add_jobs method
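That flow could look roughly like this (dict-based jobs and these names are illustrative only, not RQ's API): the batch ID is stamped onto each job at creation time, and the collected job IDs are then handed to the batch in one call.

```python
import uuid

def enqueue_many(job_payloads, batch_id=None):
    """Stamp every job with a shared batch ID and return the grouped IDs."""
    if batch_id is None:
        batch_id = uuid.uuid4().hex  # shorter hex ID, as in a commit above
    jobs = []
    for payload in job_payloads:
        job = dict(payload, batch_id=batch_id)  # set at creation time
        jobs.append(job)
    # The caller would pass these IDs to the batch's single add call.
    return batch_id, [job["id"] for job in jobs]
```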
* Update batch tests to use new enqueue_many arg
* Fix batch_id for non-batched jobs
* Use different pipeline name
* Don't use fetch method when enqueuing jobs
If the batch is empty, fetch will delete it from Redis
* Test enqueuing batched jobs with string ID
* Update batch documentation
* Remove unused variables
* Fix expired job tracking bug
* Add delete_job method
* Use delete_job method on batch
* Pass multiple jobs into srem command
* Add multiple keys to exists call
* Pipeline _add_jobs call
* Fix missing job_id arg
* Move clean_batch_registry to Batch class
* Remove unused variables
* Fix missing dependency deletion
I accidentally deleted this earlier
* Move batch enqueueing to batch class
* Update batch docs
* Add test for batched jobs with str queue arg
* Don't delete batch in cleanup
* Change Batch to Group
* Import EnqueueData for type hint
* Sort imports
* Remove local config file
* Rename clean_group_registries method
* Update feature version
* Fix group constant names
* Only allow Queue in enqueue many
* Move cleanup logic to classmethod
* Remove str group test
* Remove unused import
* Make pipeline mandatory in _add_jobs method
* Rename expired job ids array
* Add slow decorator
* Assert job 1 data in group
* Rename id arg to name
* Only run cleanup when jobs are retrieved
* Remove job_ids variable
* Fix group name arg
* Clean groups before fetching
* Use uuid4().hex for shorter id
* Move cleanup logic to instance method
* Use all method when cleaning group registry
* Check if group exists instead of catching error
* Cleanup expired jobs
* Apply black formatting
* Fix black formatting
* Pipeline group cleanup
* Fix pipeline call to exists
* Add pyenv version file
* Remove group cleanup after job completes
* Use existing pipeline
* Remove unneeded pipeline check
* Fix empty call
* Test group __repr__ method
* Remove unnecessary pipeline assignment
* Remove unused delete method
* Fix pipeline name
* Remove unnecessary conditional block
* Changed `utcnow()` to `now()`
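The motivation is that naive `datetime.utcnow()` is deprecated since Python 3.12; the stdlib-recommended replacement is a timezone-aware `datetime.now(timezone.utc)`:

```python
from datetime import datetime, timezone

# utcnow() returns a naive datetime (tzinfo=None) and is deprecated;
# now(timezone.utc) returns an aware datetime in UTC.
aware_now = datetime.now(timezone.utc)
assert aware_now.tzinfo is timezone.utc
```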