# WARNING: DON'T USE THIS IN PRODUCTION (yet)
# RQ: Simple job queues for Python
**RQ** is a lightweight Python library for queueing jobs and processing them in
workers. It is backed by Redis.
# Putting jobs on queues
To put jobs on queues, first declare a Python function to be called in
a background process:

    def slow_fib(n):
        if n <= 1:
            return 1
        else:
            return slow_fib(n-1) + slow_fib(n-2)
Notice anything? There's nothing special about a job! Any Python function can
be put on an RQ queue, as long as the function is in a module that is
importable from the worker process.
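For example, the enqueue snippet below does `from fib import slow_fib`, so a
minimal layout would be to save the function above in its own file named
`fib.py`, importable both from the code that enqueues jobs and from the worker
(the file name here is only chosen to match that import):

    # fib.py -- must be importable both where jobs are enqueued and by the
    # worker process that will execute them
    def slow_fib(n):
        if n <= 1:
            return 1
        else:
            return slow_fib(n-1) + slow_fib(n-2)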
To calculate the 36th Fibonacci number in the background, simply do this:

    from rq import Queue
    from fib import slow_fib

    # Calculate the 36th Fibonacci number in the background
    q = Queue()
    q.enqueue(slow_fib, 36)
If you want to put the work on a specific queue, simply specify its name:

    q = Queue('math')
    q.enqueue(slow_fib, 36)
You can use any queue name, so you can distribute work as flexibly as you
like. A common pattern is to name your queues after priorities (e.g.
`high`, `medium`, `low`).
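For instance, sticking with `slow_fib`, urgent and bulk work could go onto
differently named queues; a small sketch (the queue names and arguments below
are purely illustrative):

    from rq import Queue
    from fib import slow_fib

    # Urgent work goes onto the 'high' queue, bulk work onto 'low'; which
    # queues get served first is up to the workers that watch them
    Queue('high').enqueue(slow_fib, 36)
    Queue('low').enqueue(slow_fib, 40)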
# The worker
**NOTE: You currently need to create the worker yourself, which is extremely
easy, but RQ will include a custom script soon that can be used to start
arbitrary workers without writing any code.**

Creating a worker daemon is also extremely easy. Create a file `worker.py`
with the following content:

    from rq import Queue, Worker

    q = Queue()
    Worker(q).work()
After that, start a worker instance:

    python worker.py
This will wait for work on the default queue and start processing it as soon as
messages arrive.

You can even watch several queues at the same time and start processing from
them:

    from rq import Queue, Worker

    queues = map(Queue, ['high', 'normal', 'low'])
    Worker(queues).work_burst()
This will keep popping jobs off the given queues, giving precedence to the
`high` queue, then `normal`, and so on. It will return when there are no more
jobs left (contrast this with the previous example using `Worker.work()`, which
will never return, since it keeps waiting for new work to arrive).
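If you want to watch multiple queues in priority order but keep the worker
alive, the two examples above can be combined; a minimal sketch, assuming
`Worker.work()` accepts a list of queues just like `work_burst()` does:

    from rq import Queue, Worker

    # Same priority order as before, but block and wait for new work
    # instead of returning when all queues are empty
    queues = map(Queue, ['high', 'normal', 'low'])
    Worker(queues).work()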
# Installation
Simply use the following command to install the latest released version:

    pip install rq
If you want the cutting edge version (that may well be broken), use this:

    pip install -e git+git@github.com:nvie/rq.git@master#egg=rq
# Project History
This project has been inspired by the good parts of [Celery][1], [Resque][2]
and [this snippet][3], and has been created as a lightweight alternative to the
heaviness of Celery or other AMQP-based queueing implementations.

[1]: http://www.celeryproject.org/
[2]: https://github.com/defunkt/resque
[3]: http://flask.pocoo.org/snippets/73/

Project values:

* Simplicity over completeness
* Fail-safety over performance
* Runtime insight over static configuration upfront

This means that, to use RQ, you don't have to set up any queues up front, and
you don't have to specify any channels, exchanges, or whatnot. You can put
jobs onto any queue you want, at runtime. As soon as you enqueue a job onto a
queue that does not exist yet, the queue is created on the fly.
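A quick illustration of that, reusing `slow_fib` from above (the queue name is
arbitrary and has never been declared anywhere):

    from rq import Queue
    from fib import slow_fib

    # Simply using the name is enough; the queue springs into existence
    # the moment the job is enqueued onto it
    q = Queue('nightly-reports')
    q.enqueue(slow_fib, 36)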