# WARNING: DON'T USE THIS IN PRODUCTION (yet)
# RQ — Simple job queues for Python
**RQ** is a lightweight Python library for queueing jobs and processing them
in the background with workers. It is backed by Redis.

This project is inspired by the good parts of [Celery][1], [Resque][2] and
[this snippet][3], and has been created as a lightweight alternative to the
heaviness of Celery.

[1]: http://www.celeryproject.org/
[2]: https://github.com/defunkt/resque
[3]: http://flask.pocoo.org/snippets/73/
# Putting jobs on queues
To put jobs on queues, first declare a Python function to be called in
a background process:

    def slow_fib(n):
        if n <= 1:
            return 1
        else:
            return slow_fib(n-1) + slow_fib(n-2)
Notice anything? There's nothing special about a job! Any Python function can
be put on an RQ queue, as long as the function is in a module that is
importable from the worker process. (The examples below assume it is saved in
a module called `fib.py`.)

To calculate the 36th Fibonacci number in the background, simply do this:

    from rq import Queue
    from fib import slow_fib

    # Calculate the 36th Fibonacci number in the background
    q = Queue()
    q.enqueue(slow_fib, 36)
If you want to put the work on a specific queue, simply specify its name:

    q = Queue('math')
    q.enqueue(slow_fib, 36)
You can use any queue name, so you can distribute work as flexibly as you
like. A common pattern is to name queues after priorities (e.g. `high`,
`medium`, `low`).
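
For example, here is a minimal sketch of that pattern, using only the
`Queue()` and `enqueue()` calls shown above (the queue names and arguments are
purely illustrative):

    from rq import Queue
    from fib import slow_fib

    # Urgent work goes on the 'high' queue, bulk work on the 'low' queue
    Queue('high').enqueue(slow_fib, 8)
    Queue('low').enqueue(slow_fib, 36)

Which queue gets drained first is entirely up to the worker (see the next
section).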
# The worker
**NOTE: You currently need to create the worker yourself, which is extremely
easy, but RQ will include a custom script soon that can be used to start
arbitrary workers without writing any code.**

Creating a worker daemon is also extremely easy. Create a file `worker.py`
with the following content:

    from rq import Queue, Worker
    q = Queue()
    Worker(q).work_forever()

After that, start a worker instance:

    python worker.py

This will wait for work on the default queue and start processing jobs as soon
as they arrive.
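
Workers are not tied to the default queue. As a sketch that reuses only the
`Queue` and `Worker` calls shown above, a worker dedicated to the `math` queue
from the enqueueing example would look like this:

    from rq import Queue, Worker

    # Only process jobs that were put on the 'math' queue
    Worker(Queue('math')).work_forever()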
You can even watch several queues at the same time and start processing from
them:

    from rq import Queue, Worker
    queues = [Queue(name) for name in ['high', 'normal', 'low']]
    Worker(queues).work()

This will keep working as long as there is work on any of the three queues,
giving precedence to the `high` queue on each cycle, and will quit when there
is no more work (contrast this with the earlier `worker.py` example, which
waits for new work indefinitely because it calls `Worker.work_forever()`).
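
If you want a worker that watches several queues but keeps waiting instead of
quitting, it seems natural to combine the two ideas above and call
`work_forever()` on a multi-queue worker. That exact combination is not shown
in this README, so treat the following as an untested sketch:

    from rq import Queue, Worker

    # Assumption: work_forever() honors the same queue precedence as work()
    queues = [Queue(name) for name in ['high', 'normal', 'low']]
    Worker(queues).work_forever()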
# Installation
Simply use the following command to install the latest released version:

    pip install rq

If you want the cutting edge version (that may well be broken), use this:

    pip install -e git+git@github.com:nvie/rq.git@master#egg=rq