Replace highlight tag in docs (#1600)

This commit is contained in:
Selwin Ong 2021-12-07 19:35:56 +07:00 committed by GitHub
parent 93f34c796f
commit f14dd9e2d7
GPG Key ID: 4AEE18F83AFDEB23
7 changed files with 34 additions and 34 deletions


@@ -91,12 +91,12 @@ def my_handler(job, *exc_info):
# do custom things here
```
-{% highlight python %}
+```python
from exception_handlers import foo_handler
w = Worker([q], exception_handlers=[foo_handler],
           disable_default_exception_handler=True)
-{% endhighlight %}
+```
## Chaining Exception Handlers

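Chaining is governed by each handler's return value: per RQ's docs, a falsy return stops fall-through, while `True` (or no return value) passes the exception on to the next handler. A minimal, self-contained sketch of that rule, using hypothetical handlers and a toy driver rather than RQ's actual worker code:

```python
def log_handler(job, *exc_info):
    # Log the failure, then fall through to the next handler
    print(f"job {job} raised {exc_info[0].__name__}")
    return True

def suppress_value_errors(job, *exc_info):
    # Returning False stops the chain; anything else falls through
    return not issubclass(exc_info[0], ValueError)

def apply_handlers(handlers, job, exc_info):
    """Toy driver mimicking the documented fall-through rule (not RQ code)."""
    called = []
    for handler in handlers:
        called.append(handler.__name__)
        fall_through = handler(job, *exc_info)
        if fall_through is None:
            fall_through = True  # no return value means "keep going"
        if not fall_through:
            break
    return called
```

With handlers `[log_handler, suppress_value_errors, log_handler]` and a `ValueError`, the third handler is never reached because the second returns `False`.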

@@ -138,7 +138,7 @@ Workers are registered to the system under their names, which are generated
randomly during instantiation (see [monitoring][m]). To override this default,
specify the name when starting the worker, or use the `--name` cli option.
-{% highlight python %}
+```python
from redis import Redis
from rq import Queue, Worker
@@ -147,7 +147,7 @@ queue = Queue('queue_name')
# Start a worker with a custom name
worker = Worker([queue], connection=redis, name='foo')
-{% endhighlight %}
+```
[m]: /docs/monitoring/
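Equivalently, on the command line (a sketch; `queue_name` is illustrative):

```console
$ rq worker --name foo queue_name
```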


@@ -15,43 +15,43 @@ First, run a Redis server. You can use an existing one. To put jobs on
queues, you don't have to do anything special, just define your typically
lengthy or blocking function:
-{% highlight python %}
+```python
import requests
def count_words_at_url(url):
    resp = requests.get(url)
    return len(resp.text.split())
-{% endhighlight %}
+```
Then, create an RQ queue:
-{% highlight python %}
+```python
from redis import Redis
from rq import Queue
q = Queue(connection=Redis())
-{% endhighlight %}
+```
And enqueue the function call:
-{% highlight python %}
+```python
from my_module import count_words_at_url
result = q.enqueue(count_words_at_url, 'http://nvie.com')
-{% endhighlight %}
+```
Scheduling jobs is similarly easy:
-{% highlight python %}
+```python
from datetime import datetime, timedelta

# Schedule job to run at 9:15, October 8th
job = queue.enqueue_at(datetime(2019, 10, 8, 9, 15), say_hello)
# Schedule job to be run in 10 seconds
job = queue.enqueue_in(timedelta(seconds=10), say_hello)
-{% endhighlight %}
+```
You can also ask RQ to retry failed jobs:
-{% highlight python %}
+```python
from rq import Retry
# Retry up to 3 times, failed job will be requeued immediately
@@ -59,20 +59,20 @@ queue.enqueue(say_hello, retry=Retry(max=3))
# Retry up to 3 times, with configurable intervals between retries
queue.enqueue(say_hello, retry=Retry(max=3, interval=[10, 30, 60]))
-{% endhighlight %}
+```
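The `interval` list maps to successive retries: the first retry waits 10 seconds, the second 30, the third 60. A small illustrative helper showing that lookup (an assumption sketched from the docs, not RQ's implementation), reusing the last value if retries outnumber intervals:

```python
def retry_interval(intervals, retry_number):
    # retry_number is zero-based: the first retry uses intervals[0], and once
    # the list is exhausted the last value repeats (assumption from the docs)
    if not intervals:
        return 0
    return intervals[min(retry_number, len(intervals) - 1)]
```

For example, `retry_interval([10, 30, 60], 0)` gives 10, while any retry past the third gives 60.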
### The worker
To start executing enqueued function calls in the background, start a worker
from your project's directory:
-{% highlight console %}
+```console
$ rq worker --with-scheduler
*** Listening for work on default
Got count_words_at_url('http://nvie.com') from default
Job result = 818
*** Listening for work on default
-{% endhighlight %}
+```
That's about it.


@@ -18,6 +18,6 @@ environmental variable will already do the trick.
If `settings.py` is your Django settings file (as it is by default), use this:
-{% highlight console %}
+```console
$ DJANGO_SETTINGS_MODULE=settings rq worker high default low
-{% endhighlight %}
+```


@@ -14,7 +14,7 @@ To setup RQ on [Heroku][1], first add it to your
Create a file called `run-worker.py` with the following content (assuming you
are using [Redis To Go][2] with Heroku):
-{% highlight python %}
+```python
import os
import urlparse
from redis import Redis
@@ -35,7 +35,7 @@ if __name__ == '__main__':
    with Connection(conn):
        worker = Worker(map(Queue, listen))
        worker.work()
-{% endhighlight %}
+```
Then, add the command to your `Procfile`:
@@ -43,13 +43,13 @@ Then, add the command to your `Procfile`:
Now, all you have to do is spin up a worker:
-{% highlight console %}
+```console
$ heroku scale worker=1
-{% endhighlight %}
+```
If you are using [Heroku Redis][5], you might need to change the Redis connection as follows:
-{% highlight console %}
+```python
conn = redis.Redis(
    host=host,
    password=password,
@@ -57,16 +57,16 @@ conn = redis.Redis(
    ssl=True,
    ssl_cert_reqs=None
)
-{% endhighlight %}
+```
and, to use the CLI:
-{% highlight console %}
+```console
$ rq info --config rq_conf
-{% endhighlight %}{% endhighlight %}
+```
where the `rq_conf.py` file looks like:
-{% highlight console %}
+```python
REDIS_HOST = "host"
REDIS_PORT = port
REDIS_PASSWORD = "password"
@@ -74,7 +74,7 @@ REDIS_SSL = True
REDIS_SSL_CA_CERTS = None
REDIS_DB = 0
REDIS_SSL_CERT_REQS = None
-{% endhighlight %}{% endhighlight %}
+```
## Putting RQ under foreman


@@ -13,7 +13,7 @@ your product.
RQ can be used in combination with supervisor easily. You'd typically want to
use the following supervisor settings:
-{% highlight ini %}
+```ini
[program:myworker]
; Point the command to the specific rq command you want to run.
; If you use virtualenv, be sure to point it to
@@ -38,14 +38,14 @@ stopsignal=TERM
; These are up to you
autostart=true
autorestart=true
-{% endhighlight %}
+```
### Conda environments
[Conda][2] virtualenvs can be used for RQ jobs which require non-Python
dependencies. You can use a similar approach as with regular virtualenvs.
-{% highlight ini %}
+```ini
[program:myworker]
; Point the command to the specific rq command you want to run.
; For conda virtual environments, install RQ into your env.
@@ -70,7 +70,7 @@ stopsignal=TERM
; These are up to you
autostart=true
autorestart=true
-{% endhighlight %}
+```
[1]: http://supervisord.org/
[2]: https://conda.io/docs/


@@ -12,7 +12,7 @@ To run multiple workers under systemd, you'll first need to create a unit file.
We can name this file `rqworker@.service` and put it in the `/etc/systemd/system`
directory (the location may differ between distributions).
-{% highlight ini %}
+```ini
[Unit]
Description=RQ Worker Number %i
After=network.target
@@ -31,7 +31,7 @@ Restart=always
[Install]
WantedBy=multi-user.target
-{% endhighlight %}
+```
If your unit file is properly installed, you should be able to start workers by
invoking `systemctl start rqworker@1.service`, `systemctl start rqworker@2.service`
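With the template unit in place, starting and enabling a couple of instances can be sketched as (illustrative commands; `daemon-reload` makes systemd pick up the new unit file):

```console
$ sudo systemctl daemon-reload
$ sudo systemctl start rqworker@1.service rqworker@2.service
$ sudo systemctl enable rqworker@1.service rqworker@2.service
```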