The options to show only partial info (only workers, only queues) are
kept, but are no longer the default. This makes the default invocation
(without args) the most useful one.
The default refresh interval is 2.5 seconds.
This fixes #42.
Connections can now be set explicitly on Queues, Workers, and Jobs.
Jobs that are implicitly created by Queue or Worker API calls now
inherit their creator's connection.
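For example (a minimal sketch; `count_words` and its import path are
hypothetical stand-ins for any function the worker could import):

>>> from redis import Redis
>>> from rq import Queue
>>> from mymodule import count_words
>>> my_conn = Redis()
>>> q = Queue(connection=my_conn)
>>> job = q.enqueue(count_words, 'Hello, world!')
>>> job.connection == my_conn
True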
For all RQ object instances created from now on, the "current"
connection is used if none is passed in explicitly. The "current"
connection is thus held on to at creation time and won't change for the
lifetime of the object.
Effectively, this means that, given a default Redis connection, if you
create a queue Q1, then push another Redis connection onto the
connection stack, and then create Q2, Q1 refers to a queue on the first
connection and Q2 to a queue on the second connection (see the examples
below).
This is much clearer than it used to be.
Also, I've removed the `use_redis()` call, which was poorly named.
Instead, some new alternatives for connection management now exist.
You can push/pop connections now:
>>> my_conn = Redis()
>>> push_connection(my_conn)
>>> q = Queue()
>>> q.connection == my_conn
True
>>> pop_connection() == my_conn
True
Also, you can stack them syntactically:
>>> conn1 = Redis()
>>> conn2 = Redis('example.org', 1234)
>>> with Connection(conn1):
...     q = Queue()
...     with Connection(conn2):
...         q2 = Queue()
...     q3 = Queue()
>>> q.connection == conn1
True
>>> q2.connection == conn2
True
>>> q3.connection == conn1
True
Or, if you only require a single connection to Redis (for most uses):
>>> use_connection(Redis())
This aids unpickling in the case of a function that isn't importable
from the worker's runtime. The unpickling will now (almost) always
succeed; an ImportError is raised only later on, when the function is
actually accessed (and thus implicitly imported).
The end result is a job on the failed queue, with exc_info describing
the import error, which is tremendously useful.
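The idea can be sketched roughly as follows (a simplified illustration,
not RQ's actual code; `import_attribute` and the class layout are
assumptions made for this sketch):

    import importlib

    def import_attribute(name):
        """Resolve a dotted path like 'mymodule.count_words'."""
        module_name, attribute = name.rsplit('.', 1)
        module = importlib.import_module(module_name)  # ImportError raised here
        return getattr(module, attribute)

    class Job(object):
        def __init__(self, func_name, args=None):
            # Only the dotted path string is stored, so unpickling the job
            # never requires the function's module to be importable.
            self.func_name = func_name
            self.args = args or ()

        @property
        def func(self):
            # The ImportError, if any, is raised here, when the function is
            # actually accessed, and can then be recorded as exc_info for
            # the failed job.
            return import_attribute(self.func_name)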
This protects against JobTimeoutExceptions being raised immediately
after the job body has been (successfully) executed.
JobTimeoutExceptions still pass through naturally, like any other
exception, to be handled by the default exception handler that writes
failed jobs to the failed queue.
Timeouts are therefore reported like any other exception.
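For illustration, the guard can be sketched as a SIGALRM-based context
manager (a simplified sketch, not necessarily RQ's exact implementation;
the name `death_penalty` is made up for this example):

    import signal
    from contextlib import contextmanager

    class JobTimeoutException(Exception):
        """Raised when a job exceeds its allotted wall-clock time."""

    @contextmanager
    def death_penalty(seconds):
        def raise_timeout(signum, frame):
            raise JobTimeoutException('Job exceeded maximum timeout value '
                                      '(%d seconds).' % seconds)

        signal.signal(signal.SIGALRM, raise_timeout)
        signal.alarm(seconds)
        try:
            # A JobTimeoutException raised while the job body is still
            # running propagates out of this block like any other exception
            # and reaches the default exception handler.
            yield
        finally:
            # Cancel the pending alarm as soon as the body returns, so a
            # timeout can no longer fire after the job has already finished
            # successfully.
            signal.alarm(0)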