- protobuf versions newer than 4.25.5, when used by google-cloud-pubsub,
raise the following error:
site-packages/google/protobuf/internal/well_known_types.py", line 443, in FromTimedelta
raise AttributeError(
AttributeError: Fail to convert to Duration.
Expected a timedelta like object got str: 'str' object has no attribute 'seconds'
Fix this by restricting the allowed package versions (see the sketch after this entry).
- Added unit test to validate pubsub and protobuf compatibility
- Enabled google-cloud-pubsub package versions bump
Co-authored-by: Haim Daniel <haimdaniel@gmail.com>
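A minimal reproduction sketch of the incompatibility this pin guards against (assuming protobuf and google-cloud-pubsub are installed): `Duration.FromTimedelta` only accepts timedelta-like objects, so a plain string triggers the `AttributeError` quoted above on the affected protobuf versions.

```python
from datetime import timedelta

from google.protobuf.duration_pb2 import Duration

duration = Duration()
duration.FromTimedelta(timedelta(seconds=30))  # a timedelta-like object: OK
assert duration.seconds == 30

# On the affected protobuf versions this raises the AttributeError quoted
# above, since a str has no .seconds attribute:
duration.FromTimedelta('30s')
```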
* feature(urllib3): add urllib3 client
* test(urllib3): test urllib3 client
* test(urllib3): update http test for urllib3
* test(urllib3): use urllib3 client instead of curl
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* style(urllib3): remove unused imports
* style(urllib3): fix pre-commit errors
* ci(urllib3): remove pycurl dependency
* docs(urllib3): add docs
* style(urllib3): fix failing gh-workflow py3.8
* style(urllib3): add mention of ProxyManager
* style(urllib3): fix pre-commit issues
* style(pycurl): remove curl-related code
* feat(urllib3): add missing request features (header, auth, ssl, proxy, redirects); see the sketch after this entry
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix(urllib3): improve styling
* test(urllib3): add new tests
* fix(urllib3): fix request auth
* fix(aws): validate certificate on request
* style(): add missing exports
* feat(aws): add ssl certificate verification from boto
* feat(urllib): try to use certifi.where() if request.ca_certs are not provided
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* ci(pydocstyle): add missing docstring in public class
* test(urllib3): improve test case
* ci(pydocstyle): fix multi-line docstring summary should start at the first line
* feat(urllib3): remove assert_hostname
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* test(boto): add test for get_cert_path returning .pem file path
* test(urllib3): add test for _get_pool_key_parts method
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Asif Saif Uddin <auvipy@gmail.com>
Co-authored-by: Tomer Nosrati <tomer.nosrati@gmail.com>
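Not kombu's actual client code, but a minimal sketch of the urllib3 primitives the new client builds on: pooled connections, certificate verification with a `certifi.where()` fallback, basic-auth headers, and redirect handling; the URL and credentials are placeholders.

```python
import certifi
import urllib3

# Pooled client with TLS verification; certifi.where() serves as the
# fallback CA bundle when no ca_certs are supplied.
pool = urllib3.PoolManager(
    cert_reqs='CERT_REQUIRED',
    ca_certs=certifi.where(),
)

# Basic auth travels as headers; urllib3.ProxyManager takes the place of
# PoolManager when requests must be routed through a proxy.
headers = urllib3.make_headers(basic_auth='user:password')
response = pool.request('GET', 'https://example.com', headers=headers,
                        redirect=True)
print(response.status)
```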
* Add support for Google Pub/Sub as transport broker (usage sketch after this entry)
* Add tests
* Add docs
* flake8
* flake8
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix future import
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Add missing test requirements
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Add monitoring dependency
* Fix test for python3.8
* Better mocking of Google's monitoring API
* Flake8
* Add refdoc
* Add extra url to workaround pypy grpcio
* Add extra index url in tox for grpcio/pypy support
* Revert "Add extra url to workaround pypy grpcio"
This reverts commit dfde4d523c.
* pin grpcio version to match extra_index wheel
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Reduce poll calls if qos denies msg rx
---------
Co-authored-by: Haim Daniel <haimdaniel@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tomer Nosrati <tomer.nosrati@gmail.com>
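A hedged usage sketch of the new transport: the `gcpubsub://projects/<project-id>` URL form follows the transport's reference docs, while the project and queue names are placeholders; credentials are resolved through Google's standard application-default chain.

```python
from kombu import Connection

with Connection('gcpubsub://projects/my-project') as conn:
    queue = conn.SimpleQueue('my-queue')
    queue.put({'hello': 'pubsub'})    # publish to the Pub/Sub topic
    message = queue.get(timeout=10)   # consume from the subscription
    message.ack()
    queue.close()
```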
* - Ensure that version unpacking doesn't prevent connecting to some message-broker versions (see the sketch after this entry).
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* - Updated the test in this version
- Improved the code
* - Removed leftover merge-conflict comments from the code
* - Resolved the linting problem.
* - Docstrings are now in accordance with `pydocstyle`.
* - Resolved the `"_unpack_version" has incompatible type "int"` problem in the `_unpack_version` method
- Reverted `"` back to `'` (apparently the `black` formatter my company uses is not compatible with `pydocstyle`, and my VSCode integration had reformatted kombu's repository)
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
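A hypothetical sketch of the defensive unpacking this entry describes (not kombu's exact code): tolerate broker version strings with extra, missing, or non-numeric components instead of failing the connection.

```python
import re


def _unpack_version(version_string, default=0, expected_len=3):
    """Parse a broker version string into a fixed-length tuple of ints."""
    numbers = []
    for part in version_string.split('.')[:expected_len]:
        match = re.match(r'\d+', part)  # leading digits only: '0-rc1' -> 0
        numbers.append(int(match.group()) if match else default)
    # Pad short versions such as '4.0' so tuple unpacking never fails.
    numbers += [default] * (expected_len - len(numbers))
    return tuple(numbers)


assert _unpack_version('3.13.0') == (3, 13, 0)
assert _unpack_version('4.0') == (4, 0, 0)
assert _unpack_version('3.13.0-rc1') == (3, 13, 0)
```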
* SQS: wrap logic to resolve queue URL into a reusable method
* Create new `_resolve_queue_url` method for reusability.
`_new_queue` method retrieves the queue URL from cache
or fetches it from SQS, updating the cache if necessary.
`_delete` method needs to resolve the queue URL in order to
delete the queue from SQS (next commit).
Both methods will be able to reuse the same functionality by
calling the `_resolve_queue_url` method (sketched below).
* Introduce DoesNotExistQueueException for easier error handling.
`_new_queue` method is responsible for creating a new queue when
it doesn't exist, utilizing the new exception for clarity.
* Unit test coverage for `_resolve_queue_url` method.
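A hedged sketch of the resolve-then-cache pattern described above; the `_queue_cache` attribute, `sqs()` client accessor, and error-code check are assumptions for illustration, not kombu's exact code.

```python
from botocore.exceptions import ClientError


class DoesNotExistQueueException(Exception):
    """Raised when the named queue does not exist in SQS."""


def _resolve_queue_url(self, queue_name):
    """Return the queue URL from cache, or fetch and cache it from SQS."""
    try:
        return self._queue_cache[queue_name]
    except KeyError:
        try:
            resp = self.sqs().get_queue_url(QueueName=queue_name)
        except ClientError as exc:
            code = exc.response['Error']['Code']
            if code == 'AWS.SimpleQueueService.NonExistentQueue':
                raise DoesNotExistQueueException(
                    f'Queue {queue_name} does not exist') from exc
            raise
        url = self._queue_cache[queue_name] = resp['QueueUrl']
        return url
```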
* SQS: Fix missing queue deletion in Channel._delete
* Add call to `delete_queue` using sqs client.
`_delete` method is expected to delete the specified queue when
called. Previously, this functionality was missing, which has
now been corrected (see the sketch after this entry).
The method raises a `DoesNotExistQueueException` if the specified
queue name doesn’t exist.
* Update unit tests with new assertion and mock to verify queue deletion.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
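A hedged sketch of the corrected `_delete`, reusing `_resolve_queue_url` from the previous entry; the cache handling is illustrative.

```python
def _delete(self, queue_name, *args, **kwargs):
    """Delete the queue from SQS and drop it from the local URL cache."""
    queue_url = self._resolve_queue_url(queue_name)  # raises if unknown
    self.sqs().delete_queue(QueueUrl=queue_url)      # boto3 SQS client call
    self._queue_cache.pop(queue_name, None)
```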
* Redelivered messages should respect the original priority (see the sketch after this entry)
* update restore message test to check priority
* flake8
* add integration tests
* also add integration test for mongodb
* flake8
* temporarily removing python 3.9 from CI due to unrelated failures
* Update .github/workflows/ci.yaml
---------
Co-authored-by: Tomer Nosrati <tomer.nosrati@gmail.com>
Co-authored-by: Asif Saif Uddin <auvipy@gmail.com>
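A hedged sketch of the fix's intent (names are illustrative, not kombu's exact internals): when an unacked message is restored for redelivery, republish it with the priority carried in its original properties instead of the default.

```python
def _restore(self, message):
    """Requeue an unacknowledged message, preserving its priority."""
    priority = message.properties.get('priority', 0)
    self._put(
        message.delivery_info['routing_key'],
        message.body,
        priority=priority,  # previously the original priority was dropped
    )
```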
* Add more logs
* Launch _on_connection_disconnect in Connection only if the channel was added properly to the poller
* Prepare a test which checks the flow of the channel removal from the poller
* Change the comment
* ConnectionPool can't be used after .resize(..., reset=True) (resolves #2018; see the sketch after this entry)
* Update kombu/resource.py
---------
Co-authored-by: Asif Saif Uddin <auvipy@gmail.com>
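A minimal reproduction sketch of the resolved issue, using the in-memory transport: after `resize(..., reset=True)` the pool should still hand out usable connections.

```python
from kombu import Connection

pool = Connection('memory://').Pool(limit=2)
pool.resize(4, reset=True)       # previously left the pool unusable
conn = pool.acquire(block=True)  # succeeds after the fix
conn.release()
```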
When the broker sends a Basic.cancel notification, py-amqp's default behavior
is to raise a ConsumerCancelled exception. It is then treated as an error,
even if the connection and channel are still operational, and the connection
is closed.
Basic.cancel may be sent by the broker when a queue is deleted or when a
replicated queue's leader changes. py-amqp's channel.basic_consume allows
defining a callback function for this event. It may be useful to register
this callback from kombu when consuming from a queue (see the sketch after
this entry).
(cherry picked from commit e1fa168ace)
Signed-off-by: julien.cosmao <julien.cosmao@ovhcloud.com>
Co-authored-by: julien.cosmao <julien.cosmao@ovhcloud.com>
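A hedged sketch of registering such a callback directly with py-amqp: the `on_cancel` keyword follows py-amqp's `basic_consume`, while the host, queue name, and handlers are placeholders.

```python
from amqp import Connection

with Connection('localhost:5672') as connection:
    channel = connection.channel()

    def on_cancel(consumer_tag):
        # Broker-sent Basic.cancel: queue deleted or replica leader changed.
        print(f'consumer {consumer_tag} cancelled by broker')

    channel.basic_consume(
        queue='replicated-queue',
        callback=lambda msg: channel.basic_ack(msg.delivery_tag),
        on_cancel=on_cancel,
    )
    connection.drain_events()
```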
* Fix partial closure error when instantiating redis pool for a global_keyprefix sentinel connection (see the sketch after this entry)
* add tests for redis sentinel connection with global_keyprefix setting
* StrictRedis is deprecated, use Redis class
* add some resource cleanup
* add some resource cleanup
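A hedged usage sketch of the configuration this fix targets: a Sentinel connection combined with the `global_keyprefix` transport option; hosts, master name, and prefix are placeholders.

```python
from kombu import Connection

conn = Connection(
    'sentinel://localhost:26379;sentinel://localhost:26380',
    transport_options={
        'master_name': 'mymaster',
        'global_keyprefix': 'myprefix_',  # applied to every Redis key
    },
)
```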
* Use the correct protocol for SQS requests
TL;DR - The use of boto3 in #1759 resulted in relying on blocking
(synchronous) HTTP requests, which caused the performance issue reported
in #1783.
`kombu` previously used to craft AWS requests manually, as explained in
detail in #1726, which resulted in an outage when botocore temporarily
changed the default protocol to JSON (before rolling back due to the
impact on celery and airflow). To fix the issue, I submitted #1759,
which changes `kombu` to use `boto3` instead of manually crafting AWS
requests. This way when boto3 changes the default protocol, kombu won't
be impacted.
While working on #1759, I did extensive debugging to understand the
multi-threading nature of kombu. What I discovered is that there isn't
actual multi-threading in the true sense of the word, but an event
loop that runs on the same thread and process and orchestrates the
communication with SQS. As such, it didn't appear to me that there was
anything to worry about in my change, and the testing I did didn't
uncover any issue. However, it turns out that while kombu's event loop
doesn't have actual multi-threading, its [reliance on
pycurl](https://github.com/celery/kombu/blob/main/kombu/asynchronous/http/curl.py#L48)
(and thus libcurl) meant that the requests to AWS were being made
asynchronously. On the other hand, boto3 requests are always made
synchronously, i.e. they are blocking requests.
The above meant that my change introduced blocking on the event loop of
kombu. This is fine in most cases, since requests to SQS are pretty
fast. However, when long-polling is used, a call to SQS's
ReceiveMessage can last up to 20 seconds (depending on the user's
configuration).
To solve this problem, I rolled back my earlier changes and, instead, to
address the issue reported in #1726, changed the `AsyncSQSConnection`
class such that it crafts either a `query` or a `json` request depending
on the protocol used by the SQS client (see the sketch after this
entry). Thus, when botocore changes the default protocol of SQS to JSON,
kombu won't be impacted, since it crafts its own request and, after my
change, uses a hard-coded protocol based on the crafted requests.
This shouldn't be the final solution; it is more of a workaround that
does the job for now. The final solution should be to rely completely on
boto3 for any communication with AWS, while ensuring that all requests
are async in nature (non-blocking). This, however, is a fundamental
change that requires a lot of testing, in particular performance
testing.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Update kombu/asynchronous/aws/sqs/connection.py
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Asif Saif Uddin <auvipy@gmail.com>
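A hedged sketch of the approach described above (illustrative of the idea, not kombu's exact code): inspect the botocore service model of the SQS client and craft a matching `query` or `json` request.

```python
import json
from urllib.parse import urlencode


def craft_request(sqs_client, operation, params):
    """Build headers and body matching the client's wire protocol."""
    protocol = sqs_client.meta.service_model.protocol  # 'query' or 'json'
    if protocol == 'json':
        headers = {
            'Content-Type': 'application/x-amz-json-1.0',
            'X-Amz-Target': f'AmazonSQS.{operation}',
        }
        body = json.dumps(params)
    else:
        headers = {'Content-Type': 'application/x-www-form-urlencoded'}
        body = urlencode({'Action': operation, **params})
    return headers, body
```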
* added Python 3.12 to the CI
* try to make kafka work on py3.12
* skip kafka for the time being as it seems not working with py3.12 yet
* using assert_called_once()
* Create a lock on cached_property if not present
This fixes #1804 (breakage caused by use of undocumented
implementation details of functools.cached_property) by ensuring a lock
is always present on cached_property attributes, which is required to
safely support setting and deleting cached values in addition to
computing them on demand (see the sketch after this entry).
* Add a unit test for cached_property locking
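A hedged sketch of the idea (kombu's actual helper lives in `kombu.utils.objects`): subclass `functools.cached_property` and guarantee a lock attribute exists, since newer CPython versions dropped the internal lock that earlier code relied on.

```python
from functools import cached_property as _cached_property
from threading import RLock


class cached_property(_cached_property):
    """cached_property that always carries a lock for set/delete support."""

    def __init__(self, fget):
        super().__init__(fget)
        if not hasattr(self, 'lock'):
            self.lock = RLock()  # absent on Python >= 3.12
```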
* azure servicebus: use DefaultAzureCredential in documentation (see the sketch after this entry)
* azure servicebus: only use connection string when using sas key
* azure servicebus: add two small tests for parsing of connection string
* azure servicebus: fix lint issues
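A hedged sketch mirroring the documented change; the URL form for selecting `DefaultAzureCredential` by name is an assumption based on this entry, and `mynamespace` is a placeholder. A SAS-key connection string is only built when a SAS policy and key are supplied.

```python
from kombu import Connection

# Credential-based auth instead of embedding a SAS key in the URL:
conn = Connection('azureservicebus://DefaultAzureCredential@mynamespace')
```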
* [fix #1726] Use boto3 for SQS async requests
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
The redis-py 4.5.2 release changes the UnixDomainSocketConnection class
so that it now inherits from AbstractConnection:
https://github.com/redis/redis-py/releases/tag/v4.5.2
This patch makes sure that the health_check_interval parameter is
checked on the __init__ method of the main class and also on its bases,
so it doesn't fail with the newer version of redis-py (see the sketch
below).
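A hedged sketch of the compatibility check described above (illustrative, not the patch verbatim): walk the class's MRO so the parameter is found whether it is declared on `UnixDomainSocketConnection` itself or inherited from `AbstractConnection`.

```python
import inspect


def accepts_parameter(cls, name):
    """True if `name` is an __init__ parameter of cls or any of its bases."""
    for klass in cls.__mro__:
        init = klass.__dict__.get('__init__')
        if init is not None and name in inspect.signature(init).parameters:
            return True
    return False


# e.g. accepts_parameter(UnixDomainSocketConnection, 'health_check_interval')
```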
* Fix #1618: avoid re-fetching queue URL when we already have it (see the sketch after this entry)
`_get_from_sqs` was unnecessarily calling `get_queue_url` every
time even though the only place which calls `_get_from_sqs`
(that is `_get_async`) actually already knows the queue URL.
This change avoids hundreds of `GetQueueUrl` AWS API calls per hour
when using this SQS backend with celery.
Also, `connection` is set by the one-and-only caller (and `queue` is
actually the queue name string now anyway, so it could never have had
`.connection`), so remove the None default and the unused fallback code.
* Clarify that `_new_queue` returns the queue URL
It seems that prior to 129a9e4ed0 it returned a queue
object, but this is no longer the case, so update comments and
variable names accordingly to make it clearer.
Also remove the incorrect fallback which cannot
be correct any more given the return value has to
be the queue URL which must be a string.
* Unit test coverage for SQS async codepath
This key code path (which as far as I can see is
the main route when using celery with SQS) was
missing test coverage.
This test adds coverage for:
`_get_bulk_async` -> `_get_async` -> `_get_from_sqs`
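A hedged sketch of the change's shape (method names follow the commit text; the wiring is illustrative): `_get_async` already knows the queue URL, so it is passed straight through instead of issuing another `GetQueueUrl` call.

```python
def _get_async(self, queue_name, count=1, callback=None):
    # _new_queue returns the (cached) queue URL, so no extra lookup occurs.
    queue_url = self._new_queue(queue_name)
    return self._get_from_sqs(queue_url, count=count, callback=callback)


def _get_from_sqs(self, queue_url, count=1, callback=None):
    # No GetQueueUrl round-trip here any more; the URL arrives as-is.
    connection = self.asynsqs(queue=queue_url)
    return connection.receive_message(
        queue_url, number_messages=count, callback=callback)
```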
* Main
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix
* Trigger Build
* Fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix
* Fix
* noqas
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* remove unused noqa
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* re-add import
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fixes
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* use pytest-freezer (#1683)
* Main
* Trigger Build
* Fixes
* remove
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* lint
* Lint
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Asif Saif Uddin <auvipy@gmail.com>