Description
Some Celery tests were disabled a long time ago because they produced inconsistent results:
https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/tests/opentelemetry-docker-tests/tests/celery/test_celery_functional.py#L36
https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/tests/opentelemetry-docker-tests/tests/celery/test_celery_functional.py#L146
https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/tests/opentelemetry-docker-tests/tests/celery/test_celery_functional.py#L193
https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/tests/opentelemetry-docker-tests/tests/celery/test_celery_functional.py#L209
https://github.com/open-telemetry/opentelemetry-python-contrib/blob/main/tests/opentelemetry-docker-tests/tests/celery/test_celery_functional.py#L479
All of the above tests use either .delay() or .apply_async() to trigger the Celery tasks. This means a span is generated for the producer and a message is put on a queue/broker. A Celery worker then picks up the message and executes the relevant task, which generates the consumer span. This is tricky because there can be some delay between producing and consuming. Our tests do not take this delay into account: they expect both the producer and consumer spans to be available immediately after calling delay/apply_async, which naturally fails much of the time.
A solution here would be to wait until the expected number of spans arrives in the memory exporter, or to wait on a condition fulfilled by the executed task. Whichever solution we pick should have a timeout that raises an assertion error so tests don't hang forever if the message never arrives.
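A minimal sketch of the polling approach might look like the helper below. Note that `memory_exporter` in the usage comment is an assumption standing in for whatever in-memory span exporter fixture the test suite uses; this is not code from the repository.

```python
import time


def wait_for(condition, timeout=10.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Raises AssertionError on timeout so the test fails loudly instead of
    hanging forever when the message never arrives.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise AssertionError(f"condition not met within {timeout}s")


# Hypothetical usage inside a test, waiting for both the producer and
# consumer spans to be exported before asserting on them:
#
#   wait_for(lambda: len(memory_exporter.get_finished_spans()) >= 2)
#   spans = memory_exporter.get_finished_spans()
```

Polling the exporter keeps the change local to the tests; the alternative (having the task signal a condition) requires touching each task fixture, so a shared helper like this is probably the smaller change.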