fix: call patch_all before importing handler code
#598
What does this PR do?
Move the location where we call `ddtrace.patch_all`, ensuring it is always called before the handler is imported.

Motivation
Customer reported an issue (see https://datadoghq.atlassian.net/browse/SLES-2262) of not seeing spans or distributed tracing from their `confluent_kafka` calls. Here is a highly simplified version of their lambda handler:

`datadog-lambda` calls `ddtrace.patch_all()` after the customer handler is imported. To see this, look at `handler.py` and `wrapper.py`. Here, `patch_all()` is currently called when initializing the `DatadogWrapper` in `wrapper.py`. This only happens after all customer code is imported in `handler.py`. To demonstrate, here's a commented, abridged version of our `handler.py` file:

Calling `patch_all()` after the handler code is imported causes the producer to not get any instrumentation applied. We can see this by inspecting the producer's type.

💭 So wait a minute 💭, why is this only a problem now? This call to `patch_all()` was added over 5 years ago; why has no one reported this until now!?

This has to do with the nature of how ddtrace does its patching: it is individual to each contrib module patched and to how the customer uses it.
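The ordering bug can be reproduced without ddtrace or confluent_kafka at all. Here is a minimal, library-free sketch; `kafka_mod`, `TracedProducer`, and the local `patch_all()` are hypothetical stand-ins for `confluent_kafka`, ddtrace's traced class, and `ddtrace.patch_all()` respectively:

```python
import types

# Stand-in module playing the role of confluent_kafka.
kafka_mod = types.ModuleType("kafka_mod")

class Producer:
    """The original, un-traced producer class."""

class TracedProducer(Producer):
    """What patching swaps in."""

kafka_mod.Producer = Producer

def patch_all():
    # ddtrace-style monkey-patching: replace the module attribute in place.
    kafka_mod.Producer = TracedProducer

# Customer code is imported first: `from kafka_mod import Producer`
# binds a direct reference to the un-traced class...
SavedProducer = kafka_mod.Producer
producer = SavedProducer()  # ...and instantiates it immediately.

patch_all()  # runs after the import, so it is too late for `producer`

print(type(producer) is TracedProducer)      # False: instance is un-traced
print(kafka_mod.Producer is TracedProducer)  # True: attribute lookup sees the patch
```

The saved name keeps pointing at the original class, while attribute access on the module resolves to whatever the module attribute holds at access time; that asymmetry is also why the two ways of inspecting the producer's type, described next, disagree.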
Interestingly, if you inspect the producer's type in a different way, you see a different result:

Why is this? Because `confluent_kafka.Producer` accesses the producer class by reference (an attribute lookup resolved at access time), whereas `Producer` was imported and saved before patching, so the producer was initialized from the non-traced class.

Testing Guidelines
Additional Notes
Previously, `patch_all()` hadn't been called until after the handler code was fully imported. For example, this code will now produce a span for the `requests` HTTP call made at the global level during cold start.

The only problem is (and here's the ⚠️ warning) that these newly created spans will always be orphaned. This is because of the way we manage cold start tracing: during cold start we are unable to determine the trace id, because we have not yet started the root trace span, nor have we been able to receive any inbound distributed tracing headers.
It should be possible to correctly parent these new orphaned spans. However, that is outside the scope of this PR, because it would be a difficult and significant undertaking. We can cross that bridge when we get there.
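To illustrate the cold-start note above, here is a hedged sketch of the shape of handler that now gets a (for now, orphaned) span. All names are hypothetical, and the HTTP call is stubbed out so the sketch stays dependency-free; the customer's actual code would use `requests` here:

```python
# handler.py sketch (hypothetical). With patch_all() now running before
# this module is imported, module-level work during cold start is traced.

def fetch_remote_config(url):
    # Stub standing in for requests.get(url).json(). In a real handler,
    # an HTTP call here is what now produces a span during cold start.
    return {"source": url}

# Runs at import time, i.e. during cold start, before any root span
# exists and before any distributed tracing headers arrive.
CONFIG = fetch_remote_config("https://example.com/config")

def handler(event, context):
    # By invocation time the root span exists, so spans created here are
    # parented normally; only the cold-start span above is orphaned.
    return {"statusCode": 200, "config": CONFIG}
```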
Types of Changes
Check all that apply