Commit b90948a

author: Liudmila Molkova (committed)
Add manual sample, add no-content tests
1 parent 515bc43 commit b90948a

File tree: 14 files changed, +493 −2 lines changed

instrumentation-genai/opentelemetry-instrumentation-openai-v2/README.rst

Lines changed: 54 additions & 2 deletions

@@ -19,8 +19,60 @@ package to your requirements.

     pip install opentelemetry-instrumentation-openai-v2

-If you don't have an OpenAI application, yet, try our `example <example>`_
-which only needs a valid OpenAI API key.
+If you don't have an OpenAI application yet, try our `examples <examples>`_,
+which only need a valid OpenAI API key.
+
+Check out the `zero-code example <examples/zero-code>`_ for a quick start.
+
+Usage
+-----
+
+This section describes how to set up OpenAI instrumentation if you're setting
+up OpenTelemetry manually. Check out the `manual example <examples/manual>`_
+for more details.
+
+Instrumenting all clients
+*************************
+
+When using the instrumentor, all clients automatically trace OpenAI chat
+completion operations. You can also optionally capture prompts and completions
+as log events.
+
+Make sure to configure OpenTelemetry tracing, logging, and events to capture
+all telemetry emitted by the instrumentation.
+
+.. code-block:: python
+
+    from openai import OpenAI
+
+    from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor
+
+    OpenAIInstrumentor().instrument()
+
+    client = OpenAI()
+    response = client.chat.completions.create(
+        model="gpt-4o-mini",
+        messages=[
+            {"role": "user", "content": "Write a short poem on open telemetry."},
+        ],
+    )
+
+Enabling message content
+************************
+
+Message content, such as the contents of the prompt, completion, function
+arguments, and return values, is not captured by default. To capture message
+content as log events, set the environment variable
+`OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` to `true`.
+
+Uninstrument
+************
+
+To uninstrument clients, call the uninstrument method:
+
+.. code-block:: python
+
+    from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor
+
+    OpenAIInstrumentor().instrument()
+    # ...
+
+    # Uninstrument all clients
+    OpenAIInstrumentor().uninstrument()

 References
 ----------
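The README above gates prompt/completion capture on the `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` environment variable. As a rough illustration of how such a boolean env-var gate is typically read (a hypothetical helper, not the instrumentation's actual code):

```python
import os


def content_capture_enabled(env=os.environ):
    # Hypothetical helper, not the instrumentation's real implementation:
    # treat only the string "true" (case-insensitive) as enabling capture,
    # so unset or malformed values fail safe to "no content captured".
    value = env.get("OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT", "false")
    return value.strip().lower() == "true"


print(content_capture_enabled({"OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT": "True"}))
print(content_capture_enabled({}))
```

Defaulting to `false` matches the README's statement that message content is not captured by default.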
Lines changed: 15 additions & 0 deletions

@@ -0,0 +1,15 @@

# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment to use Ollama instead of OpenAI
# OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_API_KEY=unused
# CHAT_MODEL=qwen2.5:0.5b

# Uncomment and change to your OTLP endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
# OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
OTEL_SERVICE_NAME=opentelemetry-python-openai

# Change to 'false' to hide prompt and completion content
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
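The `.env` file above is consumed by `dotenv run` in the manual example. As a minimal sketch of what a dotenv-style loader does (hypothetical and far simpler than python-dotenv itself, which also handles quoting and interpolation):

```python
def parse_dotenv(text):
    # Hypothetical minimal parser: skip blank lines and '#' comments,
    # split each remaining line on the first '=' into key and value.
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:
            env[key.strip()] = value.strip()
    return env


sample = """\
# Change to 'false' to hide prompt and completion content
OTEL_SERVICE_NAME=opentelemetry-python-openai
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
"""
print(parse_dotenv(sample))
```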
Lines changed: 38 additions & 0 deletions

@@ -0,0 +1,38 @@

OpenTelemetry OpenAI Instrumentation Example
============================================

This is an example of how to instrument OpenAI calls.

When `main.py <main.py>`_ is run, it exports traces and logs to an
OTLP-compatible endpoint. Traces include details such as the model used and
the duration of the chat request. Logs capture the chat request and the
generated response, providing a comprehensive view of the performance and
behavior of your OpenAI requests.

Setup
-----

Minimally, update the `.env <.env>`_ file with your "OPENAI_API_KEY". An
OTLP-compatible endpoint should be listening for traces and logs on
http://localhost:4318. If not, update "OTEL_EXPORTER_OTLP_ENDPOINT" as well.

Next, set up a virtual environment like this:

::

    python3 -m venv .venv
    source .venv/bin/activate
    pip install "python-dotenv[cli]"
    pip install -r requirements.txt

Run
---

Run the example like this:

::

    dotenv run -- python main.py

You should see a poem generated by OpenAI while traces and logs export to your
configured observability tool.
Lines changed: 48 additions & 0 deletions

@@ -0,0 +1,48 @@

import os

from openai import OpenAI

from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

# NOTE: OpenTelemetry Python Logs and Events APIs are in beta
from opentelemetry import trace, _logs, _events
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk._logs import LoggerProvider
from opentelemetry.sdk._events import EventLoggerProvider

from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

# configure tracing
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter())
)

# configure logging and events
_logs.set_logger_provider(LoggerProvider())
_logs.get_logger_provider().add_log_record_processor(
    BatchLogRecordProcessor(OTLPLogExporter())
)
_events.set_event_logger_provider(EventLoggerProvider())

# instrument OpenAI
OpenAIInstrumentor().instrument()


def main():
    client = OpenAI()
    chat_completion = client.chat.completions.create(
        model=os.getenv("CHAT_MODEL", "gpt-4o-mini"),
        messages=[
            {
                "role": "user",
                "content": "Write a short poem on OpenTelemetry.",
            },
        ],
    )
    print(chat_completion.choices[0].message.content)


if __name__ == "__main__":
    main()
Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@

openai~=1.54.4

opentelemetry-sdk~=1.28.2
opentelemetry-exporter-otlp-proto-http~=1.28.2
opentelemetry-instrumentation-openai-v2~=2.0b0
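The pins above use pip's `~=` (compatible release) operator. A rough sketch of the rule for plain dotted-integer versions (hypothetical; a real resolver uses `packaging.specifiers` and also handles pre-release tags like the `b0` in `2.0b0`):

```python
def compatible_release(version, spec):
    # Hypothetical sketch of '~=': '~=1.28.2' means >= 1.28.2 and == 1.28.*.
    # Only handles dotted integers; pre-release tags are out of scope here.
    v = [int(x) for x in version.split(".")]
    s = [int(x) for x in spec.split(".")]
    return v >= s and v[: len(s) - 1] == s[:-1]


print(compatible_release("1.28.5", "1.28.2"))
print(compatible_release("1.29.0", "1.28.2"))
```

So `opentelemetry-sdk~=1.28.2` accepts patch upgrades within 1.28 but not a jump to 1.29.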
Lines changed: 132 additions & 0 deletions

@@ -0,0 +1,132 @@

interactions:
- request:
    body: |-
      {
        "messages": [
          {
            "role": "user",
            "content": "Say this is a test"
          }
        ],
        "model": "gpt-4o-mini",
        "stream": false
      }
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      authorization:
      - Bearer test_openai_api_key
      connection:
      - keep-alive
      content-length:
      - '106'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - AsyncOpenAI/Python 1.26.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - async:asyncio
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.26.0
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.5
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: |-
        {
          "id": "chatcmpl-ASv9R2E7Yhb2e7bj4Xl0qm9s3J42Y",
          "object": "chat.completion",
          "created": 1731456237,
          "model": "gpt-4o-mini-2024-07-18",
          "choices": [
            {
              "index": 0,
              "message": {
                "role": "assistant",
                "content": "This is a test. How can I assist you further?",
                "refusal": null
              },
              "logprobs": null,
              "finish_reason": "stop"
            }
          ],
          "usage": {
            "prompt_tokens": 12,
            "completion_tokens": 12,
            "total_tokens": 24,
            "prompt_tokens_details": {
              "cached_tokens": 0,
              "audio_tokens": 0
            },
            "completion_tokens_details": {
              "reasoning_tokens": 0,
              "audio_tokens": 0,
              "accepted_prediction_tokens": 0,
              "rejected_prediction_tokens": 0
            }
          },
          "system_fingerprint": "fp_0ba0d124f1"
        }
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8e1a80679a8311a6-MRS
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Wed, 13 Nov 2024 00:03:58 GMT
      Server:
      - cloudflare
      Set-Cookie: test_set_cookie
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      content-length:
      - '796'
      openai-organization: test_openai_org_id
      openai-processing-ms:
      - '359'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - '30000'
      x-ratelimit-limit-tokens:
      - '150000000'
      x-ratelimit-remaining-requests:
      - '29999'
      x-ratelimit-remaining-tokens:
      - '149999978'
      x-ratelimit-reset-requests:
      - 2ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_41ea134c1fc450d4ca4cf8d0c6a7c53a
    status:
      code: 200
      message: OK
version: 1
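The cassette above is a recorded request/response pair for the new no-content tests: at replay time, a recording tool decides whether an outgoing request matches a recorded one and, if so, serves the stored response instead of hitting the network. A simplified sketch of that matching idea (hypothetical, not vcrpy's actual matcher):

```python
import json


def request_matches(recorded, method, uri, body):
    # Hypothetical matcher: compare HTTP method, URI, and the parsed JSON
    # body, so key order and whitespace in the body don't affect matching.
    return (
        recorded["method"] == method
        and recorded["uri"] == uri
        and json.loads(recorded["body"]) == json.loads(body)
    )


recorded = {
    "method": "POST",
    "uri": "https://api.openai.com/v1/chat/completions",
    "body": '{"messages": [{"role": "user", "content": "Say this is a test"}], '
            '"model": "gpt-4o-mini", "stream": false}',
}

# Same body with keys in a different order still matches.
print(request_matches(
    recorded,
    "POST",
    "https://api.openai.com/v1/chat/completions",
    '{"model": "gpt-4o-mini", "stream": false, '
    '"messages": [{"role": "user", "content": "Say this is a test"}]}',
))
```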
