Closed
Labels
documentation: Improvements or additions to documentation
Description
Confirm this is an issue with the Python library and not an underlying OpenAI API
- This is an issue with the Python library
Describe the bug
The docs here say that the following should be possible:
```python
import openai
import asyncio


async def test_streaming():
    client = openai.OpenAI()
    async with client.beta.chat.completions.stream(
        model='gpt-4o-2024-08-06',
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me a joke."},
        ],
    ) as stream:
        async for event in stream:
            if event.type == 'content.delta':
                print(event.delta, flush=True, end='')
            elif event.type == 'content.done':
                print("\nContent generation complete.")
                break


# Run the streaming test
asyncio.run(test_streaming())
```
However, this gives:
TypeError: 'ChatCompletionStreamManager' object does not support the asynchronous context manager protocol
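The error occurs because `async with` requires the object to implement `__aenter__`/`__aexit__`, which a synchronous-only context manager does not. A minimal sketch reproducing the same class of failure, independent of the OpenAI library (the `SyncOnlyManager` class is hypothetical, for illustration only):

```python
import asyncio


class SyncOnlyManager:
    """Implements only the synchronous context manager protocol."""

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False


async def main() -> str:
    try:
        # 'async with' looks up __aenter__/__aexit__, which this class lacks.
        # Python 3.11+ raises TypeError; older versions raise AttributeError.
        async with SyncOnlyManager():
            pass
    except (TypeError, AttributeError) as err:
        return str(err)
    return "no error"


msg = asyncio.run(main())
print(msg)
```

On Python 3.11+ this prints a message of the same shape as the one reported above, which suggests the stream manager returned by the sync client simply lacks the async protocol.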
When I run it without async, it works fine, i.e.:
```python
import openai


def test_streaming():
    client = openai.OpenAI()
    with client.beta.chat.completions.stream(
        model='gpt-4o-2024-08-06',
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me a joke."},
        ],
    ) as stream:
        for event in stream:
            if event.type == 'content.delta':
                print(event.delta, flush=True, end='')
            elif event.type == 'content.done':
                print("\nContent generation complete.")
                break


# Run the streaming test
test_streaming()
```
To Reproduce
Run the code snippet above, which uses the beta async chat completion streaming API (and should handle the new Pydantic parsing).
Code snippets
OS
macOS
Python version
Python 3.11-3.12
Library version
1.40.4