Closed
Labels: bug (Something isn't working)
Description
System Info
M1 MacBook running the 0.22 Docker image
Information
- The official example scripts
- My own modified scripts
🐛 Describe the bug
Response branching, as discussed in the Agents vs OpenAI Responses API doc, is not working. Running the following script produces the error: `__main__:257 core::server: Error executing endpoint route='/v1/openai/v1/responses' method='post': 'OpenAIResponseOutputMessageWebSearchToolCall' object has no attribute 'content'`
```python
import os
import uuid
from io import BytesIO

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(
    base_url="http://0.0.0.0:8321",
    provider_data={
        "tavily_search_api_key": os.environ["TAVILY_SEARCH_API_KEY"],
        "groq_api_key": os.environ["GROQ_API_KEY"],
    },
)
model_id = "groq/meta-llama/llama-4-maverick-17b-128e-instruct"

# Delete any existing vector stores
vector_stores_to_delete = [v.id for v in client.vector_stores.list()]
for del_vs_id in vector_stores_to_delete:
    client.vector_stores.delete(vector_store_id=del_vs_id)
print("Deleted all existing vector stores")

# Create a dummy file for the file search
dummy_file_content = "Popular sorting implementations include quicksort, mergesort, heapsort, and insertion sort. Bubble sort and selection sort are used for small or simple datasets. Counting sort, radix sort, and bucket sort handle special numeric cases efficiently without comparisons. Timsort, a hybrid of merge and insertion sort, is widely used in Python and Java. Shell sort, comb sort, cocktail sort, and others are less common but exist for special scenarios."

with BytesIO(dummy_file_content.encode()) as file_buffer:
    file_buffer.name = "sorting_algorithms.txt"
    create_file_response = client.files.create(file=file_buffer, purpose="assistants")
    print(create_file_response)
    file_id = create_file_response.id

# Create a vector store with the dummy file
vector_store = client.vector_stores.create(
    name="sorting_docs",
    file_ids=[file_id],
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",
    embedding_dimension=384,  # must match the embedding model
    provider_id="faiss",
)

print("Listing available vector stores:")
for vs in client.vector_stores.list():
    print(f"- {vs.name} (ID: {vs.id})")

# First response: use web search for the latest algorithms
response1 = client.responses.create(
    model=model_id,
    input="Search for the latest efficient sorting algorithms and their performance comparisons",
    tools=[
        {
            "type": "web_search",
        },
    ],  # web search for current information
)
print(f"Web search results: {response1.output[-1].content[0].text}")

# Continue the conversation: switch to file search over local docs
response2 = client.responses.create(
    model=model_id,
    input="Now search my uploaded files for existing sorting implementations",
    tools=[
        {  # Responses API built-in tool
            "type": "file_search",
            "vector_store_ids": [vector_store.id],  # use the created vector store ID
        },
    ],
    previous_response_id=response1.id,
)
print(f"File search results: {response2.output_text}")
```
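For reference, the failure mode looks like code iterating over `response.output` and assuming every item has a `content` attribute, while web-search tool-call items carry no such field. A minimal sketch of the defensive pattern (the classes below are hypothetical stand-ins for the real output item types, not the actual server models):

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for Responses API output item types.
@dataclass
class WebSearchToolCall:
    # Note: deliberately has no 'content' attribute, like the real tool-call item.
    type: str = "web_search_call"

@dataclass
class OutputText:
    text: str = ""
    type: str = "output_text"

@dataclass
class Message:
    content: list = field(default_factory=list)
    type: str = "message"

def collect_text(output_items):
    """Gather text only from items that actually carry content blocks."""
    parts = []
    for item in output_items:
        # getattr with a default skips items (e.g. tool calls) lacking 'content'
        for block in getattr(item, "content", []) or []:
            if getattr(block, "text", None):
                parts.append(block.text)
    return " ".join(parts)

items = [WebSearchToolCall(), Message(content=[OutputText(text="quicksort wins")])]
print(collect_text(items))  # tool-call item is skipped instead of raising
```

With this guard, a mixed output list (tool calls plus messages) no longer raises `AttributeError`, which is presumably what the server-side history replay would need when branching from a response that contains a web-search call.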
Error logs
INFO 2025-09-20 00:06:03,917 console_span_processor:28 telemetry: 00:06:03.917 [START] /v1/openai/v1/files
INFO 2025-09-20 00:06:03,940 console_span_processor:39 telemetry: 00:06:03.919 [END] LocalfsFilesImpl.openai_upload_file [StatusCode.OK] (0.07ms)
INFO 2025-09-20 00:06:03,942 console_span_processor:48 telemetry: output: {'object': 'file', 'id': 'file-10ececb2f1234dce803436ba78a718fe',
'bytes': 446, 'created_at': 1758326763, 'expires_at': 1789862763, 'filename': 'sorting_algorithms.txt', 'purpose': 'assistants'}
INFO 2025-09-20 00:06:03,960 console_span_processor:39 telemetry: 00:06:03.944 [END] /v1/openai/v1/files [StatusCode.OK] (27.20ms)
INFO 2025-09-20 00:06:03,963 console_span_processor:48 telemetry: raw_path: /v1/openai/v1/files
INFO 2025-09-20 00:06:03,964 console_span_processor:62 telemetry: 00:06:03.804 [WARN] Could not read or log request body for POST
/v1/openai/v1/files: Stream consumed
INFO 2025-09-20 00:06:03,966 console_span_processor:62 telemetry: 00:06:03.822 [INFO] 127.0.0.1:53656 - "POST /v1/openai/v1/files HTTP/1.1" 200
INFO 2025-09-20 00:06:03,968 console_span_processor:28 telemetry: 00:06:03.968 [START] /v1/openai/v1/vector_stores
INFO 2025-09-20 00:06:03,984 console_span_processor:39 telemetry: 00:06:03.971 [END] FaissVectorIOAdapter.register_vector_db [StatusCode.OK]
(0.05ms)
INFO 2025-09-20 00:06:03,986 console_span_processor:48 telemetry: output:
INFO 2025-09-20 00:06:03,996 console_span_processor:39 telemetry: 00:06:03.988 [END] FaissVectorIOAdapter.register_vector_db [StatusCode.OK]
(0.07ms)
INFO 2025-09-20 00:06:03,998 console_span_processor:48 telemetry: output:
INFO 2025-09-20 00:06:04,014 console_span_processor:39 telemetry: 00:06:04.004 [END] VectorDBsRoutingTable.register_vector_db [StatusCode.OK]
(33.32ms)
INFO 2025-09-20 00:06:04,017 console_span_processor:48 telemetry: output: {'identifier': 'vs_69afb313-1c2d-4115-a9f4-8d31f4ff1ef3',
'provider_resource_id': 'vs_69afb313-1c2d-4115-a9f4-8d31f4ff1ef3', 'provider_id': 'faiss', 'type': 'vector_db', 'owner': None, 'source':
'via_register_api', 'embedding_model': 'sentence-transformers/all-MiniLM-L6-v2', 'embedding_dimension': 384, 'vector_db_name':
'sorting_docs'}
INFO 2025-09-20 00:06:04,020 console_span_processor:62 telemetry: 00:06:03.829 [WARN] VectorDB is being deprecated in future releases in favor of
VectorStore. Please migrate your usage accordingly.
INFO 2025-09-20 00:06:04,022 console_span_processor:62 telemetry: 00:06:03.876 [WARN] Ignoring vector_db_id
vs_69afb313-1c2d-4115-a9f4-8d31f4ff1ef3 and using vector_store_id vs_69afb313-1c2d-4115-a9f4-8d31f4ff1ef3 instead. Setting VectorDB
vs_69afb313-1c2d-4115-a9f4-8d31f4ff1ef3 to VectorDB.vector_db_name
INFO 2025-09-20 00:06:04,034 console_span_processor:39 telemetry: 00:06:04.025 [END] FaissVectorIOAdapter.register_vector_db [StatusCode.OK]
(0.07ms)
INFO 2025-09-20 00:06:04,037 console_span_processor:48 telemetry: output:
INFO 2025-09-20 00:06:04,048 console_span_processor:39 telemetry: 00:06:04.039 [END] LocalfsFilesImpl.openai_retrieve_file [StatusCode.OK]
(0.07ms)
INFO 2025-09-20 00:06:04,050 console_span_processor:48 telemetry: output: {'object': 'file', 'id': 'file-10ececb2f1234dce803436ba78a718fe',
'bytes': 446, 'created_at': 1758326763, 'expires_at': 1789862763, 'filename': 'sorting_algorithms.txt', 'purpose': 'assistants'}
INFO 2025-09-20 00:06:04,062 console_span_processor:39 telemetry: 00:06:04.052 [END] LocalfsFilesImpl.openai_retrieve_file_content [StatusCode.OK]
(0.11ms)
INFO 2025-09-20 00:06:04,065 console_span_processor:48 telemetry: output: <starlette.responses.Response object at 0x7cd941413fb0>
INFO 2025-09-20 00:06:04,079 console_span_processor:39 telemetry: 00:06:04.067 [END] ModelsRoutingTable.get_model [StatusCode.OK] (0.05ms)
INFO 2025-09-20 00:06:04,081 console_span_processor:48 telemetry: output: {'identifier': 'sentence-transformers/all-MiniLM-L6-v2',
'provider_resource_id': 'all-MiniLM-L6-v2', 'provider_id': 'sentence-transformers', 'type': 'model', 'owner': None, 'source':
'listed_from_provider', 'metadata': {'embedding_dimension': 384}, 'model_type': 'embedding'}
INFO 2025-09-20 00:06:04,093 console_span_processor:39 telemetry: 00:06:04.084 [END] ModelsRoutingTable.get_provider_impl [StatusCode.OK] (0.07ms)
INFO 2025-09-20 00:06:04,095 console_span_processor:48 telemetry: output:
<llama_stack.providers.inline.inference.sentence_transformers.sentence_transformers.SentenceTransformersInferenceImpl object at
0x7cdb00446780>
INFO 2025-09-20 00:06:04,117 console_span_processor:39 telemetry: 00:06:04.098 [END] ModelsRoutingTable.get_model [StatusCode.OK] (0.11ms)
INFO 2025-09-20 00:06:04,119 console_span_processor:48 telemetry: output: {'identifier': 'sentence-transformers/all-MiniLM-L6-v2',
'provider_resource_id': 'all-MiniLM-L6-v2', 'provider_id': 'sentence-transformers', 'type': 'model', 'owner': None, 'source':
'listed_from_provider', 'metadata': {'embedding_dimension': 384}, 'model_type': 'embedding'}
INFO 2025-09-20 00:06:04,139 console_span_processor:39 telemetry: 00:06:04.122 [END] InferenceRouter.openai_embeddings [StatusCode.OK] (54.80ms)
INFO 2025-09-20 00:06:04,141 console_span_processor:48 telemetry: output: {'object': 'list', 'data': [{'object': 'embedding', 'embedding':
[0.013035329058766365, 0.013034510426223278, 0.04281679540872574, -0.06716399639844894, -0.08346692472696304, -0.09370012581348419,
-0.008743984624743462, 0.04956777021288872, -0.01121377944946289, 0.04812122881412506, 0.0377383716404438, 0.08820166438817978,
0.0005559420096687973, 0.054395001381635666, -0.08139821141958237, -0.041209060698747635, -0.015076945535838604, 0.03144806995987892,
0.03862414509057999, -0.12718142569065094, -0.08182334154844284, 0.03055885061621666, -0.04483885318040848, -0.03194671496748924,
0.05055531486868858, 0.015735194087028503, -0.03981253504753113, -0.027688469737768173, 0.003523128107190132, -0.05985233932733536,
-0.03468659520149231, 0.07749001681804657, 0.05056149140000343, 0.02759459987282753, -0.10215723514556885, -0.0019045071676373482,
-0.062740258872509, -0.04131557047367096, -0.025496220216155052, 0.011524871923029423, -0.086610347032547, 0.031985051929950714,
0.04112327471375...
INFO 2025-09-20 00:06:04,158 console_span_processor:39 telemetry: 00:06:04.144 [END] FaissVectorIOAdapter.insert_chunks [StatusCode.OK] (77.46ms)
INFO 2025-09-20 00:06:04,160 console_span_processor:48 telemetry: output:
INFO 2025-09-20 00:06:04,162 uvicorn.access:473 uncategorized: 127.0.0.1:53656 - "POST /v1/openai/v1/vector_stores HTTP/1.1" 200
INFO 2025-09-20 00:06:04,170 uvicorn.access:473 uncategorized: 127.0.0.1:53656 - "GET /v1/openai/v1/vector_stores HTTP/1.1" 200
INFO 2025-09-20 00:06:04,175 console_span_processor:39 telemetry: 00:06:04.164 [END] VectorIORouter.openai_create_vector_store [StatusCode.OK]
(193.44ms)
INFO 2025-09-20 00:06:04,181 console_span_processor:48 telemetry: output: {'id': 'vs_69afb313-1c2d-4115-a9f4-8d31f4ff1ef3', 'object':
'vector_store', 'created_at': 1758326763, 'name': 'sorting_docs', 'usage_bytes': 0, 'file_counts': {'completed': 1, 'cancelled': 0, 'failed':
0, 'in_progress': 0, 'total': 1}, 'status': 'completed', 'expires_after': None, 'expires_at': None, 'last_active_at': 1758326763, 'metadata':
{'provider_id': 'faiss', 'provider_vector_db_id': 'vs_69afb313-1c2d-4115-a9f4-8d31f4ff1ef3'}}
INFO 2025-09-20 00:06:04,194 console_span_processor:39 telemetry: 00:06:04.185 [END] /v1/openai/v1/vector_stores [StatusCode.OK] (217.19ms)
INFO 2025-09-20 00:06:04,197 console_span_processor:48 telemetry: raw_path: /v1/openai/v1/vector_stores
INFO 2025-09-20 00:06:04,199 console_span_processor:62 telemetry: 00:06:04.164 [INFO] 127.0.0.1:53656 - "POST /v1/openai/v1/vector_stores
HTTP/1.1" 200
INFO 2025-09-20 00:06:04,201 console_span_processor:28 telemetry: 00:06:04.201 [START] /v1/openai/v1/vector_stores
INFO 2025-09-20 00:06:04,212 console_span_processor:39 telemetry: 00:06:04.203 [END] VectorIORouter.openai_list_vector_stores [StatusCode.OK]
(0.10ms)
INFO 2025-09-20 00:06:04,214 console_span_processor:48 telemetry: output: {'object': 'list', 'data': [{'id':
'vs_69afb313-1c2d-4115-a9f4-8d31f4ff1ef3', 'object': 'vector_store', 'created_at': 1758326763, 'name': 'sorting_docs', 'usage_bytes': 0,
'file_counts': {'completed': 1, 'cancelled': 0, 'failed': 0, 'in_progress': 0, 'total': 1}, 'status': 'completed', 'expires_after': None,
'expires_at': None, 'last_active_at': 1758326763, 'metadata': {'provider_id': 'faiss', 'provider_vector_db_id':
'vs_69afb313-1c2d-4115-a9f4-8d31f4ff1ef3'}}], 'first_id': 'vs_69afb313-1c2d-4115-a9f4-8d31f4ff1ef3', 'last_id':
'vs_69afb313-1c2d-4115-a9f4-8d31f4ff1ef3', 'has_more': False}
INFO 2025-09-20 00:06:04,228 console_span_processor:39 telemetry: 00:06:04.219 [END] /v1/openai/v1/vector_stores [StatusCode.OK] (17.86ms)
INFO 2025-09-20 00:06:04,230 console_span_processor:48 telemetry: raw_path: /v1/openai/v1/vector_stores
INFO 2025-09-20 00:06:04,232 console_span_processor:62 telemetry: 00:06:04.172 [INFO] 127.0.0.1:53656 - "GET /v1/openai/v1/vector_stores
HTTP/1.1" 200
INFO 2025-09-20 00:06:04,235 console_span_processor:28 telemetry: 00:06:04.234 [START] /v1/openai/v1/responses
INFO 2025-09-20 00:06:04,245 console_span_processor:39 telemetry: 00:06:04.237 [END] ToolGroupsRoutingTable.get_tool [StatusCode.OK] (0.09ms)
INFO 2025-09-20 00:06:04,248 console_span_processor:48 telemetry: output: {'identifier': 'web_search', 'provider_resource_id': None,
'provider_id': 'tavily-search', 'type': 'tool', 'toolgroup_id': 'builtin::websearch', 'description': 'Search the web for information',
'parameters': [{'name': 'query', 'parameter_type': 'string', 'description': 'The query to search for', 'required': True, 'default': None}],
'metadata': None}
INFO 2025-09-20 00:06:04,262 console_span_processor:39 telemetry: 00:06:04.251 [END] ModelsRoutingTable.get_model [StatusCode.OK] (0.10ms)
INFO 2025-09-20 00:06:04,265 console_span_processor:48 telemetry: output: {'identifier': 'groq/meta-llama/llama-4-maverick-17b-128e-instruct',
'provider_resource_id': 'meta-llama/llama-4-maverick-17b-128e-instruct', 'provider_id': 'groq', 'type': 'model', 'owner': None, 'source':
'listed_from_provider', 'metadata': {}, 'model_type': 'llm'}
INFO 2025-09-20 00:06:04,278 console_span_processor:39 telemetry: 00:06:04.268 [END] ModelsRoutingTable.get_provider_impl [StatusCode.OK] (0.11ms)
INFO 2025-09-20 00:06:04,279 console_span_processor:48 telemetry: output:
<llama_stack.providers.remote.inference.groq.groq.GroqInferenceAdapter object at 0x7cdb002ef7a0>
INFO 2025-09-20 00:06:04,290 console_span_processor:39 telemetry: 00:06:04.281 [END] GroqInferenceAdapter.get_api_key [StatusCode.OK] (0.06ms)
INFO 2025-09-20 00:06:04,292 console_span_processor:48 telemetry: output: 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
INFO 2025-09-20 00:06:04,302 console_span_processor:39 telemetry: 00:06:04.293 [END] GroqInferenceAdapter.get_api_key [StatusCode.OK] (12.28ms)
INFO 2025-09-20 00:06:04,304 console_span_processor:48 telemetry: output: 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
INFO 2025-09-20 00:06:04,315 console_span_processor:39 telemetry: 00:06:04.306 [END] GroqInferenceAdapter.get_base_url [StatusCode.OK] (0.07ms)
INFO 2025-09-20 00:06:04,317 console_span_processor:48 telemetry: output: https://api.groq.com/openai/v1
INFO 2025-09-20 00:06:04,327 console_span_processor:39 telemetry: 00:06:04.319 [END] ModelsRoutingTable.get_model [StatusCode.OK] (0.07ms)
INFO 2025-09-20 00:06:04,329 console_span_processor:48 telemetry: output: {'identifier': 'groq/meta-llama/llama-4-maverick-17b-128e-instruct',
'provider_resource_id': 'meta-llama/llama-4-maverick-17b-128e-instruct', 'provider_id': 'groq', 'type': 'model', 'owner': None, 'source':
'listed_from_provider', 'metadata': {}, 'model_type': 'llm'}
INFO 2025-09-20 00:06:04,483 console_span_processor:39 telemetry: 00:06:04.473 [END] InferenceRouter.openai_chat_completion [StatusCode.OK]
(221.87ms)
INFO 2025-09-20 00:06:04,486 console_span_processor:48 telemetry: output: <async_generator object
InferenceRouter.stream_tokens_and_compute_metrics_openai_chat at 0x7cdad57c9a80>
INFO 2025-09-20 00:06:04,500 console_span_processor:39 telemetry: 00:06:04.489 [END] InferenceRouter.stream_tokens_and_compute_metrics_openai_chat
[StatusCode.OK] (0.09ms)
INFO 2025-09-20 00:06:04,503 console_span_processor:48 telemetry: chunk_count: 3
INFO 2025-09-20 00:06:04,514 console_span_processor:39 telemetry: 00:06:04.505 [END] ToolGroupsRoutingTable.get_provider_impl [StatusCode.OK]
(0.08ms)
INFO 2025-09-20 00:06:04,517 console_span_processor:48 telemetry: output:
<llama_stack.providers.remote.tool_runtime.tavily_search.tavily_search.TavilySearchToolRuntimeImpl object at 0x7cdaee89e960>
INFO 2025-09-20 00:06:05,896 console_span_processor:39 telemetry: 00:06:05.887 [END] TavilySearchToolRuntimeImpl.invoke_tool [StatusCode.OK]
(1367.69ms)
INFO 2025-09-20 00:06:05,899 console_span_processor:48 telemetry: output: {'content': '{"query": "latest efficient sorting algorithms and
performance comparisons", "top_k": [{"url":
"https://www.codemotion.com/magazine/data-science/a-performance-comparison-of-quick-sort-algorithms/", "title": "A Performance comparison of
quick sort algorithms - Codemotion", "content": "I compared three popular sorting algorithms, Merge Sort, Quick Sort, and Heap Sort, to
evaluate their efficiency in processing different amounts", "score": 0.75049835, "raw_content": null}, {"url":
"https://medium.com/@arthurvinice/exploring-sorting-algorithms-performance-a-study-on-runtime-variations-87446bef6503", "title": "Exploring
sorting algorithm performance: a study on runtime variations.", "content": "During this investigation, we\'ll delve into six renowned sorting
algorithms: Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort", "score": 0.6175132, "raw_content": null}, {"url":
"https://www.quora.com/What-is-the-most-efficient-sorting-algorithm-in-terms-of-the-num...
INFO 2025-09-20 00:06:05,917 console_span_processor:39 telemetry: 00:06:05.907 [END] ToolRuntimeRouter.invoke_tool [StatusCode.OK] (1402.22ms)
INFO 2025-09-20 00:06:05,919 console_span_processor:48 telemetry: output: {'content': '{"query": "latest efficient sorting algorithms and
performance comparisons", "top_k": [{"url":
"https://www.codemotion.com/magazine/data-science/a-performance-comparison-of-quick-sort-algorithms/", "title": "A Performance comparison of
quick sort algorithms - Codemotion", "content": "I compared three popular sorting algorithms, Merge Sort, Quick Sort, and Heap Sort, to
evaluate their efficiency in processing different amounts", "score": 0.75049835, "raw_content": null}, {"url":
"https://medium.com/@arthurvinice/exploring-sorting-algorithms-performance-a-study-on-runtime-variations-87446bef6503", "title": "Exploring
sorting algorithm performance: a study on runtime variations.", "content": "During this investigation, we\'ll delve into six renowned sorting
algorithms: Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort", "score": 0.6175132, "raw_content": null}, {"url":
"https://www.quora.com/What-is-the-most-efficient-sorting-algorithm-in-terms-of-the-num...
INFO 2025-09-20 00:06:05,933 console_span_processor:39 telemetry: 00:06:05.924 [END] ModelsRoutingTable.get_model [StatusCode.OK] (0.09ms)
INFO 2025-09-20 00:06:05,936 console_span_processor:48 telemetry: output: {'identifier': 'groq/meta-llama/llama-4-maverick-17b-128e-instruct',
'provider_resource_id': 'meta-llama/llama-4-maverick-17b-128e-instruct', 'provider_id': 'groq', 'type': 'model', 'owner': None, 'source':
'listed_from_provider', 'metadata': {}, 'model_type': 'llm'}
INFO 2025-09-20 00:06:05,948 console_span_processor:39 telemetry: 00:06:05.939 [END] ModelsRoutingTable.get_provider_impl [StatusCode.OK] (0.11ms)
INFO 2025-09-20 00:06:05,951 console_span_processor:48 telemetry: output:
<llama_stack.providers.remote.inference.groq.groq.GroqInferenceAdapter object at 0x7cdb002ef7a0>
INFO 2025-09-20 00:06:05,965 console_span_processor:39 telemetry: 00:06:05.954 [END] GroqInferenceAdapter.get_api_key [StatusCode.OK] (0.09ms)
INFO 2025-09-20 00:06:05,969 console_span_processor:48 telemetry: output: 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
INFO 2025-09-20 00:06:05,983 console_span_processor:39 telemetry: 00:06:05.973 [END] GroqInferenceAdapter.get_api_key [StatusCode.OK] (19.46ms)
INFO 2025-09-20 00:06:05,985 console_span_processor:48 telemetry: output: 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
INFO 2025-09-20 00:06:05,997 console_span_processor:39 telemetry: 00:06:05.987 [END] GroqInferenceAdapter.get_base_url [StatusCode.OK] (0.08ms)
INFO 2025-09-20 00:06:05,999 console_span_processor:48 telemetry: output: https://api.groq.com/openai/v1
INFO 2025-09-20 00:06:06,010 console_span_processor:39 telemetry: 00:06:06.001 [END] ModelsRoutingTable.get_model [StatusCode.OK] (0.07ms)
INFO 2025-09-20 00:06:06,011 console_span_processor:48 telemetry: output: {'identifier': 'groq/meta-llama/llama-4-maverick-17b-128e-instruct',
'provider_resource_id': 'meta-llama/llama-4-maverick-17b-128e-instruct', 'provider_id': 'groq', 'type': 'model', 'owner': None, 'source':
'listed_from_provider', 'metadata': {}, 'model_type': 'llm'}
INFO 2025-09-20 00:06:06,191 console_span_processor:39 telemetry: 00:06:06.178 [END] InferenceRouter.openai_chat_completion [StatusCode.OK]
(254.07ms)
INFO 2025-09-20 00:06:06,192 console_span_processor:48 telemetry: output: <async_generator object
InferenceRouter.stream_tokens_and_compute_metrics_openai_chat at 0x7cd93b483380>
INFO 2025-09-20 00:06:06,437 uvicorn.access:473 uncategorized: 127.0.0.1:53656 - "POST /v1/openai/v1/responses HTTP/1.1" 200
INFO 2025-09-20 00:06:06,440 console_span_processor:39 telemetry: 00:06:06.416 [END] InferenceRouter.stream_tokens_and_compute_metrics_openai_chat
[StatusCode.OK] (221.39ms)
INFO 2025-09-20 00:06:06,442 console_span_processor:48 telemetry: chunk_count: 105
ERROR 2025-09-20 00:06:06,449 __main__:257 core::server: Error executing endpoint route='/v1/openai/v1/responses' method='post':
'OpenAIResponseOutputMessageWebSearchToolCall' object has no attribute 'content'
INFO 2025-09-20 00:06:06,451 uvicorn.access:473 uncategorized: 127.0.0.1:53656 - "POST /v1/openai/v1/responses HTTP/1.1" 500
INFO 2025-09-20 00:06:06,463 console_span_processor:39 telemetry: 00:06:06.446 [END] /v1/openai/v1/responses [StatusCode.OK] (2211.55ms)
INFO 2025-09-20 00:06:06,465 console_span_processor:48 telemetry: raw_path: /v1/openai/v1/responses
INFO 2025-09-20 00:06:06,466 console_span_processor:62 telemetry: 00:06:06.439 [INFO] 127.0.0.1:53656 - "POST /v1/openai/v1/responses HTTP/1.1"
200
INFO 2025-09-20 00:06:06,467 console_span_processor:28 telemetry: 00:06:06.467 [START] /v1/openai/v1/responses
INFO 2025-09-20 00:06:06,479 console_span_processor:39 telemetry: 00:06:06.469 [END] /v1/openai/v1/responses [StatusCode.OK] (1.41ms)
INFO 2025-09-20 00:06:06,481 console_span_processor:48 telemetry: raw_path: /v1/openai/v1/responses
INFO 2025-09-20 00:06:06,482 console_span_processor:62 telemetry: 00:06:06.451 [ERROR] Error executing endpoint route='/v1/openai/v1/responses'
method='post': 'OpenAIResponseOutputMessageWebSearchToolCall' object has no attribute 'content'
INFO 2025-09-20 00:06:06,484 console_span_processor:62 telemetry: 00:06:06.452 [INFO] 127.0.0.1:53656 - "POST /v1/openai/v1/responses HTTP/1.1"
500
INFO 2025-09-20 00:06:06,906 console_span_processor:28 telemetry: 00:06:06.906 [START] /v1/openai/v1/responses
ERROR 2025-09-20 00:06:06,912 __main__:257 core::server: Error executing endpoint route='/v1/openai/v1/responses' method='post':
'OpenAIResponseOutputMessageWebSearchToolCall' object has no attribute 'content'
INFO 2025-09-20 00:06:06,915 uvicorn.access:473 uncategorized: 127.0.0.1:53656 - "POST /v1/openai/v1/responses HTTP/1.1" 500
INFO 2025-09-20 00:06:06,938 console_span_processor:39 telemetry: 00:06:06.927 [END] /v1/openai/v1/responses [StatusCode.OK] (21.40ms)
INFO 2025-09-20 00:06:06,941 console_span_processor:48 telemetry: raw_path: /v1/openai/v1/responses
INFO 2025-09-20 00:06:06,943 console_span_processor:62 telemetry: 00:06:06.915 [ERROR] Error executing endpoint route='/v1/openai/v1/responses'
method='post': 'OpenAIResponseOutputMessageWebSearchToolCall' object has no attribute 'content'
INFO 2025-09-20 00:06:06,949 console_span_processor:62 telemetry: 00:06:06.921 [INFO] 127.0.0.1:53656 - "POST /v1/openai/v1/responses HTTP/1.1"
500
INFO 2025-09-20 00:06:07,730 console_span_processor:28 telemetry: 00:06:07.730 [START] /v1/openai/v1/responses
ERROR 2025-09-20 00:06:07,735 __main__:257 core::server: Error executing endpoint route='/v1/openai/v1/responses' method='post':
'OpenAIResponseOutputMessageWebSearchToolCall' object has no attribute 'content'
INFO 2025-09-20 00:06:07,738 uvicorn.access:473 uncategorized: 127.0.0.1:53656 - "POST /v1/openai/v1/responses HTTP/1.1" 500
INFO 2025-09-20 00:06:07,760 console_span_processor:39 telemetry: 00:06:07.748 [END] /v1/openai/v1/responses [StatusCode.OK] (17.64ms)
INFO 2025-09-20 00:06:07,763 console_span_processor:48 telemetry: raw_path: /v1/openai/v1/responses
INFO 2025-09-20 00:06:07,765 console_span_processor:62 telemetry: 00:06:07.737 [ERROR] Error executing endpoint route='/v1/openai/v1/responses'
method='post': 'OpenAIResponseOutputMessageWebSearchToolCall' object has no attribute 'content'
INFO 2025-09-20 00:06:07,767 console_span_processor:62 telemetry: 00:06:07.740 [INFO] 127.0.0.1:53656 - "POST /v1/openai/v1/responses HTTP/1.1"
500
Expected behavior
Response branching should work as discussed in the Agents vs OpenAI Responses API doc.