Commit 5cf1e99

Merge branch 'main' into add-reasoning-param-to-model-settings
2 parents deb478a + 0110f3a

27 files changed: +278 −46 lines changed

.github/ISSUE_TEMPLATE/feature_request.md
Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ assignees: ''
 ### Please read this first
 
 - **Have you read the docs?** [Agents SDK docs](https://openai.github.io/openai-agents-python/)
-- **Have you searched for related issues?** Others may have had similar requesrs
+- **Have you searched for related issues?** Others may have had similar requests
 
 ### Describe the feature
 What is the feature you're requesting? How would it work? Please provide examples and details if possible.

.github/ISSUE_TEMPLATE/question.md
Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ assignees: ''
 ### Please read this first
 
 - **Have you read the docs?** [Agents SDK docs](https://openai.github.io/openai-agents-python/)
-- **Have you searched for related issues?** Others may have had similar requesrs
+- **Have you searched for related issues?** Others may have had similar requests
 
 ### Question
 Describe your question. Provide details if available.

.vscode/settings.json
Lines changed: 7 additions & 0 deletions

@@ -0,0 +1,7 @@
+{
+    "python.testing.pytestArgs": [
+        "tests"
+    ],
+    "python.testing.unittestEnabled": false,
+    "python.testing.pytestEnabled": true
+}

docs/running_agents.md
Lines changed: 1 addition & 1 deletion

@@ -53,7 +53,7 @@ The `run_config` parameter lets you configure some global settings for the agent
 - [`handoff_input_filter`][agents.run.RunConfig.handoff_input_filter]: A global input filter to apply to all handoffs, if the handoff doesn't already have one. The input filter allows you to edit the inputs that are sent to the new agent. See the documentation in [`Handoff.input_filter`][agents.handoffs.Handoff.input_filter] for more details.
 - [`tracing_disabled`][agents.run.RunConfig.tracing_disabled]: Allows you to disable [tracing](tracing.md) for the entire run.
 - [`trace_include_sensitive_data`][agents.run.RunConfig.trace_include_sensitive_data]: Configures whether traces will include potentially sensitive data, such as LLM and tool call inputs/outputs.
-- [`workflow_name`][agents.run.RunConfig.workflow_name], [`trace_id`][agents.run.RunConfig.trace_id], [`group_id`][agents.run.RunConfig.group_id]: Sets the tracing workflow name, trace ID and trace group ID for the run. We recommend at least setting `workflow_name`. The session ID is an optional field that lets you link traces across multiple runs.
+- [`workflow_name`][agents.run.RunConfig.workflow_name], [`trace_id`][agents.run.RunConfig.trace_id], [`group_id`][agents.run.RunConfig.group_id]: Sets the tracing workflow name, trace ID and trace group ID for the run. We recommend at least setting `workflow_name`. The group ID is an optional field that lets you link traces across multiple runs.
 - [`trace_metadata`][agents.run.RunConfig.trace_metadata]: Metadata to include on all traces.
 
 ## Conversations/chat threads

docs/tracing.md
Lines changed: 3 additions & 1 deletion

@@ -101,7 +101,8 @@ To customize this default setup, to send traces to alternative or additional backends
 
 - [Weights & Biases](https://weave-docs.wandb.ai/guides/integrations/openai_agents)
 - [Arize-Phoenix](https://docs.arize.com/phoenix/tracing/integrations-tracing/openai-agents-sdk)
-- [MLflow](https://mlflow.org/docs/latest/tracing/integrations/openai-agent)
+- [MLflow (self-hosted/OSS)](https://mlflow.org/docs/latest/tracing/integrations/openai-agent)
+- [MLflow (Databricks hosted)](https://docs.databricks.com/aws/en/mlflow/mlflow-tracing#-automatic-tracing)
 - [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk)
 - [Pydantic Logfire](https://logfire.pydantic.dev/docs/integrations/llms/openai/#openai-agents)
 - [AgentOps](https://docs.agentops.ai/v1/integrations/agentssdk)

@@ -112,3 +113,4 @@ To customize this default setup, to send traces to alternative or additional backends
 - [Comet Opik](https://www.comet.com/docs/opik/tracing/integrations/openai_agents)
 - [Langfuse](https://langfuse.com/docs/integrations/openaiagentssdk/openai-agents)
 - [Langtrace](https://docs.langtrace.ai/supported-integrations/llm-frameworks/openai-agents-sdk)
+- [Okahu-Monocle](https://github.com/monocle2ai/monocle)

examples/financial_research_agent/manager.py
Lines changed: 1 addition & 1 deletion

@@ -38,7 +38,7 @@ async def run(self, query: str) -> None:
         with trace("Financial research trace", trace_id=trace_id):
             self.printer.update_item(
                 "trace_id",
-                f"View trace: https://platform.openai.com/traces/{trace_id}",
+                f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}",
                 is_done=True,
                 hide_checkmark=True,
             )
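The remaining URL updates in this commit follow the same pattern: the trace ID moves from a path segment to a `trace_id` query parameter. A standalone check that the new form round-trips (stdlib only; the trace ID below is a made-up placeholder):

```python
from urllib.parse import urlparse, parse_qs

trace_id = "trace_abc123"  # hypothetical ID for illustration
url = f"https://platform.openai.com/traces/trace?trace_id={trace_id}"

# The ID is recoverable from the query string rather than the path.
parsed = urlparse(url)
query = parse_qs(parsed.query)
print(query["trace_id"][0])  # → trace_abc123
```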

examples/mcp/filesystem_example/main.py
Lines changed: 1 addition & 1 deletion

@@ -45,7 +45,7 @@ async def main():
     ) as server:
         trace_id = gen_trace_id()
         with trace(workflow_name="MCP Filesystem Example", trace_id=trace_id):
-            print(f"View trace: https://platform.openai.com/traces/{trace_id}\n")
+            print(f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}\n")
             await run(server)

examples/mcp/sse_example/README.md
Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@ This example uses a local SSE server in [server.py](server.py).
 Run the example via:
 
 ```
-uv run python python examples/mcp/sse_example/main.py
+uv run python examples/mcp/sse_example/main.py
 ```
 
 ## Details

examples/mcp/sse_example/main.py
Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ async def main():
     ) as server:
         trace_id = gen_trace_id()
         with trace(workflow_name="SSE Example", trace_id=trace_id):
-            print(f"View trace: https://platform.openai.com/traces/{trace_id}\n")
+            print(f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}\n")
             await run(server)

examples/research_bot/manager.py
Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@ async def run(self, query: str) -> None:
         with trace("Research trace", trace_id=trace_id):
             self.printer.update_item(
                 "trace_id",
-                f"View trace: https://platform.openai.com/traces/{trace_id}",
+                f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}",
                 is_done=True,
                 hide_checkmark=True,
             )

examples/research_bot/sample_outputs/product_recs.txt
Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@
 $ uv run python -m examples.research_bot.main
 
 What would you like to research? Best surfboards for beginners. I can catch my own waves, but previously used an 11ft board. What should I look for, what are my options? Various budget ranges.
-View trace: https://platform.openai.com/traces/trace_...
+View trace: https://platform.openai.com/traces/trace?trace_id=trace_...
 Starting research...
 ✅ Will perform 15 searches
 ✅ Searching... 15/15 completed

examples/research_bot/sample_outputs/vacation.txt
Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 
 $ uv run python -m examples.research_bot.main
 What would you like to research? Caribbean vacation spots in April, optimizing for surfing, hiking and water sports
-View trace: https://platform.openai.com/traces/trace_....
+View trace: https://platform.openai.com/traces/trace?trace_id=trace_....
 Starting research...
 ✅ Will perform 15 searches
 ✅ Searching... 15/15 completed

pyproject.toml
Lines changed: 1 addition & 1 deletion

@@ -13,7 +13,7 @@ dependencies = [
     "typing-extensions>=4.12.2, <5",
     "requests>=2.0, <3",
     "types-requests>=2.0, <3",
-    "mcp; python_version >= '3.10'",
+    "mcp>=1.6.0, <2; python_version >= '3.10'",
 ]
 classifiers = [
     "Typing :: Typed",

src/agents/__init__.py
Lines changed: 2 additions & 0 deletions

@@ -100,6 +100,7 @@
     transcription_span,
 )
 from .usage import Usage
+from .version import __version__
 
 
 def set_default_openai_key(key: str, use_for_tracing: bool = True) -> None:

@@ -247,4 +248,5 @@ def enable_verbose_stdout_logging():
     "gen_trace_id",
     "gen_span_id",
     "default_tool_error_function",
+    "__version__",
 ]

src/agents/agent.py
Lines changed: 16 additions & 3 deletions

@@ -6,7 +6,7 @@
 from dataclasses import dataclass, field
 from typing import TYPE_CHECKING, Any, Callable, Generic, Literal, cast
 
-from typing_extensions import TypeAlias, TypedDict
+from typing_extensions import NotRequired, TypeAlias, TypedDict
 
 from .guardrail import InputGuardrail, OutputGuardrail
 from .handoffs import Handoff

@@ -44,7 +44,7 @@ class ToolsToFinalOutputResult:
     MaybeAwaitable[ToolsToFinalOutputResult],
 ]
 """A function that takes a run context and a list of tool results, and returns a
-`ToolToFinalOutputResult`.
+`ToolsToFinalOutputResult`.
 """
 
 

@@ -53,6 +53,15 @@ class StopAtTools(TypedDict):
     """A list of tool names, any of which will stop the agent from running further."""
 
 
+class MCPConfig(TypedDict):
+    """Configuration for MCP servers."""
+
+    convert_schemas_to_strict: NotRequired[bool]
+    """If True, we will attempt to convert the MCP schemas to strict-mode schemas. This is a
+    best-effort conversion, so some schemas may not be convertible. Defaults to False.
+    """
+
+
 @dataclass
 class Agent(Generic[TContext]):
     """An agent is an AI model configured with instructions, tools, guardrails, handoffs and more.

@@ -119,6 +128,9 @@ class Agent(Generic[TContext]):
     longer needed.
     """
 
+    mcp_config: MCPConfig = field(default_factory=lambda: MCPConfig())
+    """Configuration for MCP servers."""
+
     input_guardrails: list[InputGuardrail[TContext]] = field(default_factory=list)
     """A list of checks that run in parallel to the agent's execution, before generating a
     response. Runs only if the agent is the first agent in the chain.

@@ -224,7 +236,8 @@ async def get_system_prompt(self, run_context: RunContextWrapper[TContext]) -> s
 
     async def get_mcp_tools(self) -> list[Tool]:
         """Fetches the available tools from the MCP servers."""
-        return await MCPUtil.get_all_function_tools(self.mcp_servers)
+        convert_schemas_to_strict = self.mcp_config.get("convert_schemas_to_strict", False)
+        return await MCPUtil.get_all_function_tools(self.mcp_servers, convert_schemas_to_strict)
 
     async def get_all_tools(self) -> list[Tool]:
         """All agent tools, including MCP tools and function tools."""

src/agents/mcp/util.py
Lines changed: 23 additions & 7 deletions

@@ -2,6 +2,8 @@
 import json
 from typing import TYPE_CHECKING, Any
 
+from agents.strict_schema import ensure_strict_json_schema
+
 from .. import _debug
 from ..exceptions import AgentsException, ModelBehaviorError, UserError
 from ..logger import logger

@@ -19,12 +21,14 @@ class MCPUtil:
     """Set of utilities for interop between MCP and Agents SDK tools."""
 
     @classmethod
-    async def get_all_function_tools(cls, servers: list["MCPServer"]) -> list[Tool]:
+    async def get_all_function_tools(
+        cls, servers: list["MCPServer"], convert_schemas_to_strict: bool
+    ) -> list[Tool]:
         """Get all function tools from a list of MCP servers."""
         tools = []
         tool_names: set[str] = set()
         for server in servers:
-            server_tools = await cls.get_function_tools(server)
+            server_tools = await cls.get_function_tools(server, convert_schemas_to_strict)
             server_tool_names = {tool.name for tool in server_tools}
             if len(server_tool_names & tool_names) > 0:
                 raise UserError(

@@ -37,25 +41,37 @@ async def get_all_function_tools(cls, servers: list["MCPServer"]) -> list[Tool]:
         return tools
 
     @classmethod
-    async def get_function_tools(cls, server: "MCPServer") -> list[Tool]:
+    async def get_function_tools(
+        cls, server: "MCPServer", convert_schemas_to_strict: bool
+    ) -> list[Tool]:
         """Get all function tools from a single MCP server."""
 
         with mcp_tools_span(server=server.name) as span:
             tools = await server.list_tools()
             span.span_data.result = [tool.name for tool in tools]
 
-        return [cls.to_function_tool(tool, server) for tool in tools]
+        return [cls.to_function_tool(tool, server, convert_schemas_to_strict) for tool in tools]
 
     @classmethod
-    def to_function_tool(cls, tool: "MCPTool", server: "MCPServer") -> FunctionTool:
+    def to_function_tool(
+        cls, tool: "MCPTool", server: "MCPServer", convert_schemas_to_strict: bool
+    ) -> FunctionTool:
         """Convert an MCP tool to an Agents SDK function tool."""
         invoke_func = functools.partial(cls.invoke_mcp_tool, server, tool)
+        schema, is_strict = tool.inputSchema, False
+        if convert_schemas_to_strict:
+            try:
+                schema = ensure_strict_json_schema(schema)
+                is_strict = True
+            except Exception as e:
+                logger.info(f"Error converting MCP schema to strict mode: {e}")
+
         return FunctionTool(
             name=tool.name,
             description=tool.description or "",
-            params_json_schema=tool.inputSchema,
+            params_json_schema=schema,
             on_invoke_tool=invoke_func,
-            strict_json_schema=False,
+            strict_json_schema=is_strict,
         )
 
     @classmethod
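The heavy lifting above is delegated to `agents.strict_schema.ensure_strict_json_schema`. As a rough, deliberately simplified illustration of what a strict-mode conversion involves — not the SDK's actual implementation, which recurses into nested schemas and may raise on unconvertible ones — an object schema must list every property as required and forbid extras:

```python
def naive_strict_schema(schema: dict) -> dict:
    """Simplified stand-in for ensure_strict_json_schema: strict mode
    requires that every declared property be listed in "required" and
    that "additionalProperties" be False. (The real helper also walks
    nested schemas; this sketch only touches the top level.)"""
    out = dict(schema)
    if out.get("type") == "object":
        props = out.get("properties", {})
        out["required"] = sorted(props)
        out["additionalProperties"] = False
    return out

# A minimal MCP-style tool input schema for illustration:
tool_schema = {"type": "object", "properties": {"path": {"type": "string"}}}
strict = naive_strict_schema(tool_schema)
print(strict["required"])              # → ['path']
print(strict["additionalProperties"])  # → False
```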

src/agents/model_settings.py
Lines changed: 4 additions & 0 deletions

@@ -48,6 +48,10 @@ class ModelSettings:
     reasoning effort. For computer_use_preview: Use 'generate_summary' key with values
     'concise' or 'detailed' to get reasoning summaries."""
 
+    metadata: dict[str, str] | None = None
+    """Metadata to include with the model response call."""
+
     store: bool | None = None
     """Whether to store the generated model response for later retrieval.
     Defaults to True if not provided."""
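A minimal sketch of the pattern this field follows — an optional setting that defaults to `None`. Note the None-filtering step below is purely illustrative (an assumption for the sketch); the diff itself simply forwards the attribute to the client call, and the model name is a placeholder:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Settings:
    # Mirrors the new ModelSettings field: absent unless the caller sets it.
    metadata: Optional[dict] = None

def build_request(settings: Settings) -> dict:
    # Hypothetical request builder: only include metadata when provided.
    body = {"model": "gpt-4o"}  # placeholder model name
    if settings.metadata is not None:
        body["metadata"] = settings.metadata
    return body

print(build_request(Settings()))  # → {'model': 'gpt-4o'}
print(build_request(Settings(metadata={"run": "nightly"})))
```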

src/agents/models/openai_chatcompletions.py
Lines changed: 4 additions & 2 deletions

@@ -540,6 +540,7 @@ async def _fetch_response(
             store=store,
             reasoning_effort=self._non_null_or_not_given(reasoning_effort),
             extra_headers=_HEADERS,
+            metadata=model_settings.metadata,
         )
 
         if isinstance(ret, ChatCompletion):

@@ -927,12 +928,13 @@ def ensure_assistant_message() -> ChatCompletionAssistantMessageParam:
         elif func_call := cls.maybe_function_tool_call(item):
             asst = ensure_assistant_message()
             tool_calls = list(asst.get("tool_calls", []))
+            arguments = func_call["arguments"] if func_call["arguments"] else "{}"
             new_tool_call = ChatCompletionMessageToolCallParam(
                 id=func_call["call_id"],
                 type="function",
                 function={
                     "name": func_call["name"],
-                    "arguments": func_call["arguments"],
+                    "arguments": arguments,
                 },
             )
             tool_calls.append(new_tool_call)

@@ -975,7 +977,7 @@ def to_openai(cls, tool: Tool) -> ChatCompletionToolParam:
         }
 
     raise UserError(
-        f"Hosted tools are not supported with the ChatCompletions API. FGot tool type: "
+        f"Hosted tools are not supported with the ChatCompletions API. Got tool type: "
        f"{type(tool)}, tool: {tool}"
    )
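The `arguments` fallback in the second hunk exists because an empty string is not valid JSON: a tool call with no arguments would fail the moment anything tries to parse it. The behavior is easy to verify in isolation:

```python
import json

def safe_arguments(raw: str) -> str:
    # Mirrors the diff: an empty arguments string becomes "{}"
    # so the payload stays parseable as JSON.
    return raw if raw else "{}"

try:
    json.loads("")  # empty string is not a JSON document
except json.JSONDecodeError:
    print("empty string fails to parse")

print(json.loads(safe_arguments("")))          # → {}
print(json.loads(safe_arguments('{"x": 1}')))  # → {'x': 1}
```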

src/agents/models/openai_responses.py
Lines changed: 1 addition & 0 deletions

@@ -248,6 +248,7 @@ async def _fetch_response(
             text=response_format,
             store=self._non_null_or_not_given(model_settings.store),
             reasoning=self._non_null_or_not_given(model_settings.reasoning),
+            metadata=model_settings.metadata,
         )
 
     def _get_client(self) -> AsyncOpenAI:

src/agents/tracing/processors.py
Lines changed: 0 additions & 1 deletion

@@ -182,7 +182,6 @@ def __init__(
         # Track when we next *must* perform a scheduled export
         self._next_export_time = time.time() + self._schedule_delay
 
-        self._shutdown_event = threading.Event()
         self._worker_thread = threading.Thread(target=self._run, daemon=True)
         self._worker_thread.start()
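The deleted line removes a redundant re-initialization of `_shutdown_event`, not the shutdown mechanism itself. The surrounding pattern — a single shutdown `Event` coordinating a daemon worker thread — can be sketched in isolation (simplified; the real processor batches and schedules exports, and items still queued at shutdown may be dropped in this sketch):

```python
import queue
import threading

class TinyExporter:
    """Background-worker sketch: one shutdown Event, one daemon thread."""

    def __init__(self):
        self._queue: queue.Queue = queue.Queue()
        self._shutdown_event = threading.Event()  # created exactly once
        self._worker_thread = threading.Thread(target=self._run, daemon=True)
        self._worker_thread.start()

    def _run(self):
        # Poll with a timeout so the loop can notice the shutdown flag.
        while not self._shutdown_event.is_set():
            try:
                item = self._queue.get(timeout=0.05)
            except queue.Empty:
                continue
            print(f"exported: {item}")

    def export(self, item):
        self._queue.put(item)

    def shutdown(self):
        self._shutdown_event.set()
        self._worker_thread.join()

exp = TinyExporter()
exp.export("span-1")
exp.shutdown()
```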

src/agents/tracing/span_data.py
Lines changed: 1 addition & 1 deletion

@@ -236,7 +236,7 @@ def export(self) -> dict[str, Any]:
 
 
 class SpeechSpanData(SpanData):
-    __slots__ = ("input", "output", "model", "model_config", "first_byte_at")
+    __slots__ = ("input", "output", "model", "model_config", "first_content_at")
 
     def __init__(
         self,

src/agents/version.py
Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 import importlib.metadata
 
 try:
-    __version__ = importlib.metadata.version("agents")
+    __version__ = importlib.metadata.version("openai-agents")
 except importlib.metadata.PackageNotFoundError:
     # Fallback if running from source without being installed
     __version__ = "0.0.0"
