docs: update OpenAI model examples from gpt-4o to gpt-5.2 #222
Conversation
📝 Walkthrough
Documentation-only updates: replaced model identifier strings from gpt-4o/gpt-4o-mini to gpt-5.2/gpt-5-mini across examples in guides, API, protocol, and reference docs. No functional code, signatures, or control flow were modified. (≈26 files)
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 inconclusive
Update documentation examples to use gpt-5.2 instead of gpt-4o, and gpt-5-mini instead of gpt-4o-mini for summarization examples. Audio/transcription docs were intentionally left unchanged as they reference specialized audio models (gpt-4o-audio-preview, whisper-1, etc.) that have different naming conventions.
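The shape of the change is mechanical. As a sketch (import paths and the `messages` shape are assumptions for illustration; only the model identifiers come from this PR):

```ts
import { chat } from '@tanstack/ai' // import path assumed
import { openaiText } from '@tanstack/ai-openai/adapters' // subpath per the tree-shaking learnings below

// Before: adapter: openaiText('gpt-4o')
// After:
const stream = chat({
  adapter: openaiText('gpt-5.2'),
  messages: [{ role: 'user', content: 'Hello!' }], // message shape assumed
})
```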
Force-pushed from 6272491 to e6e670e.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @docs/guides/tree-shaking.md:
- Line 46: The docs reference an invalid OpenAI model identifier `gpt-5.2-mini`;
update each occurrence (e.g., the adapter call like openaiText('gpt-5.2-mini')
at lines noted) to a valid GPT-5.2 variant such as `gpt-5.2-instant` (or
`gpt-5.2-thinking` / `gpt-5.2-pro`) so all adapter entries and examples
consistently use a supported model name.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (30)
- docs/adapters/openai.md
- docs/api/ai.md
- docs/getting-started/overview.md
- docs/getting-started/quick-start.md
- docs/guides/agentic-cycle.md
- docs/guides/client-tools.md
- docs/guides/migration.md
- docs/guides/multimodal-content.md
- docs/guides/runtime-adapter-switching.md
- docs/guides/server-tools.md
- docs/guides/streaming.md
- docs/guides/structured-outputs.md
- docs/guides/tool-approval.md
- docs/guides/tool-architecture.md
- docs/guides/tools.md
- docs/guides/tree-shaking.md
- docs/protocol/chunk-definitions.md
- docs/protocol/http-stream-protocol.md
- docs/protocol/sse-protocol.md
- docs/reference/functions/chat.md
- docs/reference/functions/combineStrategies.md
- docs/reference/functions/maxIterations.md
- docs/reference/functions/streamToText.md
- docs/reference/functions/summarize.md
- docs/reference/functions/toHttpResponse.md
- docs/reference/functions/toHttpStream.md
- docs/reference/functions/toServerSentEventsResponse.md
- docs/reference/functions/untilFinishReason.md
- docs/reference/interfaces/SummarizeAdapter.md
- docs/reference/interfaces/TextAdapter.md
🧰 Additional context used
🧠 Learnings (9)
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top
Applied to files:
docs/getting-started/overview.md, docs/reference/functions/chat.md, docs/api/ai.md, docs/guides/tools.md, docs/adapters/openai.md, docs/guides/multimodal-content.md, docs/guides/migration.md, docs/guides/tree-shaking.md, docs/guides/streaming.md, docs/guides/client-tools.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Applied to files:
docs/reference/functions/chat.md, docs/reference/functions/toHttpResponse.md, docs/api/ai.md, docs/adapters/openai.md, docs/protocol/chunk-definitions.md, docs/reference/functions/toServerSentEventsResponse.md, docs/reference/functions/untilFinishReason.md, docs/getting-started/quick-start.md, docs/protocol/sse-protocol.md, docs/guides/migration.md, docs/guides/server-tools.md, docs/reference/functions/toHttpStream.md
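The "partial JSON parsing" idea in this learning can be illustrated with a minimal, self-contained sketch. This is not TanStack AI's actual `StreamProcessor` (whose API isn't shown in this PR); it only demonstrates the technique of re-attempting a parse as chunks accumulate:

```ts
// Illustrative only: accumulate streamed text and attempt to parse the
// growing buffer as JSON after each chunk, repairing dangling strings and
// braces so a partial object can be surfaced early.
function createPartialJsonReader() {
  let buffer = ''
  return {
    push(chunk: string): unknown {
      buffer += chunk
      // Streamed JSON is usually incomplete, so try progressively more
      // aggressive closers before giving up until the next chunk arrives.
      for (const suffix of ['', '"', '"}', '}', ']}']) {
        try {
          return JSON.parse(buffer + suffix)
        } catch {
          // not parseable with this repair; try the next one
        }
      }
      return undefined // wait for more chunks
    },
  }
}

// Usage: feed chunks as they arrive from the stream.
const reader = createPartialJsonReader()
console.log(reader.push('{"content": "Hel')) // { content: "Hel" } — a partial view
console.log(reader.push('lo"}')) // { content: "Hello" }
```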
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Applied to files:
docs/guides/tools.md, docs/reference/functions/summarize.md, docs/guides/multimodal-content.md, docs/guides/migration.md, docs/guides/runtime-adapter-switching.md, docs/guides/tree-shaking.md, docs/reference/interfaces/TextAdapter.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Applied to files:
docs/reference/interfaces/SummarizeAdapter.md, docs/guides/tree-shaking.md
📚 Learning: 2025-12-27T21:39:29.563Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:29.563Z
Learning: The standard template comments in ai-ollama model-meta files (like "Models with text, image, audio, video (no document)") should not be modified to match individual model capabilities, as they are intentionally kept uniform across all model-meta-*.ts files for consistency, regardless of what each specific model actually supports.
Applied to files:
docs/guides/multimodal-content.md, docs/reference/interfaces/TextAdapter.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Maintain type safety through multimodal content support (image, audio, video, document) with model capability awareness
Applied to files:
docs/guides/multimodal-content.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
Applied to files:
docs/guides/migration.md, docs/guides/runtime-adapter-switching.md, docs/guides/tree-shaking.md, docs/reference/interfaces/TextAdapter.md
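Concretely, that convention implies imports like the following (the scoped `@tanstack/ai-openai/adapters` spelling is an assumption based on the subpaths quoted in these learnings):

```ts
// Tree-shakeable: import only the text adapter from the adapters subpath,
// so embed/summarize/image adapter code can be dropped from the bundle.
import { openaiText } from '@tanstack/ai-openai/adapters'

const adapter = openaiText('gpt-5.2')
```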
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size
Applied to files:
docs/guides/tree-shaking.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Implement isomorphic tool system using `toolDefinition()` with `.server()` and `.client()` implementations for dual-environment execution
Applied to files:
docs/guides/client-tools.md
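As a rough sketch of the isomorphic shape this learning describes (the real `toolDefinition()` signature isn't shown in this PR, so the option names and handler arguments below are assumptions):

```ts
import { toolDefinition } from '@tanstack/ai' // import path assumed

// One definition, two implementations: the server variant can call
// internal services, the client variant can read browser state.
const getWeather = toolDefinition({
  name: 'getWeather',
  description: 'Look up the current weather for a city',
})
  .server(async ({ city }: { city: string }) => {
    return { city, tempC: 21 } // e.g. fetched from an internal weather service
  })
  .client(async ({ city }: { city: string }) => {
    return { city, tempC: 21 } // e.g. read from a cached store in the browser
  })
```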
🪛 markdownlint-cli2 (0.18.1)
docs/protocol/sse-protocol.md
82-82: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
88-88: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (25)
docs/reference/functions/maxIterations.md (1)
35-35: LGTM! Model update is correct. The update to `gpt-5.2` is accurate and aligns with OpenAI's latest model offerings. The example correctly demonstrates the `maxIterations` agent loop strategy with the current flagship model.
docs/reference/functions/toHttpResponse.md (1)
45-45: LGTM! Model reference updated correctly. The update to `gpt-5.2` is valid. The example accurately demonstrates converting a chat stream to an HTTP response format using the current OpenAI flagship model.
docs/reference/functions/untilFinishReason.md (1)
35-35: LGTM! Documentation update is accurate. The model reference has been correctly updated to `gpt-5.2`. The example properly illustrates the `untilFinishReason` strategy with OpenAI's current flagship model.
docs/guides/tools.md (1)
191-191: GPT-5.2 model is confirmed to exist and is publicly available. OpenAI released GPT-5.2 on December 11, 2025, and it is available via the OpenAI API for developers. The code examples referencing this model are accurate.
docs/reference/functions/toHttpStream.md (1)
45-45: The gpt-5.2 model reference is accurate. OpenAI released GPT-5.2 in December 2025 with the official model identifiers `gpt-5.2`, `gpt-5.2-chat-latest`, and `gpt-5.2-pro`. The code example in the documentation is valid and will work for users.
68-68: No action needed: GPT-5.2 is a valid model. OpenAI released GPT-5.2 on December 11, 2025, and it is available to developers via the API. The documentation reference is accurate.
docs/guides/agentic-cycle.md (1)
126-126: No issues found. The gpt-5.2 model was announced by OpenAI on December 11, 2025, and is available in the API. The documentation example is accurate.
docs/reference/functions/streamToText.md (1)
38-38: LGTM! Model reference updated correctly. The example now uses the latest GPT-5.2 model, which is appropriate for documentation updates.
docs/reference/functions/summarize.md (1)
45-45: LGTM! Summarization examples updated correctly. All examples now consistently use `gpt-5-mini`, which is the correct model name for the faster, more cost-efficient version of GPT-5. The updates are appropriate for summarization tasks.
Also applies to: 54-54, 63-63, 71-71
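For orientation, an example of the shape these summarize docs describe might look like this; the `openaiSummarize` adapter name and the `summarize()` option names are hypothetical, and only the `gpt-5-mini` identifier comes from the review:

```ts
import { summarize } from '@tanstack/ai' // import path assumed
import { openaiSummarize } from '@tanstack/ai-openai/adapters' // hypothetical adapter name

// gpt-5-mini: the faster, more cost-efficient GPT-5 variant, well suited
// to a well-defined task like summarization.
const summary = await summarize({
  adapter: openaiSummarize('gpt-5-mini'),
  text: 'Long article text goes here...',
})
```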
docs/reference/interfaces/SummarizeAdapter.md (1)
16-16: LGTM! Type parameter example updated correctly. The TModel generic parameter example now references `gpt-5.2`, keeping the documentation current with the latest OpenAI models.
docs/adapters/openai.md (1)
22-22: The `gpt-5.2` model identifier is valid and publicly available as of January 2026. OpenAI's API documentation confirms this model, along with variants like `gpt-5.2-chat-latest` and `gpt-5.2-pro`. The documentation examples are accurate.
docs/guides/server-tools.md (1)
152-152: Model identifier update looks consistent. The changes from `gpt-4o` to `gpt-5.2` are applied consistently in both examples. Pending verification that the `gpt-5.2` model exists and is publicly available (see verification request in combineStrategies.md).
Also applies to: 210-210
docs/guides/streaming.md (1)
18-18: Model identifier update looks consistent. The changes from `gpt-4o` to `gpt-5.2` are applied consistently in both streaming examples. Pending verification that the `gpt-5.2` model exists and is publicly available (see verification request in combineStrategies.md).
Also applies to: 40-40
docs/reference/functions/combineStrategies.md (1)
36-36: No action needed. GPT-5.2 is confirmed as a publicly available OpenAI model (released December 11, 2025) and the documentation update is valid.
docs/protocol/http-stream-protocol.md (2)
86-88: Documentation examples consistently updated. All NDJSON example chunks have been systematically updated to reference `gpt-5.2`, maintaining consistency across different chunk types (content, tool_call, done, error).
Also applies to: 94-95, 101-101, 133-137, 153-153, 363-365
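The chunk types listed here suggest an NDJSON consumer along these lines (a minimal sketch; chunk field names beyond `type` and `model` are assumptions):

```ts
// Read an NDJSON body: one JSON chunk per newline-delimited line.
async function consumeNdjson(res: Response): Promise<void> {
  const reader = res.body!.getReader()
  const decoder = new TextDecoder()
  let buffer = ''
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value, { stream: true })
    let nl: number
    while ((nl = buffer.indexOf('\n')) >= 0) {
      const line = buffer.slice(0, nl).trim()
      buffer = buffer.slice(nl + 1)
      if (!line) continue
      const chunk = JSON.parse(line) // e.g. { type: 'content', model: 'gpt-5.2', ... }
      switch (chunk.type) {
        case 'content': /* append streamed text */ break
        case 'tool_call': /* dispatch the tool */ break
        case 'done': return
        case 'error': throw new Error('stream reported an error')
      }
    }
  }
}
```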
184-184: Code examples updated to use `gpt-5.2`. The implementation examples correctly demonstrate the new model identifier in the `openaiText()` adapter calls.
Also applies to: 239-239
docs/api/ai.md (2)
55-55: Clarify model choice for summarization examples. The PR description states that summarization examples should use `gpt-5-mini` instead of `gpt-4o-mini`, but these lines use `gpt-5.2` instead of `gpt-5.2-mini`. Please confirm whether:
- Summarization should use `gpt-5.2` (the main model)
- Summarization should use `gpt-5.2-mini` (the smaller, cost-effective variant)
- Or if the PR description should be updated to reflect the actual changes
Also applies to: 329-329
24-24: API documentation examples consistently updated. All code examples across different API functions (`chat()`, `toolDefinition()`, SSE utilities, and usage examples) have been systematically updated to use `gpt-5.2`.
Also applies to: 34-34, 102-102, 115-115, 143-143, 170-170, 194-194, 277-277, 283-283, 291-291, 313-313
docs/protocol/chunk-definitions.md (1)
19-19: The documentation is accurate. `gpt-5.2` is a valid, released OpenAI model (released December 2025) and represents the current frontier release as of January 2026. The model identifier follows OpenAI's standard naming convention, with multiple variants available (gpt-5.2-instant, gpt-5.2-thinking, gpt-5.2-pro, gpt-5.2-codex). No API compatibility changes are required for this model reference.
105-105: LGTM - Model references updated consistently. The adapter model references have been updated consistently from `gpt-4o` to `gpt-5.2` in both the POST handler example and the hybrid tools examples. The changes align with the broader documentation updates across the PR.
Also applies to: 300-303
docs/guides/multimodal-content.md (2)
61-61: LGTM - Adapter usage updated consistently. The adapter model references and accompanying comments have been updated consistently to reference `gpt-5.2` throughout the multimodal content examples.
Also applies to: 285-287
106-107: Remove references to non-existent model variants. The documentation claims support for `gpt-5.2-mini` and `gpt-5.2-audio-preview`, but OpenAI's current model lineup does not include these variants. As of December 2025, the gpt-5.2 family consists of text + image-capable models (Instant, Thinking, Pro variants), not separate "mini" or "audio-preview" versions. Audio functionality is provided through distinct models like `gpt-realtime` and `gpt-4o-audio-preview`, not through `gpt-5.2-audio-preview`. Update the documentation to reflect the actual available models and their actual capabilities.
Likely an incorrect or invalid review comment.
docs/protocol/sse-protocol.md (1)
77-77: No action needed: gpt-5.2 is a valid and available model. OpenAI released GPT-5.2 on December 11, 2025, and it is currently available via the API in multiple variants (Instant, Thinking, and Pro). The model references in the SSE protocol documentation are accurate.
docs/guides/runtime-adapter-switching.md (1)
25-25: Model references updated consistently. The model references have been updated from `gpt-4o` to `gpt-5.2` and from `gpt-4o-mini` to `gpt-5.2-mini` consistently across all adapter examples. Pending verification of model availability (see previous comment).
Also applies to: 43-47, 75-75, 135-135, 166-166, 182-182
docs/guides/migration.md (1)
33-33: Migration examples updated consistently. All migration examples have been updated to use `gpt-5.2` instead of `gpt-4o`. The before/after patterns clearly demonstrate the API changes while using the updated model identifiers. Pending verification of model availability.
Also applies to: 45-45, 52-52, 126-126, 141-141, 160-160, 174-174, 200-200, 214-214, 231-238, 260-260, 280-280, 374-374, 401-401
```diff
  const stream = chat({
-   adapter: openaiText('gpt-4o'),
+   adapter: openaiText('gpt-5.2'),
```
🧩 Analysis chain
🌐 Web query:
OpenAI GPT models available January 2026 gpt-5.2
💡 Result:
As of January 12, 2026 the latest OpenAI release is GPT‑5.2 (available as GPT‑5.2 Instant, GPT‑5.2 Thinking, and GPT‑5.2 Pro; API names include gpt-5.2-instant and gpt-5.2-thinking). Earlier GPT families (GPT‑5.1, GPT‑5) and GPT‑4.1 remain available to developers/users via the API/ChatGPT. [1][2][3]
Sources:
- OpenAI — “Introducing GPT‑5.2” (Dec 11, 2025). [1]
- OpenAI — GPT‑5 system card / GPT‑5.2 update (Dec 11, 2025). [2]
- Reuters coverage of GPT‑5.2 launch (Dec 11, 2025). [3]
Replace gpt-5.2-mini with a valid OpenAI model identifier.
The documentation correctly references gpt-5.2 at multiple lines, which is valid as of January 2026. However, gpt-5.2-mini (lines 151, 168) does not exist in OpenAI's official offerings. According to OpenAI's December 2025 announcement, the available GPT-5.2 variants are gpt-5.2-instant, gpt-5.2-thinking, and gpt-5.2-pro. Update these references to use one of the valid API model names.
🤖 Prompt for AI Agents
In @docs/guides/tree-shaking.md at line 46, The docs reference an invalid OpenAI
model identifier `gpt-5.2-mini`; update each occurrence (e.g., the adapter call
like openaiText('gpt-5.2-mini') at lines noted) to a valid GPT-5.2 variant such
as `gpt-5.2-instant` (or `gpt-5.2-thinking` / `gpt-5.2-pro`) so all adapter
entries and examples consistently use a supported model name.
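Per the web-query result above, the fix is a one-identifier change; a sketch (imports and message shape assumed as in earlier examples):

```ts
import { chat } from '@tanstack/ai' // import path assumed
import { openaiText } from '@tanstack/ai-openai/adapters'

const stream = chat({
  // Before (flagged as invalid): openaiText('gpt-5.2-mini')
  adapter: openaiText('gpt-5.2-instant'), // or 'gpt-5.2-thinking' / 'gpt-5.2-pro'
  messages: [{ role: 'user', content: 'Hi' }], // message shape assumed
})
```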
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @docs/guides/multimodal-content.md:
- Around line 106-107: The docs list a non-existent model
`gpt-5.2-audio-preview`; update the multimodal table so it only references valid
models: keep `gpt-5.2` and `gpt-5-mini` as text+image, and either remove the
`gpt-5.2-audio-preview` row entirely or replace it with a documented
audio-capable model such as `gpt-4o-audio-preview` (or another official
audio-preview model) and adjust the modalities column accordingly.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (30)
- docs/adapters/openai.md
- docs/api/ai.md
- docs/getting-started/overview.md
- docs/getting-started/quick-start.md
- docs/guides/agentic-cycle.md
- docs/guides/client-tools.md
- docs/guides/migration.md
- docs/guides/multimodal-content.md
- docs/guides/runtime-adapter-switching.md
- docs/guides/server-tools.md
- docs/guides/streaming.md
- docs/guides/structured-outputs.md
- docs/guides/tool-approval.md
- docs/guides/tool-architecture.md
- docs/guides/tools.md
- docs/guides/tree-shaking.md
- docs/protocol/chunk-definitions.md
- docs/protocol/http-stream-protocol.md
- docs/protocol/sse-protocol.md
- docs/reference/functions/chat.md
- docs/reference/functions/combineStrategies.md
- docs/reference/functions/maxIterations.md
- docs/reference/functions/streamToText.md
- docs/reference/functions/summarize.md
- docs/reference/functions/toHttpResponse.md
- docs/reference/functions/toHttpStream.md
- docs/reference/functions/toServerSentEventsResponse.md
- docs/reference/functions/untilFinishReason.md
- docs/reference/interfaces/SummarizeAdapter.md
- docs/reference/interfaces/TextAdapter.md
🚧 Files skipped from review as they are similar to previous changes (11)
- docs/reference/functions/toHttpStream.md
- docs/guides/agentic-cycle.md
- docs/reference/functions/streamToText.md
- docs/guides/runtime-adapter-switching.md
- docs/guides/tool-architecture.md
- docs/reference/functions/untilFinishReason.md
- docs/reference/functions/combineStrategies.md
- docs/getting-started/overview.md
- docs/guides/client-tools.md
- docs/reference/functions/toServerSentEventsResponse.md
- docs/reference/functions/maxIterations.md
🧰 Additional context used
🧠 Learnings (7)
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top
Applied to files:
docs/guides/streaming.md, docs/adapters/openai.md, docs/guides/migration.md, docs/guides/tree-shaking.md, docs/guides/tools.md, docs/guides/multimodal-content.md, docs/api/ai.md, docs/reference/functions/chat.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Applied to files:
docs/adapters/openai.md, docs/guides/migration.md, docs/getting-started/quick-start.md, docs/reference/functions/toHttpResponse.md, docs/api/ai.md, docs/protocol/sse-protocol.md, docs/protocol/chunk-definitions.md, docs/guides/server-tools.md, docs/reference/functions/chat.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Applied to files:
docs/guides/migration.md, docs/guides/tree-shaking.md, docs/guides/tools.md, docs/guides/multimodal-content.md, docs/reference/interfaces/TextAdapter.md, docs/reference/functions/summarize.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
Applied to files:
docs/guides/migration.md, docs/guides/tree-shaking.md, docs/reference/interfaces/TextAdapter.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size
Applied to files:
docs/guides/migration.md, docs/guides/tree-shaking.md
📚 Learning: 2025-12-27T21:39:29.563Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:29.563Z
Learning: The standard template comments in ai-ollama model-meta files (like "Models with text, image, audio, video (no document)") should not be modified to match individual model capabilities, as they are intentionally kept uniform across all model-meta-*.ts files for consistency, regardless of what each specific model actually supports.
Applied to files:
docs/guides/multimodal-content.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Applied to files:
docs/reference/interfaces/SummarizeAdapter.md
🪛 markdownlint-cli2 (0.18.1)
docs/protocol/sse-protocol.md
82-82: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
88-88: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (16)
docs/reference/functions/summarize.md (1)
45-45: LGTM! Model identifiers updated correctly. The documentation updates to use `gpt-5-mini` are correct; this model is OpenAI's faster, more cost-efficient version designed for well-defined tasks like summarization. The changes appropriately update all four code examples in this reference documentation.
Also applies to: 54-54, 63-63, 71-71
docs/reference/functions/toHttpResponse.md (1)
45-45: Same verification needed: confirm gpt-5.2 model availability. This example also references the `gpt-5.2` model. The verification request from the previous file applies here as well.
docs/guides/tools.md (1)
191-191: Same verification needed: confirm gpt-5.2 model availability. Both examples reference the `gpt-5.2` model. The verification request from the first file applies to these lines as well.
Also applies to: 283-283
docs/guides/tool-approval.md (1)
68-68: The gpt-5.2 model is valid and available. OpenAI announced GPT-5.2 on December 11, 2025, and it is publicly available via the OpenAI API under the aliases `gpt-5.2`, `gpt-5.2-chat-latest`, and `gpt-5.2-pro`. The documentation example is accurate.
docs/reference/functions/chat.md (1)
53-53: Model reference is valid. The gpt-5.2 model is available from OpenAI (announced December 11, 2025) and is correctly referenced in the examples across lines 53, 65, 74, and 85.
docs/reference/interfaces/SummarizeAdapter.md (1)
16-16: Verify consistency with PR objectives regarding model selection for this example. Both `gpt-5.2` and `gpt-5-mini` are valid, available OpenAI models as of January 2026. However, confirm whether the PR objectives specify a particular model for summarization examples and whether `gpt-5.2` aligns with those objectives.
docs/reference/interfaces/TextAdapter.md (1)
16-16: Documentation example update is consistent. The model name in the generic parameter example has been updated to match the broader documentation changes. Ensure `gpt-5.2` is a valid model identifier as noted in other files.
docs/guides/multimodal-content.md (1)
61-61: Model name updates are consistent with multimodal examples. The examples have been updated to use `gpt-5.2`, which should support the multimodal content demonstrated (text + image). Verification of model capabilities is requested in the modality support section above.
Also applies to: 285-285, 287-287
docs/guides/structured-outputs.md (1)
54-54: No issue found. `gpt-5.2` is a valid OpenAI model with full support for Structured Outputs, including schema-constrained JSON responses and function-calling. The documentation reference is accurate.
Likely an incorrect or invalid review comment.
docs/guides/tree-shaking.md (1)
46-46: No action needed. The OpenAI models `gpt-5.2` and `gpt-5-mini` are valid and publicly available (announced December 11, 2025). The documentation examples are correct.
Likely an incorrect or invalid review comment.
docs/adapters/openai.md (1)
22-22: Consistent with PR objectives; model validity verification needed. The updates correctly replace `gpt-4o` with `gpt-5.2` in chat examples and `gpt-4o-mini` with `gpt-5-mini` in the summarization example (line 144). Specialized models for audio/transcription (`whisper-1`, `tts-1`) and image generation (`gpt-image-1`) are appropriately left unchanged, which aligns with the PR description. The model identifier validity checks requested in previous files will confirm whether these updates are correct.
Also applies to: 38-38, 66-66, 95-95, 107-107, 144-144
docs/api/ai.md (1)
24-24: Both `gpt-5.2` and `gpt-5-mini` are valid OpenAI model identifiers confirmed as available in the API as of January 2026. These model references in the documentation are accurate and will not cause integration failures.
19-19: No action required. `gpt-5.2` is a valid and currently available OpenAI model identifier. OpenAI released GPT-5.2 on December 11, 2025, with the API identifier "gpt-5.2" available to developers. The documentation's use of this model identifier is accurate and reflects currently available APIs.
docs/protocol/http-stream-protocol.md (1)
86-88: LGTM! Valid model identifier updates. The documentation examples have been correctly updated to use the newer `gpt-5.2` model identifier, which is confirmed as a valid OpenAI model released in December 2025. All changes are documentation-only and maintain consistency throughout the file.
Also applies to: 94-95, 101-101, 133-137, 153-153, 184-184, 239-239, 363-365
docs/protocol/sse-protocol.md (1)
77-77: LGTM! Valid model identifier updates. The SSE protocol documentation has been correctly updated to use the `gpt-5.2` model identifier throughout all examples. The changes are consistent and maintain the integrity of the protocol documentation.
Also applies to: 83-83, 89-89, 123-127, 149-149, 177-177, 226-226, 307-311
docs/guides/migration.md (1)
33-33: LGTM! Migration guide updated with current model identifiers. The migration guide has been appropriately updated to use `gpt-5.2` throughout all code examples. This ensures developers following the migration guide will be using current, valid model identifiers when implementing the new API patterns.
Also applies to: 45-45, 52-52, 126-126, 141-141, 160-160, 174-174, 200-200, 214-214, 231-231, 233-233, 238-238, 260-260, 280-280, 374-374, 401-401
- `gpt-5.2`, `gpt-5-mini`: text, image
- `gpt-5.2-audio-preview`: text, image, audio
🧩 Analysis chain
🌐 Web query:
OpenAI gpt-5.2 gpt-5-mini gpt-5.2-audio-preview model capabilities modalities support
💡 Result:
Summary — current OpenAI docs (Jan 2026):
- GPT-5.2 (API name gpt-5.2 / gpt-5.2-chat-latest / gpt-5.2-pro): text I/O; image input only; audio not supported; video not supported. Supports Responses, Chat Completions, Realtime, Assistants, Batch, streaming, function-calling, tools (web search, file search, image generation, etc.), structured outputs; Pro/Thinking/Instant variants and reasoning-effort settings. [1][3]
- GPT-5 mini (gpt-5-mini): smaller/cost-optimized GPT-5 family model: text I/O; image input only; audio not supported; video not supported. Same core endpoints and many features (streaming, function calling, code interpreter support), but lower cost and limits. [2]
- "gpt-5.2-audio-preview": there is no public GPT-5.2 audio-preview model documented. OpenAI's audio-preview model listings (e.g., gpt-4o-audio-preview) are separate and show audio-capable models under distinct names. If you need audio I/O, use the documented audio-preview models (check OpenAI's audio model pages) or confirm whether OpenAI has released an explicit GPT-5 audio preview. [1][4]
Sources:
[1] OpenAI Platform — GPT‑5.2 model page.
[2] OpenAI Platform — GPT‑5 mini model page.
[3] OpenAI — Introducing GPT‑5.2 announcement.
[4] OpenAI Platform — example audio-preview model (gpt‑4o‑audio‑preview) for reference.
Remove or replace the non-existent gpt-5.2-audio-preview model reference.
According to OpenAI's official documentation, gpt-5.2 and gpt-5-mini correctly support text and image modalities. However, gpt-5.2-audio-preview does not exist in OpenAI's public documentation. If audio support is needed, use OpenAI's documented audio-preview models (e.g., gpt-4o-audio-preview) instead, or remove this entry if not yet available.
🤖 Prompt for AI Agents
In @docs/guides/multimodal-content.md around lines 106 - 107, The docs list a
non-existent model `gpt-5.2-audio-preview`; update the multimodal table so it
only references valid models: keep `gpt-5.2` and `gpt-5-mini` as text+image, and
either remove the `gpt-5.2-audio-preview` row entirely or replace it with a
documented audio-capable model such as `gpt-4o-audio-preview` (or another
official audio-preview model) and adjust the modalities column accordingly.
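Expressed as data, the corrected table this prompt asks for would contain only documented rows (taken from the review findings above):

```ts
// Modalities per model, per the verified capabilities quoted in this review.
const modalities: Record<string, Array<'text' | 'image' | 'audio'>> = {
  'gpt-5.2': ['text', 'image'],
  'gpt-5-mini': ['text', 'image'],
  'gpt-4o-audio-preview': ['text', 'image', 'audio'], // documented audio-capable model
}
```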
Summary
- `gpt-5.2` instead of `gpt-4o`
- `gpt-5-mini` instead of `gpt-4o-mini`
- Audio/transcription docs intentionally left unchanged (they reference specialized audio models: `gpt-4o-audio-preview`, `whisper-1`, etc.)

The Story Behind This PR 📖
You know how AI coding agents have knowledge cutoffs? Well, at their cutoff, `gpt-4o` was the latest and greatest OpenAI model. This means every AI agent out there defaults to suggesting `gpt-4o` in their code examples. I've had to fight AI agents so many times to stop using `gpt-4o` in my code that now whenever I see `gpt-4o` in documentation, it triggers me. 😅 So I decided to update all the examples to `gpt-5.2` - not just for my sanity, but also because `gpt-4o` is no longer the model readers should be copying from the docs.

Test Plan