
Conversation

@shkumbinhasani
Contributor

@shkumbinhasani shkumbinhasani commented Jan 12, 2026

Summary

  • Updated all documentation examples to use gpt-5.2 instead of gpt-4o (see the sketch after this list)
  • Updated summarization examples to use gpt-5-mini instead of gpt-4o-mini
  • Left audio/transcription docs unchanged (they use specialized models like gpt-4o-audio-preview, whisper-1, etc.)
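
For a sense of what each edit looks like, here is a minimal before/after sketch. The import paths and option shape are assumptions for illustration, not copied from the repo:

```ts
// Sketch only: import paths and the chat() option shape are assumed, not taken from the docs verbatim.
import { chat } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai/adapters'

// Before: const stream = chat({ adapter: openaiText('gpt-4o'), messages })
// After:
const stream = chat({
  adapter: openaiText('gpt-5.2'),
  messages: [{ role: 'user', content: 'Hello!' }],
})
```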

The Story Behind This PR 📖

You know how AI coding agents have knowledge cutoffs? Well, at their cutoff, gpt-4o was the latest and greatest OpenAI model. This means every AI agent out there defaults to suggesting gpt-4o in their code examples.

I've had to fight AI agents so many times to stop using gpt-4o in my code that now whenever I see gpt-4o in documentation, it triggers me. 😅

So I decided to update all the examples to gpt-5.2 - not just for my sanity, but also because:

  1. It keeps the docs feeling fresh and forward-looking
  2. It might help break the habit of AI agents always suggesting gpt-4o
  3. Future readers won't think the SDK only works with older models

Test Plan

  • Verified all changes are in documentation only
  • Confirmed audio/transcription files were left unchanged (different model naming)
  • No code changes, only markdown files

Summary by CodeRabbit

  • Documentation
    • Updated documentation examples across guides, tutorials, and API/reference pages to use gpt-5.2 (and gpt-5-mini where applicable) in place of gpt-4o. All changes are illustrative text/sample updates; no behavioral, API signature, or runtime logic changes.


@coderabbitai
Contributor

coderabbitai bot commented Jan 12, 2026

📝 Walkthrough


Documentation-only updates: replaced model identifier strings from gpt-4o/gpt-4o-mini to gpt-5.2/gpt-5-mini across examples in guides, API, protocol, and reference docs. No functional code, signatures, or control flow were modified. (≈26 files)

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Getting Started**<br>docs/getting-started/overview.md, docs/getting-started/quick-start.md | Replace chat adapter examples from gpt-4o → gpt-5.2. |
| **API & Adapters**<br>docs/adapters/openai.md, docs/api/ai.md | Update OpenAI adapter samples and API guide references to gpt-5.2 / gpt-5-mini. |
| **Guides: Tools & Agentic**<br>docs/guides/agentic-cycle.md, docs/guides/client-tools.md, docs/guides/server-tools.md, docs/guides/tool-approval.md, docs/guides/tool-architecture.md, docs/guides/tools.md | Change documented adapter model strings from gpt-4o → gpt-5.2 in tool-related examples. |
| **Guides: Core Features**<br>docs/guides/multimodal-content.md, docs/guides/runtime-adapter-switching.md, docs/guides/streaming.md, docs/guides/structured-outputs.md, docs/guides/tree-shaking.md | Update model identifiers and modality mappings to gpt-5.2 / gpt-5-mini in feature guides. |
| **Migration Guide**<br>docs/guides/migration.md | Revise migration examples to use activity-specific adapters (e.g., openaiText('gpt-5.2')) and updated example option shapes; model strings updated throughout. |
| **Protocol Documentation**<br>docs/protocol/chunk-definitions.md, docs/protocol/http-stream-protocol.md, docs/protocol/sse-protocol.md | Replace JSON/NDJSON examples and stream chunks to use gpt-5.2; adapter snippets updated accordingly. |
| **Reference: Functions**<br>docs/reference/functions/chat.md, docs/reference/functions/combineStrategies.md, docs/reference/functions/maxIterations.md, docs/reference/functions/streamToText.md, docs/reference/functions/summarize.md, docs/reference/functions/toHttpResponse.md, docs/reference/functions/toHttpStream.md, docs/reference/functions/toServerSentEventsResponse.md, docs/reference/functions/untilFinishReason.md | Update function/example model strings from gpt-4o / gpt-4o-mini → gpt-5.2 / gpt-5-mini. |
| **Reference: Interfaces**<br>docs/reference/interfaces/SummarizeAdapter.md, docs/reference/interfaces/TextAdapter.md | Adjust generic example parameter values from gpt-4o → gpt-5.2. |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

Suggested reviewers

  • AlemTuzlak
  • jherr

Poem

🐰 I hopped through docs, a gentle clue,
Swapping four-oh for five-point-two,
Examples shine, all strings aligned,
A tidy change, no code unkind —
nibble, hop, and celebrate anew! 🥕✨

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 inconclusive

❌ Failed checks (1 inconclusive)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Description check | ❓ Inconclusive | The description provides clear motivation and context, but the required checklist items are not completed and it does not indicate whether a changeset was generated. | Complete the checklist by confirming whether the Contributing guide steps were followed and tests were run, and clarify whether a changeset is needed for documentation-only changes. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title clearly and concisely summarizes the main change: updating OpenAI model examples from gpt-4o to gpt-5.2 across documentation. |
| Docstring coverage | ✅ Passed | No functions were found in the changed files, so the docstring coverage check was skipped. |




Update documentation examples to use gpt-5.2 instead of gpt-4o,
and gpt-5-mini instead of gpt-4o-mini for summarization examples.

Audio/transcription docs were intentionally left unchanged as they
reference specialized audio models (gpt-4o-audio-preview, whisper-1, etc.)
that have different naming conventions.
@shkumbinhasani shkumbinhasani force-pushed the docs/update-openai-model-examples branch from 6272491 to e6e670e on January 12, 2026 at 21:21

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @docs/guides/tree-shaking.md:
- Line 46: The docs reference an invalid OpenAI model identifier `gpt-5.2-mini`;
update each occurrence (e.g., the adapter call like openaiText('gpt-5.2-mini')
at lines noted) to a valid GPT-5.2 variant such as `gpt-5.2-instant` (or
`gpt-5.2-thinking` / `gpt-5.2-pro`) so all adapter entries and examples
consistently use a supported model name.
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0e37d8b and 6272491.

📒 Files selected for processing (30)
  • docs/adapters/openai.md
  • docs/api/ai.md
  • docs/getting-started/overview.md
  • docs/getting-started/quick-start.md
  • docs/guides/agentic-cycle.md
  • docs/guides/client-tools.md
  • docs/guides/migration.md
  • docs/guides/multimodal-content.md
  • docs/guides/runtime-adapter-switching.md
  • docs/guides/server-tools.md
  • docs/guides/streaming.md
  • docs/guides/structured-outputs.md
  • docs/guides/tool-approval.md
  • docs/guides/tool-architecture.md
  • docs/guides/tools.md
  • docs/guides/tree-shaking.md
  • docs/protocol/chunk-definitions.md
  • docs/protocol/http-stream-protocol.md
  • docs/protocol/sse-protocol.md
  • docs/reference/functions/chat.md
  • docs/reference/functions/combineStrategies.md
  • docs/reference/functions/maxIterations.md
  • docs/reference/functions/streamToText.md
  • docs/reference/functions/summarize.md
  • docs/reference/functions/toHttpResponse.md
  • docs/reference/functions/toHttpStream.md
  • docs/reference/functions/toServerSentEventsResponse.md
  • docs/reference/functions/untilFinishReason.md
  • docs/reference/interfaces/SummarizeAdapter.md
  • docs/reference/interfaces/TextAdapter.md
🧰 Additional context used
🧠 Learnings (9)
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top

Applied to files:

  • docs/getting-started/overview.md
  • docs/reference/functions/chat.md
  • docs/api/ai.md
  • docs/guides/tools.md
  • docs/adapters/openai.md
  • docs/guides/multimodal-content.md
  • docs/guides/migration.md
  • docs/guides/tree-shaking.md
  • docs/guides/streaming.md
  • docs/guides/client-tools.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Applied to files:

  • docs/reference/functions/chat.md
  • docs/reference/functions/toHttpResponse.md
  • docs/api/ai.md
  • docs/adapters/openai.md
  • docs/protocol/chunk-definitions.md
  • docs/reference/functions/toServerSentEventsResponse.md
  • docs/reference/functions/untilFinishReason.md
  • docs/getting-started/quick-start.md
  • docs/protocol/sse-protocol.md
  • docs/guides/migration.md
  • docs/guides/server-tools.md
  • docs/reference/functions/toHttpStream.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Applied to files:

  • docs/guides/tools.md
  • docs/reference/functions/summarize.md
  • docs/guides/multimodal-content.md
  • docs/guides/migration.md
  • docs/guides/runtime-adapter-switching.md
  • docs/guides/tree-shaking.md
  • docs/reference/interfaces/TextAdapter.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • docs/reference/interfaces/SummarizeAdapter.md
  • docs/guides/tree-shaking.md
📚 Learning: 2025-12-27T21:39:29.563Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:29.563Z
Learning: The standard template comments in ai-ollama model-meta files (like "Models with text, image, audio, video (no document)") should not be modified to match individual model capabilities, as they are intentionally kept uniform across all model-meta-*.ts files for consistency, regardless of what each specific model actually supports.

Applied to files:

  • docs/guides/multimodal-content.md
  • docs/reference/interfaces/TextAdapter.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Maintain type safety through multimodal content support (image, audio, video, document) with model capability awareness

Applied to files:

  • docs/guides/multimodal-content.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters

Applied to files:

  • docs/guides/migration.md
  • docs/guides/runtime-adapter-switching.md
  • docs/guides/tree-shaking.md
  • docs/reference/interfaces/TextAdapter.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size

Applied to files:

  • docs/guides/tree-shaking.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Implement isomorphic tool system using `toolDefinition()` with `.server()` and `.client()` implementations for dual-environment execution

Applied to files:

  • docs/guides/client-tools.md
🪛 markdownlint-cli2 (0.18.1)
docs/protocol/sse-protocol.md

82-82: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


88-88: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🔇 Additional comments (25)
docs/reference/functions/maxIterations.md (1)

35-35: LGTM! Model update is correct.

The update to gpt-5.2 is accurate and aligns with OpenAI's latest model offerings. The example correctly demonstrates the maxIterations agent loop strategy with the current flagship model.

docs/reference/functions/toHttpResponse.md (1)

45-45: LGTM! Model reference updated correctly.

The update to gpt-5.2 is valid. The example accurately demonstrates converting a chat stream to an HTTP response format using the current OpenAI flagship model.
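
For readers skimming the thread, the pattern this example documents looks roughly like the sketch below: a route handler that streams a chat completion and wraps it in an HTTP response. Import paths and exact signatures are assumptions based on the function names cited in this review.

```ts
// Sketch only: import paths and signatures are assumed, not verified against the SDK.
import { chat, toHttpResponse } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai/adapters'

export async function POST(request: Request): Promise<Response> {
  const { messages } = await request.json()

  // Stream a completion from the flagship model the updated docs now reference.
  const stream = chat({
    adapter: openaiText('gpt-5.2'),
    messages,
  })

  // Wrap the chunk stream in a standard HTTP Response.
  return toHttpResponse(stream)
}
```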

docs/reference/functions/untilFinishReason.md (1)

35-35: LGTM! Documentation update is accurate.

The model reference has been correctly updated to gpt-5.2. The example properly illustrates the untilFinishReason strategy with OpenAI's current flagship model.

docs/guides/tools.md (1)

191-191: GPT-5.2 model is confirmed to exist and is publicly available.

OpenAI released GPT-5.2 on December 11, 2025, and it is available via the OpenAI API for developers. The code examples referencing this model are accurate.

docs/reference/functions/toHttpStream.md (1)

45-45: The gpt-5.2 model reference is accurate. OpenAI released GPT-5.2 in December 2025 with the official model identifiers gpt-5.2, gpt-5.2-chat-latest, and gpt-5.2-pro. The code example in the documentation is valid and will work for users.

docs/guides/tool-approval.md (1)

68-68: No action needed—GPT-5.2 is a valid model.

OpenAI released GPT-5.2 on December 11, 2025, and it is available to developers via the API. The documentation reference is accurate.

docs/guides/agentic-cycle.md (1)

126-126: No issues found. The gpt-5.2 model was announced by OpenAI on December 11, 2025, and is available in the API. The documentation example is accurate.

docs/reference/functions/streamToText.md (1)

38-38: LGTM! Model reference updated correctly.

The example now uses the latest GPT-5.2 model, which is appropriate for documentation updates.

docs/reference/functions/summarize.md (1)

45-45: LGTM! Summarization examples updated correctly.

All examples now consistently use gpt-5-mini, which is the correct model name for the faster, more cost-efficient version of GPT-5. The updates are appropriate for summarization tasks.

Also applies to: 54-54, 63-63, 71-71
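
To make the summarization examples concrete, a rough sketch is shown below; the adapter factory name `openaiSummarize` and the option names are hypothetical placeholders, since the real snippets live in docs/reference/functions/summarize.md.

```ts
// Sketch only: `openaiSummarize` and the option names are hypothetical placeholders.
import { summarize } from '@tanstack/ai'
import { openaiSummarize } from '@tanstack/ai-openai/adapters'

const result = await summarize({
  adapter: openaiSummarize('gpt-5-mini'), // cost-efficient model suited to well-defined tasks
  text: 'A long transcript or article to condense...',
})

console.log(result)
```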

docs/reference/interfaces/SummarizeAdapter.md (1)

16-16: LGTM! Type parameter example updated correctly.

The TModel generic parameter example now references gpt-5.2, keeping the documentation current with the latest OpenAI models.

docs/adapters/openai.md (1)

22-22: The gpt-5.2 model identifier is valid and publicly available as of January 2026. OpenAI's API documentation confirms this model, along with variants like gpt-5.2-chat-latest and gpt-5.2-pro. The documentation examples are accurate.

docs/guides/server-tools.md (1)

152-152: Model identifier update looks consistent.

The changes from gpt-4o to gpt-5.2 are applied consistently in both examples. Pending verification that the gpt-5.2 model exists and is publicly available (see verification request in combineStrategies.md).

Also applies to: 210-210

docs/guides/streaming.md (1)

18-18: Model identifier update looks consistent.

The changes from gpt-4o to gpt-5.2 are applied consistently in both streaming examples. Pending verification that the gpt-5.2 model exists and is publicly available (see verification request in combineStrategies.md).

Also applies to: 40-40

docs/reference/functions/combineStrategies.md (1)

36-36: No action needed — GPT-5.2 is confirmed as a publicly available OpenAI model (released December 11, 2025) and the documentation update is valid.
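
Taken together, the three strategy helpers reviewed here (`maxIterations`, `untilFinishReason`, `combineStrategies`) compose roughly as sketched below; the `strategy` option name and the argument shapes are assumptions for illustration.

```ts
// Sketch only: the `strategy` option name and helper arguments are assumed, not verified.
import { chat, combineStrategies, maxIterations, untilFinishReason } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai/adapters'

const stream = chat({
  adapter: openaiText('gpt-5.2'),
  messages: [{ role: 'user', content: 'Plan a weekend trip to Lisbon.' }],
  // Stop the agent loop after 5 iterations or once the model reports it finished,
  // whichever comes first.
  strategy: combineStrategies(maxIterations(5), untilFinishReason('stop')),
})
```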

docs/protocol/http-stream-protocol.md (2)

86-88: Documentation examples consistently updated.

All NDJSON example chunks have been systematically updated to reference gpt-5.2, maintaining consistency across different chunk types (content, tool_call, done, error).

Also applies to: 94-95, 101-101, 133-137, 153-153, 363-365


184-184: Code examples updated to use gpt-5.2.

The implementation examples correctly demonstrate the new model identifier in the openaiText() adapter calls.

Also applies to: 239-239
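
For orientation, the NDJSON examples referenced above are one JSON object per line, each carrying the model identifier. The field names below are illustrative assumptions, not the documented chunk schema.

```ts
// Sketch only: these field names approximate the chunk shapes and are not copied from the spec.
const exampleChunks = [
  { type: 'content', model: 'gpt-5.2', delta: 'Hello' },
  { type: 'tool_call', model: 'gpt-5.2', name: 'getWeather', arguments: '{"city":"Paris"}' },
  { type: 'done', model: 'gpt-5.2', finishReason: 'stop' },
]

// NDJSON serialization: one JSON document per line.
const ndjson = exampleChunks.map((chunk) => JSON.stringify(chunk)).join('\n')
console.log(ndjson)
```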

docs/api/ai.md (2)

55-55: Clarify model choice for summarization examples.

The PR description states that "summarization examples should use gpt-5-mini instead of gpt-4o-mini", but these lines use gpt-5.2 instead of gpt-5.2-mini. Please confirm whether:

  • Summarization should use gpt-5.2 (the main model)
  • Summarization should use gpt-5.2-mini (the smaller, cost-effective variant)
  • Or if the PR description should be updated to reflect the actual changes

Also applies to: 329-329


24-24: API documentation examples consistently updated.

All code examples across different API functions (chat(), toolDefinition(), SSE utilities, and usage examples) have been systematically updated to use gpt-5.2.

Also applies to: 34-34, 102-102, 115-115, 143-143, 170-170, 194-194, 277-277, 283-283, 291-291, 313-313

docs/protocol/chunk-definitions.md (1)

19-19: The documentation is accurate. gpt-5.2 is a valid, released OpenAI model (released December 2025) and represents the current frontier release as of January 2026. The model identifier follows OpenAI's standard naming convention, with multiple variants available (gpt-5.2-instant, gpt-5.2-thinking, gpt-5.2-pro, gpt-5.2-codex). No API compatibility changes are required for this model reference.

docs/guides/client-tools.md (1)

105-105: LGTM - Model references updated consistently.

The adapter model references have been updated consistently from gpt-4o to gpt-5.2 in both the POST handler example and the hybrid tools examples. The changes align with the broader documentation updates across the PR.

Also applies to: 300-303
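
The hybrid-tools examples build on the isomorphic `toolDefinition()` API noted in the learnings above (server and client implementations of the same tool). A rough sketch follows; the definition shape and handler signatures are assumptions.

```ts
// Sketch only: the definition shape and handler signatures are assumed for illustration.
import { chat, toolDefinition } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai/adapters'

const getWeather = toolDefinition({
  name: 'getWeather',
  description: 'Look up the current weather for a city',
}).server(async ({ city }: { city: string }) => {
  // Server-side implementation; a .client() counterpart could run in the browser.
  return { city, tempC: 18 }
})

const stream = chat({
  adapter: openaiText('gpt-5.2'),
  messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
  tools: [getWeather],
})
```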

docs/guides/multimodal-content.md (2)

61-61: LGTM - Adapter usage updated consistently.

The adapter model references and accompanying comments have been updated consistently to reference gpt-5.2 throughout the multimodal content examples.

Also applies to: 285-287


106-107: Remove references to non-existent model variants.

The documentation claims support for gpt-5.2-mini and gpt-5.2-audio-preview, but OpenAI's current model lineup does not include these variants. As of December 2025, the gpt-5.2 family consists of text + image-capable models (Instant, Thinking, Pro variants), not separate "mini" or "audio-preview" versions. Audio functionality is provided through distinct models like gpt-realtime and gpt-4o-audio-preview, not through gpt-5.2-audio-preview. Update the documentation to reflect the actual available models and their actual capabilities.

Likely an incorrect or invalid review comment.
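
For readers unfamiliar with the multimodal guide, these examples send mixed text-and-image content to a vision-capable model. The content-part shape below is an assumption, not quoted from the guide.

```ts
// Sketch only: the message content-part shape is assumed for illustration.
import { chat } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai/adapters'

const stream = chat({
  adapter: openaiText('gpt-5.2'), // text + image input, per the review above
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this picture?' },
        { type: 'image', url: 'https://example.com/photo.jpg' },
      ],
    },
  ],
})
```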

docs/protocol/sse-protocol.md (1)

77-77: No action needed—gpt-5.2 is a valid and available model.

OpenAI released GPT-5.2 on December 11, 2025, and it is currently available via the API in multiple variants (Instant, Thinking, and Pro). The model references in the SSE protocol documentation are accurate.
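
The SSE examples pair with the `toServerSentEventsResponse` helper listed among the changed reference pages. A sketch with assumed import paths and signatures:

```ts
// Sketch only: import paths and the helper's signature are assumptions.
import { chat, toServerSentEventsResponse } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai/adapters'

export async function POST(request: Request): Promise<Response> {
  const { messages } = await request.json()
  const stream = chat({ adapter: openaiText('gpt-5.2'), messages })

  // Emit the chunk stream as text/event-stream for SSE clients.
  return toServerSentEventsResponse(stream)
}
```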

docs/guides/runtime-adapter-switching.md (1)

25-25: Model references updated consistently.

The model references have been updated from gpt-4o to gpt-5.2 and from gpt-4o-mini to gpt-5.2-mini consistently across all adapter examples. Pending verification of model availability (see previous comment).

Also applies to: 43-47, 75-75, 135-135, 166-166, 182-182
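
The pattern this guide documents is choosing an adapter per request. A sketch under assumed option names, using gpt-5-mini rather than the gpt-5.2-mini identifier flagged elsewhere in this review:

```ts
// Sketch only: the selection logic and option names are illustrative assumptions.
import { chat } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai/adapters'

function pickAdapter(tier: 'fast' | 'quality') {
  // Cheaper model for quick answers, flagship model for harder requests.
  return tier === 'fast' ? openaiText('gpt-5-mini') : openaiText('gpt-5.2')
}

const stream = chat({
  adapter: pickAdapter('quality'),
  messages: [{ role: 'user', content: 'Summarize the attached report.' }],
})
```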

docs/guides/migration.md (1)

33-33: Migration examples updated consistently.

All migration examples have been updated to use gpt-5.2 instead of gpt-4o. The before/after patterns clearly demonstrate the API changes while using the updated model identifiers. Pending verification of model availability.

Also applies to: 45-45, 52-52, 126-126, 141-141, 160-160, 174-174, 200-200, 214-214, 231-238, 260-260, 280-280, 374-374, 401-401


```diff
 const stream = chat({
-  adapter: openaiText('gpt-4o'),
+  adapter: openaiText('gpt-5.2'),
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

OpenAI GPT models available January 2026 gpt-5.2

💡 Result:

As of January 12, 2026 the latest OpenAI release is GPT‑5.2 (available as GPT‑5.2 Instant, GPT‑5.2 Thinking, and GPT‑5.2 Pro; API names include gpt-5.2-instant and gpt-5.2-thinking). Earlier GPT families (GPT‑5.1, GPT‑5) and GPT‑4.1 remain available to developers/users via the API/ChatGPT. [1][2][3]

Sources:

  • OpenAI — “Introducing GPT‑5.2” (Dec 11, 2025). [1]
  • OpenAI — GPT‑5 system card / GPT‑5.2 update (Dec 11, 2025). [2]
  • Reuters coverage of GPT‑5.2 launch (Dec 11, 2025). [3]

Replace gpt-5.2-mini with a valid OpenAI model identifier.

The documentation correctly references gpt-5.2 at multiple lines, which is valid as of January 2026. However, gpt-5.2-mini (lines 151, 168) does not exist in OpenAI's official offerings. According to OpenAI's December 2025 announcement, the available GPT-5.2 variants are gpt-5.2-instant, gpt-5.2-thinking, and gpt-5.2-pro. Update these references to use one of the valid API model names.

🤖 Prompt for AI Agents
In @docs/guides/tree-shaking.md at line 46, The docs reference an invalid OpenAI
model identifier `gpt-5.2-mini`; update each occurrence (e.g., the adapter call
like openaiText('gpt-5.2-mini') at lines noted) to a valid GPT-5.2 variant such
as `gpt-5.2-instant` (or `gpt-5.2-thinking` / `gpt-5.2-pro`) so all adapter
entries and examples consistently use a supported model name.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @docs/guides/multimodal-content.md:
- Around line 106-107: The docs list a non-existent model
`gpt-5.2-audio-preview`; update the multimodal table so it only references valid
models: keep `gpt-5.2` and `gpt-5-mini` as text+image, and either remove the
`gpt-5.2-audio-preview` row entirely or replace it with a documented
audio-capable model such as `gpt-4o-audio-preview` (or another official
audio-preview model) and adjust the modalities column accordingly.
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6272491 and e6e670e.

📒 Files selected for processing (30)
  • docs/adapters/openai.md
  • docs/api/ai.md
  • docs/getting-started/overview.md
  • docs/getting-started/quick-start.md
  • docs/guides/agentic-cycle.md
  • docs/guides/client-tools.md
  • docs/guides/migration.md
  • docs/guides/multimodal-content.md
  • docs/guides/runtime-adapter-switching.md
  • docs/guides/server-tools.md
  • docs/guides/streaming.md
  • docs/guides/structured-outputs.md
  • docs/guides/tool-approval.md
  • docs/guides/tool-architecture.md
  • docs/guides/tools.md
  • docs/guides/tree-shaking.md
  • docs/protocol/chunk-definitions.md
  • docs/protocol/http-stream-protocol.md
  • docs/protocol/sse-protocol.md
  • docs/reference/functions/chat.md
  • docs/reference/functions/combineStrategies.md
  • docs/reference/functions/maxIterations.md
  • docs/reference/functions/streamToText.md
  • docs/reference/functions/summarize.md
  • docs/reference/functions/toHttpResponse.md
  • docs/reference/functions/toHttpStream.md
  • docs/reference/functions/toServerSentEventsResponse.md
  • docs/reference/functions/untilFinishReason.md
  • docs/reference/interfaces/SummarizeAdapter.md
  • docs/reference/interfaces/TextAdapter.md
🚧 Files skipped from review as they are similar to previous changes (11)
  • docs/reference/functions/toHttpStream.md
  • docs/guides/agentic-cycle.md
  • docs/reference/functions/streamToText.md
  • docs/guides/runtime-adapter-switching.md
  • docs/guides/tool-architecture.md
  • docs/reference/functions/untilFinishReason.md
  • docs/reference/functions/combineStrategies.md
  • docs/getting-started/overview.md
  • docs/guides/client-tools.md
  • docs/reference/functions/toServerSentEventsResponse.md
  • docs/reference/functions/maxIterations.md
🧰 Additional context used
🧠 Learnings (7)
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Implement framework integrations using the headless `tanstack/ai-client` for state management with framework-specific hooks (useChat) on top

Applied to files:

  • docs/guides/streaming.md
  • docs/adapters/openai.md
  • docs/guides/migration.md
  • docs/guides/tree-shaking.md
  • docs/guides/tools.md
  • docs/guides/multimodal-content.md
  • docs/api/ai.md
  • docs/reference/functions/chat.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Applied to files:

  • docs/adapters/openai.md
  • docs/guides/migration.md
  • docs/getting-started/quick-start.md
  • docs/reference/functions/toHttpResponse.md
  • docs/api/ai.md
  • docs/protocol/sse-protocol.md
  • docs/protocol/chunk-definitions.md
  • docs/guides/server-tools.md
  • docs/reference/functions/chat.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Applied to files:

  • docs/guides/migration.md
  • docs/guides/tree-shaking.md
  • docs/guides/tools.md
  • docs/guides/multimodal-content.md
  • docs/reference/interfaces/TextAdapter.md
  • docs/reference/functions/summarize.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters

Applied to files:

  • docs/guides/migration.md
  • docs/guides/tree-shaking.md
  • docs/reference/interfaces/TextAdapter.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size

Applied to files:

  • docs/guides/migration.md
  • docs/guides/tree-shaking.md
📚 Learning: 2025-12-27T21:39:29.563Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:29.563Z
Learning: The standard template comments in ai-ollama model-meta files (like "Models with text, image, audio, video (no document)") should not be modified to match individual model capabilities, as they are intentionally kept uniform across all model-meta-*.ts files for consistency, regardless of what each specific model actually supports.

Applied to files:

  • docs/guides/multimodal-content.md
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • docs/reference/interfaces/SummarizeAdapter.md
🪛 markdownlint-cli2 (0.18.1)
docs/protocol/sse-protocol.md

82-82: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


88-88: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🔇 Additional comments (16)
docs/reference/functions/summarize.md (1)

45-45: LGTM! Model identifiers updated correctly.

The documentation updates to use gpt-5-mini are correct—this model is OpenAI's faster, more cost-efficient version designed for well-defined tasks like summarization. The changes appropriately update all four code examples in this reference documentation.

Also applies to: 54-54, 63-63, 71-71

docs/reference/functions/toHttpResponse.md (1)

45-45: Same verification needed: confirm gpt-5.2 model availability.

This example also references the gpt-5.2 model. The verification request from the previous file applies here as well.

docs/guides/tools.md (1)

191-191: Same verification needed: confirm gpt-5.2 model availability.

Both examples reference the gpt-5.2 model. The verification request from the first file applies to these lines as well.

Also applies to: 283-283

docs/guides/tool-approval.md (1)

68-68: The gpt-5.2 model is valid and available. OpenAI announced GPT-5.2 on December 11, 2025, and it is publicly available via the OpenAI API under the aliases gpt-5.2, gpt-5.2-chat-latest, and gpt-5.2-pro. The documentation example is accurate.

docs/reference/functions/chat.md (1)

53-53: Model reference is valid. The gpt-5.2 model is available from OpenAI (announced December 11, 2025) and is correctly referenced in the examples across lines 53, 65, 74, and 85.

docs/reference/interfaces/SummarizeAdapter.md (1)

16-16: Verify consistency with PR objectives regarding model selection for this example.

Both gpt-5.2 and gpt-5-mini are valid, available OpenAI models as of January 2026. However, confirm whether the PR objectives specify a particular model for summarization examples and whether gpt-5.2 aligns with those objectives.

docs/reference/interfaces/TextAdapter.md (1)

16-16: Documentation example update is consistent.

The model name in the generic parameter example has been updated to match the broader documentation changes. Ensure gpt-5.2 is a valid model identifier as noted in other files.
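
The TModel generic mentioned here is what lets the model string drive type-safe, per-model provider options. A sketch of what the updated example presumably shows; the exact generic signature is assumed.

```ts
// Sketch only: the exact TextAdapter generic signature is assumed.
import type { TextAdapter } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai/adapters'

// The model literal flows into the adapter type, enabling model-aware provider options.
const adapter: TextAdapter<'gpt-5.2'> = openaiText('gpt-5.2')
```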

docs/guides/multimodal-content.md (1)

61-61: Model name updates are consistent with multimodal examples.

The examples have been updated to use gpt-5.2, which should support the multimodal content demonstrated (text + image). Verification of model capabilities is requested in the modality support section above.

Also applies to: 285-285, 287-287

docs/guides/structured-outputs.md (1)

54-54: No issue found. gpt-5.2 is a valid OpenAI model with full support for Structured Outputs, including schema-constrained JSON responses and function-calling. The documentation reference is accurate.

Likely an incorrect or invalid review comment.
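
Structured outputs here means constraining the model's reply to a schema. A rough sketch follows, where the `outputSchema` option and the use of zod are assumptions rather than the SDK's documented API.

```ts
// Sketch only: `outputSchema` is a hypothetical option name; the real guide may differ.
import { chat } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai/adapters'
import { z } from 'zod'

const Recipe = z.object({
  title: z.string(),
  ingredients: z.array(z.string()),
})

const stream = chat({
  adapter: openaiText('gpt-5.2'),
  messages: [{ role: 'user', content: 'Give me a pancake recipe.' }],
  outputSchema: Recipe, // ask the model for schema-constrained JSON
})
```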

docs/guides/tree-shaking.md (1)

46-46: No action needed. The OpenAI models gpt-5.2 and gpt-5-mini are valid and publicly available (announced December 11, 2025). The documentation examples are correct.

Likely an incorrect or invalid review comment.
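
The tree-shaking guide is about importing only the adapter a route actually needs from a capability-specific subpath, per the repo learnings quoted in this review. Under those assumptions:

```ts
// Sketch only: package and subpath names follow the learnings quoted above and may differ.
// Importing a single capability-specific adapter keeps unused adapters out of the bundle.
import { openaiText } from '@tanstack/ai-openai/adapters'

const textAdapter = openaiText('gpt-5.2')

// A summarize-only route could instead import just a summarize adapter
// (e.g. a hypothetical openaiSummarize('gpt-5-mini')) without pulling in the rest.
```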

docs/adapters/openai.md (1)

22-22: Consistent with PR objectives; model validity verification needed.

The updates correctly replace gpt-4o with gpt-5.2 in chat examples and gpt-4o-mini with gpt-5-mini in the summarization example (line 144). Specialized models for audio/transcription (whisper-1, tts-1) and image generation (gpt-image-1) are appropriately left unchanged, which aligns with the PR description.

The model identifier validity checks requested in previous files will confirm whether these updates are correct.

Also applies to: 38-38, 66-66, 95-95, 107-107, 144-144

docs/api/ai.md (1)

24-24: Both gpt-5.2 and gpt-5-mini are valid OpenAI model identifiers confirmed as available in the API as of January 2026. These model references in the documentation are accurate and will not cause integration failures.

docs/protocol/chunk-definitions.md (1)

19-19: No action required — gpt-5.2 is a valid and currently available OpenAI model identifier.

OpenAI released GPT-5.2 on December 11, 2025, with the API identifier "gpt-5.2" available to developers. The documentation's use of this model identifier is accurate and reflects currently available APIs.

docs/protocol/http-stream-protocol.md (1)

86-88: LGTM! Valid model identifier updates.

The documentation examples have been correctly updated to use the newer gpt-5.2 model identifier, which is confirmed as a valid OpenAI model released in December 2025. All changes are documentation-only and maintain consistency throughout the file.

Also applies to: 94-95, 101-101, 133-137, 153-153, 184-184, 239-239, 363-365

docs/protocol/sse-protocol.md (1)

77-77: LGTM! Valid model identifier updates.

The SSE protocol documentation has been correctly updated to use the gpt-5.2 model identifier throughout all examples. The changes are consistent and maintain the integrity of the protocol documentation.

Also applies to: 83-83, 89-89, 123-127, 149-149, 177-177, 226-226, 307-311

docs/guides/migration.md (1)

33-33: LGTM! Migration guide updated with current model identifiers.

The migration guide has been appropriately updated to use gpt-5.2 throughout all code examples. This ensures developers following the migration guide will be using current, valid model identifiers when implementing the new API patterns.

Also applies to: 45-45, 52-52, 126-126, 141-141, 160-160, 174-174, 200-200, 214-214, 231-231, 233-233, 238-238, 260-260, 280-280, 374-374, 401-401

Comment on lines +106 to +107

```diff
+- `gpt-5.2`, `gpt-5-mini`: text, image
+- `gpt-5.2-audio-preview`: text, image, audio
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

OpenAI gpt-5.2 gpt-5-mini gpt-5.2-audio-preview model capabilities modalities support

💡 Result:

Summary — current OpenAI docs (Jan 2026):

  • GPT-5.2 (API name gpt-5.2 / gpt-5.2-chat-latest / gpt-5.2-pro): text I/O; image input only; audio not supported; video not supported. Supports Responses, Chat Completions, Realtime, Assistants, Batch, streaming, function-calling, tools (web search, file search, image generation, etc.), structured outputs; Pro/Thinking/Instant variants and reasoning-effort settings. [1][3]

  • GPT-5 mini (gpt-5-mini): smaller/cost‑optimized GPT‑5 family model — text I/O; image input only; audio not supported; video not supported. Same core endpoints and many features (streaming, function calling, code interpreter support), but lower cost and limits. [2]

  • “gpt-5.2-audio-preview”: there is no public GPT‑5.2 audio-preview model documented — OpenAI’s audio-preview model listings (e.g., gpt-4o-audio-preview) are separate and show audio-capable models under distinct names. If you need audio I/O, use the documented audio-preview models (check OpenAI’s audio model pages) or confirm whether OpenAI has released an explicit GPT‑5 audio preview. [1][4]

Sources:
[1] OpenAI Platform — GPT‑5.2 model page.
[2] OpenAI Platform — GPT‑5 mini model page.
[3] OpenAI — Introducing GPT‑5.2 announcement.
[4] OpenAI Platform — example audio-preview model (gpt‑4o‑audio‑preview) for reference.


Remove or replace the non-existent gpt-5.2-audio-preview model reference.

According to OpenAI's official documentation, gpt-5.2 and gpt-5-mini correctly support text and image modalities. However, gpt-5.2-audio-preview does not exist in OpenAI's public documentation. If audio support is needed, use OpenAI's documented audio-preview models (e.g., gpt-4o-audio-preview) instead, or remove this entry if not yet available.

🤖 Prompt for AI Agents
In @docs/guides/multimodal-content.md around lines 106 - 107, The docs list a
non-existent model `gpt-5.2-audio-preview`; update the multimodal table so it
only references valid models: keep `gpt-5.2` and `gpt-5-mini` as text+image, and
either remove the `gpt-5.2-audio-preview` row entirely or replace it with a
documented audio-capable model such as `gpt-4o-audio-preview` (or another
official audio-preview model) and adjust the modalities column accordingly.

@AlemTuzlak AlemTuzlak merged commit 8e93ce2 into TanStack:main Jan 13, 2026
2 checks passed