feat: add output_text property on Response object
#42
Conversation
Merged

Mirroring the Python SDK. Used by integration tests.
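The `output_text` convenience property concatenates the text of every `output_text` content part across the response's message output items. A minimal TypeScript sketch of that aggregation, using illustrative type shapes rather than the SDK's actual definitions:

```typescript
// Sketch only: these types are assumptions about the ResponseObject shape,
// not the SDK's actual definitions.
interface OutputTextPart {
  type: 'output_text';
  text: string;
}

interface MessageItem {
  type: 'message';
  content: Array<OutputTextPart | { type: string }>;
}

interface ResponseObjectLike {
  output: Array<MessageItem | { type: string }>;
}

// Concatenate the text of every output_text part across message items.
function aggregateOutputText(response: ResponseObjectLike): string {
  return response.output
    .filter((item): item is MessageItem => item.type === 'message')
    .flatMap((item) => item.content)
    .filter((part): part is OutputTextPart => part.type === 'output_text')
    .map((part) => part.text)
    .join('');
}
```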
ashwinb added a commit that referenced this pull request on Nov 19, 2025:
Adds a simple helper function to extract aggregated text output from ResponseObject instances. This is particularly useful for streaming responses, where chunk.response objects don't automatically have the output_text property that non-streaming responses get from PR #42.

Usage:

```typescript
const stream = await client.responses.create({ stream: true, ... });
for await (const chunk of stream) {
  if (chunk.type === 'response.completed') {
    const text = getOutputText(chunk.response);
  }
}
```
ashwinb added a commit that referenced this pull request on Nov 19, 2025:
Adds a simple helper function to extract aggregated text output from ResponseObject instances. This is particularly useful for streaming responses, where chunk.response objects don't automatically have the output_text property that non-streaming responses get from PR #42.

Usage:

```typescript
import { getResponseOutputText } from 'llama-stack-client';

const stream = await client.responses.create({ stream: true, ... });
for await (const chunk of stream) {
  if (chunk.type === 'response.completed') {
    const text = getResponseOutputText(chunk.response);
  }
}
```
ashwinb added a commit that referenced this pull request on Nov 19, 2025:
Adds a simple helper function to extract aggregated text output from ResponseObject instances. This replaces the automatic output_text property approach from PR #42 with an explicit helper function, providing a cleaner API without prototype patching.

Usage:

```typescript
import { getResponseOutputText } from 'llama-stack-client';

const stream = await client.responses.create({ stream: true, ... });
for await (const chunk of stream) {
  if (chunk.type === 'response.completed') {
    const text = getResponseOutputText(chunk.response);
  }
}

// Also works for non-streaming responses
const response = await client.responses.create({ stream: false, ... });
const text = getResponseOutputText(response);
```
ashwinb added a commit that referenced this pull request on Nov 19, 2025:
…#48)

Adds a simple helper function to extract aggregated text output from ResponseObject instances. This **replaces** the automatic `output_text` property approach from PR #42 with an explicit helper function, providing a cleaner API without prototype patching or side-effectful imports.

## Changes

- **Removes** `src/lib/init.ts` (automatic property patching)
- **Adds** `getResponseOutputText()` helper function
- Works for both streaming and non-streaming responses

## Usage

### Streaming responses

```typescript
import { getResponseOutputText } from 'llama-stack-client';

const stream = await client.responses.create({ stream: true, ... });
for await (const chunk of stream) {
  if (chunk.type === 'response.completed') {
    const text = getResponseOutputText(chunk.response);
    console.log(text);
  }
}
```

### Non-streaming responses

```typescript
import { getResponseOutputText } from 'llama-stack-client';

const response = await client.responses.create({ stream: false, ... });
const text = getResponseOutputText(response);
```

This provides a clean, explicit API for accessing aggregated text from responses without relying on magic property injection.
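For contrast, the prototype-patching approach this commit removes might have looked roughly like the following. This is a hypothetical sketch of the pattern, not the actual contents of `src/lib/init.ts`:

```typescript
// Hypothetical sketch of the side-effectful pattern the commit above removes:
// importing a module like this would patch a response prototype so that
// `response.output_text` is computed on access. `ResponsePrototype` and
// `getResponseOutputText` stand in for the real objects.
declare const ResponsePrototype: object;
declare function getResponseOutputText(response: unknown): string;

Object.defineProperty(ResponsePrototype, 'output_text', {
  get(this: unknown) {
    return getResponseOutputText(this);
  },
  configurable: true,
});
```

The explicit helper avoids this import-order side effect, which is the "cleaner API without magic property injection" rationale the commit message gives.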