Conversation

@jrobertboos jrobertboos commented Aug 6, 2025

Description

The /conversations endpoint was returning an empty chat history when messages should have been present. The likely cause is that the upgraded version of llama-stack changed the GET /v1/agents/{agent_id}/sessions endpoint so that turns are no longer included in its response. This fix instead uses GET /v1/agents/{agent_id}/sessions (list()) to obtain the session_id, then GET /v1/agents/{agent_id}/sessions/{session_id} (retrieve()) to fetch the full session data.
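The two-step flow described above can be sketched roughly as follows. The client attribute names mirror the endpoints mentioned in this PR, but the exact llama-stack SDK surface shown here is an assumption; the mock-backed demo only exercises the flow:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

async def get_session_data(client, agent_id: str) -> dict:
    # Step 1: list the agent's sessions to obtain the session_id.
    agent_sessions = (await client.agents.session.list(agent_id=agent_id)).data
    session_id = str(agent_sessions[0].get("session_id"))

    # Step 2: retrieve the full session, which still includes the turns.
    session_response = await client.agents.session.retrieve(
        agent_id=agent_id, session_id=session_id
    )
    return session_response.model_dump()

# Demo with mocks, similar to how the unit tests simulate the client.
client = MagicMock()
list_response = MagicMock()
list_response.data = [{"session_id": "sess-1"}]
client.agents.session.list = AsyncMock(return_value=list_response)

session_response = MagicMock()
session_response.model_dump.return_value = {"session_id": "sess-1", "turns": []}
client.agents.session.retrieve = AsyncMock(return_value=session_response)

result = asyncio.run(get_session_data(client, "agent-1"))
print(result)
```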

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • Bug Fixes

    • Improved reliability of conversation session retrieval to ensure accurate session details are returned.
  • Tests

    • Enhanced unit tests to better simulate session retrieval and validate correct data handling.

…rrectly

- Updated the endpoint handler to first list agent sessions and then retrieve the specific session data using the session ID.
- Modified unit tests to mock the session retrieval process, ensuring the model_dump method is called correctly.

This change improves the accuracy of conversation data retrieval.

coderabbitai bot commented Aug 6, 2025

Walkthrough

The conversation session retrieval logic was updated to first obtain a session list, extract the session ID, and then explicitly fetch the full session details using a separate API call. Corresponding test cases were updated to mock the additional retrieval step and ensure proper interface alignment.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| **Conversation Session Retrieval Logic**<br>`src/app/endpoints/conversations.py` | Refactored session retrieval to a two-step process: list sessions, extract the session ID, then retrieve full session details using `session.retrieve`, and convert to a dict with `model_dump()`. |
| **Unit Tests for Conversation Endpoint**<br>`tests/unit/app/endpoints/test_conversations.py` | Updated test mocks to include `session.retrieve` with a mock object supporting `model_dump()`, ensuring the test matches the new retrieval flow and interface expectations. |
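The updated test mock setup can be sketched like this; the fixture names are illustrative rather than the exact ones from `test_conversations.py`:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

mock_client = MagicMock()

# list() returns an object whose .data holds the session summaries.
mock_list = MagicMock()
mock_list.data = [{"session_id": "session-123"}]
mock_client.agents.session.list = AsyncMock(return_value=mock_list)

# The new retrieval step: retrieve() must return an object whose
# model_dump() yields the session dict the endpoint serializes.
mock_session = MagicMock()
mock_session.model_dump.return_value = {
    "session_id": "session-123",
    "turns": [{"input_messages": [], "output_message": None}],
}
mock_client.agents.session.retrieve = AsyncMock(return_value=mock_session)

# Exercise the retrieve step as the endpoint would.
result = asyncio.run(
    mock_client.agents.session.retrieve(
        agent_id="agent-1", session_id="session-123"
    )
)
session_data = result.model_dump()
print(session_data["session_id"])
```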

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Endpoint
    participant SessionAPI

    Client->>Endpoint: Request conversation session
    Endpoint->>SessionAPI: List sessions for agent
    SessionAPI-->>Endpoint: Return list of sessions
    Endpoint->>SessionAPI: Retrieve session details (by session_id)
    SessionAPI-->>Endpoint: Return full session data
    Endpoint->>Client: Return session data as dict
```

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested reviewers

  • manstis

Poem

A rabbit hopped through code so bright,
Fetching sessions left and right.
First a list, then ID in paw,
Retrieves details without a flaw.
Tests now mock the journey too—
Hopping forward, code anew! 🐇✨


@jrobertboos

It would be great if you could review this, @tisnik, @eranco74 and @maorfr!

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8cc0ac7 and c21be66.

📒 Files selected for processing (2)
  • src/app/endpoints/conversations.py (1 hunks)
  • tests/unit/app/endpoints/test_conversations.py (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: eranco74
PR: lightspeed-core/lightspeed-stack#348
File: src/utils/endpoints.py:91-94
Timestamp: 2025-08-06T06:02:21.060Z
Learning: The direct assignment to `agent._agent_id` in `src/utils/endpoints.py` is a necessary workaround for the missing agent rehydration feature in the LLS client SDK. This allows preserving conversation IDs when handling existing agents.
Applied to files:

  • src/app/endpoints/conversations.py
  • tests/unit/app/endpoints/test_conversations.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests
🔇 Additional comments (1)
tests/unit/app/endpoints/test_conversations.py (1)

309-312: LGTM! Test mocks properly align with the updated implementation.

The additional mock for session.retrieve() correctly simulates the new two-step retrieval process, ensuring the test validates both the list() and retrieve() calls with proper model_dump() method behavior.

Comment on lines +137 to +143
```python
agent_sessions = (await client.agents.session.list(agent_id=agent_id)).data
session_id = str(agent_sessions[0].get("session_id"))

session_response = await client.agents.session.retrieve(
    agent_id=agent_id, session_id=session_id
)
session_data = session_response.model_dump()
```
🛠️ Refactor suggestion

Good fix for the llama-stack upgrade issue, but add empty sessions check.

The two-step approach correctly addresses the issue where the upgraded llama-stack no longer returns conversation turns in the list response. However, there's a potential IndexError if no sessions exist.

Add a check for empty sessions:

```diff
 agent_sessions = (await client.agents.session.list(agent_id=agent_id)).data
+if not agent_sessions:
+    logger.warning("No sessions found for conversation %s", conversation_id)
+    return ConversationResponse(
+        conversation_id=conversation_id,
+        chat_history=[],
+    )
 session_id = str(agent_sessions[0].get("session_id"))
```
🤖 Prompt for AI Agents
In src/app/endpoints/conversations.py around lines 137 to 143, add a check to
verify that agent_sessions is not empty before accessing the first element to
avoid an IndexError. If agent_sessions is empty, handle this case appropriately,
such as returning early or raising a clear exception. This ensures the code
safely handles cases where no sessions exist.
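The suggested guard can be exercised in isolation. This sketch substitutes a hypothetical dataclass for the app's real `ConversationResponse` model, which lives in the lightspeed-stack codebase:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the app's response model.
@dataclass
class ConversationResponse:
    conversation_id: str
    chat_history: list = field(default_factory=list)

def extract_session_id(agent_sessions, conversation_id):
    """Return the first session_id, or an empty-history response when
    no sessions exist (avoiding IndexError on agent_sessions[0])."""
    if not agent_sessions:
        return ConversationResponse(conversation_id=conversation_id)
    return str(agent_sessions[0].get("session_id"))

print(extract_session_id([{"session_id": "s-9"}], "conv-1"))
print(extract_session_id([], "conv-1"))
```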

@maorfr maorfr left a comment

brilliant! LGTM


eranco74 commented Aug 6, 2025

/lgtm

@tisnik tisnik left a comment

It looks good, thank you.

@tisnik tisnik merged commit 7a639e2 into lightspeed-core:main Aug 7, 2025
18 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Aug 13, 2025
18 tasks