LCORE-492: added tests for v1/conversations endpoints #646
Walkthrough

Replaces disabled conversation tests with active authenticated scenarios, adds detailed GET/DELETE flows, error-path coverage, and schema checks. Introduces concrete conversation step implementations that perform HTTP requests with an optional Authorization header. Standardizes step phrases in feedback tests and updates the common HTTP step annotations used by Gherkin scenarios.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant F as Feature (Gherkin)
    participant S as Step Impl (Behave)
    participant API as API Server
    Note over F,S: Authenticated conversation retrieval
    F->>S: Given I set Authorization and conversation_id
    S->>API: GET /v1/conversations (Authorization)
    API-->>S: 200 list with conversation details
    S-->>F: Validate schema, ID, and content
    rect rgba(200,230,255,0.25)
        Note over F,S: Delete then verify not found
        F->>S: When I DELETE conversation by id
        S->>API: DELETE /v1/conversations/{id} (Authorization)
        API-->>S: 200 deleted
        S->>API: GET /v1/conversations/{id} (Authorization)
        API-->>S: 404 not found
        S-->>F: Assert 404 and error schema
    end
    rect rgba(255,230,200,0.25)
        Note over F,S: Missing/invalid auth and malformed ID
        F->>S: When I GET without Authorization
        S->>API: GET /v1/conversations
        API-->>S: 400 "No Authorization header found"
        S-->>F: Assert 400
        F->>S: When I GET /{malformed_id}
        S->>API: GET /v1/conversations/{bad}
        API-->>S: 422 validation error
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ✅ Passed (3 passed)
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- tests/e2e/features/conversations.feature (1 hunk)
- tests/e2e/features/feedback.feature (5 hunks)
- tests/e2e/features/steps/common_http.py (2 hunks)
- tests/e2e/features/steps/conversation.py (1 hunk)
🧰 Additional context used
📓 Path-based instructions (4)
tests/e2e/features/**/*.feature
📄 CodeRabbit inference engine (CLAUDE.md)
Write E2E tests as Gherkin feature files for behave
Files:
- tests/e2e/features/feedback.feature
- tests/e2e/features/conversations.feature
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py:
- All modules start with descriptive module-level docstrings explaining purpose
- Use logger = logging.getLogger(__name__) for module logging after import logging
- Define type aliases at module level for clarity
- All functions require docstrings with brief descriptions
- Provide complete type annotations for all function parameters and return types
- Use typing_extensions.Self in model validators where appropriate
- Use modern union syntax (str | int) and Optional[T] or T | None consistently
- Function names use snake_case with descriptive, action-oriented prefixes (get_, validate_, check_)
- Avoid in-place parameter modification; return new data structures instead of mutating arguments
- Use appropriate logging levels: debug, info, warning, error with clear messages
- All classes require descriptive docstrings explaining purpose
- Class names use PascalCase with conventional suffixes (Configuration, Error/Exception, Resolver, Interface)
- Abstract base classes should use abc.ABC and @abstractmethod for interfaces
- Provide complete type annotations for all class attributes
- Follow Google Python docstring style for modules, classes, and functions, including Args, Returns, Raises, Attributes sections as needed
Files:
- tests/e2e/features/steps/conversation.py
- tests/e2e/features/steps/common_http.py
tests/e2e/features/steps/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place behave step definitions under tests/e2e/features/steps/
Files:
- tests/e2e/features/steps/conversation.py
- tests/e2e/features/steps/common_http.py
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
tests/**/*.py:
- Use pytest-mock to create AsyncMock objects for async interactions in tests
- Use the shared auth mock constant: MOCK_AUTH = ("mock_user_id", "mock_username", False, "mock_token") in tests
Files:
- tests/e2e/features/steps/conversation.py
- tests/e2e/features/steps/common_http.py
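A minimal sketch of the AsyncMock-plus-MOCK_AUTH pattern, using unittest.mock directly (pytest-mock's mocker wraps the same classes; the authorize name is a hypothetical async dependency, not an API from this repository):

```python
"""Sketch: mocking an async auth dependency with the shared MOCK_AUTH tuple."""

import asyncio
from unittest.mock import AsyncMock

# Shared auth mock constant from the guidelines above.
MOCK_AUTH = ("mock_user_id", "mock_username", False, "mock_token")


async def main() -> None:
    """Exercise an AsyncMock standing in for an async auth call."""
    # In a pytest-mock test this would come from mocker patching the dependency.
    authorize = AsyncMock(return_value=MOCK_AUTH)
    user_id, username, skip_userid_check, token = await authorize("Bearer mock_token")
    assert user_id == "mock_user_id"
    assert token == "mock_token"
    authorize.assert_awaited_once_with("Bearer mock_token")


asyncio.run(main())
print("auth mock ok")
```

AsyncMock records awaits the same way Mock records calls, so assert_awaited_once_with verifies both that the dependency was awaited and with which arguments.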
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests (ci)
```python
expected_data = json.loads(context.text)
found_conversation = context.found_conversation

assert (
    found_conversation["last_used_model"] == expected_data["last_used_model"]
), f"last_used_model mismatch, was {found_conversation["last_used_model"]}"
assert (
    found_conversation["last_used_provider"] == expected_data["last_used_provider"]
), f"last_used_provider mismatch, was {found_conversation["last_used_provider"]}"
assert (
    found_conversation["message_count"] == expected_data["message_count"]
), f"message count mismatch, was {found_conversation["message_count"]}"


@then("The returned conversation details have expected conversation_id")
def check_found_conversation_id(context: Context) -> None:
    """Check whether the conversation details have expected conversation_id."""
    response_json = context.response.json()

    assert (
        response_json["conversation_id"] == context.response_data["conversation_id"]
    ), "found wrong conversation"


@then("The body of the response has following messages")
def check_found_conversation_content(context: Context) -> None:
    """Check whether the conversation details have expected data."""
    expected_data = json.loads(context.text)
    response_json = context.response.json()
    chat_messages = response_json["chat_history"][0]["messages"]

    assert chat_messages[0]["content"] == expected_data["content"]
    assert chat_messages[0]["type"] == expected_data["type"]
    assert (
        expected_data["content_response"] in chat_messages[1]["content"]
    ), f"expected substring not in response, has {chat_messages[1]["content"]}"
    assert chat_messages[1]["type"] == expected_data["type_response"]
```
Fix the f-string quoting syntax errors.
The new assertions embed found_conversation["…"] (and similar) inside double-quoted f-strings. That pattern is invalid Python syntax and will stop the step module from loading at import time, breaking every scenario. Swap the inner quotes to singles (or assign the values to locals) so the f-strings parse correctly.
Example fix:

```diff
-    assert (
-        found_conversation["last_used_model"] == expected_data["last_used_model"]
-    ), f"last_used_model mismatch, was {found_conversation["last_used_model"]}"
+    actual_model = found_conversation["last_used_model"]
+    expected_model = expected_data["last_used_model"]
+    assert actual_model == expected_model, f"last_used_model mismatch, was {actual_model}"
@@
-    assert (
-        expected_data["content_response"] in chat_messages[1]["content"]
-    ), f"expected substring not in response, has {chat_messages[1]["content"]}"
+    response_body = chat_messages[1]["content"]
+    assert expected_data["content_response"] in response_body, (
+        f"expected substring not in response, has {response_body}"
+    )
```

Apply the same pattern for the provider/message-count assertions.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
expected_data = json.loads(context.text)
found_conversation = context.found_conversation
actual_model = found_conversation["last_used_model"]
expected_model = expected_data["last_used_model"]
assert actual_model == expected_model, f"last_used_model mismatch, was {actual_model}"
assert (
    found_conversation["last_used_provider"] == expected_data["last_used_provider"]
), f"last_used_provider mismatch, was {found_conversation['last_used_provider']}"
assert (
    found_conversation["message_count"] == expected_data["message_count"]
), f"message count mismatch, was {found_conversation['message_count']}"


@then("The returned conversation details have expected conversation_id")
def check_found_conversation_id(context: Context) -> None:
    """Check whether the conversation details have expected conversation_id."""
    response_json = context.response.json()

    assert (
        response_json["conversation_id"] == context.response_data["conversation_id"]
    ), "found wrong conversation"


@then("The body of the response has following messages")
def check_found_conversation_content(context: Context) -> None:
    """Check whether the conversation details have expected data."""
    expected_data = json.loads(context.text)
    response_json = context.response.json()
    chat_messages = response_json["chat_history"][0]["messages"]

    assert chat_messages[0]["content"] == expected_data["content"]
    assert chat_messages[0]["type"] == expected_data["type"]
    response_body = chat_messages[1]["content"]
    assert expected_data["content_response"] in response_body, (
        f"expected substring not in response, has {response_body}"
    )
    assert chat_messages[1]["type"] == expected_data["type_response"]
```
🤖 Prompt for AI Agents
In tests/e2e/features/steps/conversation.py around lines 112 to 148, the
f-strings in the assertions embed double-quoted dict lookups like
f"...{found_conversation["last_used_model"]}" which is invalid syntax; change
the inner quotes to single quotes (e.g. found_conversation['last_used_model'])
or assign the dict values to local variables before using them in f-strings, and
apply the same fix to the last_used_provider and message_count assertions so the
module imports correctly.
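For background on why the original fails at import time: before Python 3.12 (PEP 701), an f-string replacement field could not reuse the quote character that delimits the f-string itself. Both working patterns can be shown in a standalone snippet (values invented for illustration):

```python
# f"...{d["key"]}" is a SyntaxError on Python < 3.12, so the whole step
# module fails to import and every scenario using it breaks.
found_conversation = {"last_used_model": "gemini"}

# Pattern 1: use single quotes inside the replacement field.
msg1 = f"last_used_model mismatch, was {found_conversation['last_used_model']}"

# Pattern 2: hoist the lookup into a local first (also reads better in asserts).
actual_model = found_conversation["last_used_model"]
msg2 = f"last_used_model mismatch, was {actual_model}"

assert msg1 == msg2
print(msg1)  # → last_used_model mismatch, was gemini
```

On 3.12+ the original spelling parses, but the local-variable form stays compatible with older interpreters and keeps the assertion messages readable.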
tisnik left a comment:
LGTM