
Conversation


@radofuchs radofuchs commented Oct 9, 2025

Description

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • Tests

    • Added comprehensive authenticated end-to-end coverage for conversations endpoints (GET/DELETE), including success paths and error cases for missing auth, malformed IDs (422), not found (404), and service unavailability (503).
    • Verified delete flow: successful deletion followed by not-found on subsequent retrieval.
    • Introduced explicit response schema validations and structured JSON payload checks for conversation details and messages.
    • Unified step definitions and header management with a default request timeout.
  • Style

    • Standardized Gherkin phrasing in feedback scenarios, removing redundant wording without changing behavior or assertions.

@radofuchs changed the title from "LCORWadded tests for v1/conversations endpoints" to "LCORE-492: added tests for v1/conversations endpoints" on Oct 9, 2025

coderabbitai bot commented Oct 9, 2025

Walkthrough

Replaces disabled conversation tests with active authenticated scenarios, adds detailed GET/DELETE flows, error-path coverage, and schema checks. Introduces concrete conversation step implementations performing HTTP requests with optional Authorization. Standardizes step phrases in feedback tests and updates common HTTP step annotations used by Gherkin scenarios.

Changes

Cohort / File(s) — Summary

  • E2E Conversation scenarios — tests/e2e/features/conversations.feature
    Rewrote the feature as authenticated tests covering the GET/DELETE endpoints, success and error paths (400/404/422/503), schema validations, and ID handling.
  • E2E Feedback phrasing cleanup — tests/e2e/features/feedback.feature
    Removed redundant "And" in step text across scenarios; no behavioral changes.
  • Common HTTP step annotations — tests/e2e/features/steps/common_http.py
    Changed the status-code step to @step(...); normalized the response-structure step text to @then("the body of the response has the following structure").
  • Conversation step implementations — tests/e2e/features/steps/conversation.py
    Added GET/DELETE step functions (generic and by ID), request execution with optional auth and timeout, and validation steps for IDs and content; updated the deleted-conversation not-found step text. An illustrative sketch of such steps follows below.
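To make the step-implementation rows above concrete, here is a minimal sketch of what such behave steps could look like. The step phrasing, the DEFAULT_TIMEOUT value, and the context attributes (base_url, auth_token, response_data) are assumptions for illustration, not the PR's actual code:

"""Illustrative behave step definitions for the conversations endpoints."""

import requests
from behave import step
from behave.runner import Context

DEFAULT_TIMEOUT = 10  # seconds; stands in for the default request timeout the PR adds


@step('I access endpoint "{endpoint}" using the stored conversation_id')
def access_endpoint_with_conversation_id(context: Context, endpoint: str) -> None:
    """Send a GET request for a specific conversation, with optional auth."""
    headers: dict[str, str] = {}
    token = getattr(context, "auth_token", None)  # hypothetical context attribute
    if token:
        headers["Authorization"] = f"Bearer {token}"
    conversation_id = context.response_data["conversation_id"]  # hypothetical
    context.response = requests.get(
        f"{context.base_url}/{endpoint}/{conversation_id}",  # base_url is assumed
        headers=headers,
        timeout=DEFAULT_TIMEOUT,
    )


@step("the status code of the response is {status:d}")
def check_status_code(context: Context, status: int) -> None:
    """Assert the HTTP status; @step lets Gherkin phrase this as When or Then."""
    assert context.response.status_code == status, (
        f"expected {status}, got {context.response.status_code}"
    )

Registering the status-code check with @step rather than @then is what allows both When and Then clauses to reuse it, which matches the common_http.py change described above.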

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant F as Feature (Gherkin)
  participant S as Step Impl (Behave)
  participant API as API Server
  Note over F,S: Authenticated conversation retrieval
  F->>S: Given I set Authorization and conversation_id
  S->>API: GET /v1/conversations (Authorization)
  API-->>S: 200 list with conversation details
  S-->>F: Validate schema, ID, and content

  rect rgba(200,230,255,0.25)
  Note over F,S: Delete then verify not found
  F->>S: When I DELETE conversation by id
  S->>API: DELETE /v1/conversations/{id} (Authorization)
  API-->>S: 200 deleted
  S->>API: GET /v1/conversations/{id} (Authorization)
  API-->>S: 404 not found
  S-->>F: Assert 404 and error schema
  end

  rect rgba(255,230,200,0.25)
  Note over F,S: Missing/invalid auth and malformed ID
  F->>S: When I GET without Authorization
  S->>API: GET /v1/conversations
  API-->>S: 400 "No Authorization header found"
  S-->>F: Assert 400

  F->>S: When I GET /{malformed_id}
  S->>API: GET /v1/conversations/{bad}
  API-->>S: 422 validation error
  end
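In Gherkin terms, the delete-then-verify panel of the diagram might read like the sketch below; the exact step wording lives in conversations.feature and may differ:

Scenario: Deleted conversation can no longer be retrieved
  Given I set the Authorization header to a valid token
  And I store the conversation_id of an existing conversation
  When I delete the conversation with the stored conversation_id
  Then the status code of the response is 200
  When I request the conversation with the stored conversation_id
  Then the status code of the response is 404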

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


Suggested reviewers

  • tisnik
  • umago

Poem

I thump my paw—new steps take flight,
Headers packed, we test the night.
GET, DELETE, then 404,
Burrows clean, with tidy lore.
Schemas checked, IDs in line—
Hippity-hop, the tests align.
(_/) ✔️ (•_•) 🥕 <(`)>

Pre-merge checks

✅ Passed checks (3 passed)

  • Description Check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed. The title references the related ticket and succinctly describes the primary change, adding end-to-end tests for the v1/conversations endpoints.
  • Docstring Coverage — ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

@radofuchs requested a review from tisnik on Oct 9, 2025 at 09:40

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9c1ab9b and 558f22e.

📒 Files selected for processing (4)
  • tests/e2e/features/conversations.feature (1 hunks)
  • tests/e2e/features/feedback.feature (5 hunks)
  • tests/e2e/features/steps/common_http.py (2 hunks)
  • tests/e2e/features/steps/conversation.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
tests/e2e/features/**/*.feature

📄 CodeRabbit inference engine (CLAUDE.md)

Write E2E tests as Gherkin feature files for behave

Files:

  • tests/e2e/features/feedback.feature
  • tests/e2e/features/conversations.feature
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: All modules start with descriptive module-level docstrings explaining purpose
Use logger = logging.getLogger(__name__) for module logging after import logging
Define type aliases at module level for clarity
All functions require docstrings with brief descriptions
Provide complete type annotations for all function parameters and return types
Use typing_extensions.Self in model validators where appropriate
Use modern union syntax (str | int) and Optional[T] or T | None consistently
Function names use snake_case with descriptive, action-oriented prefixes (get_, validate_, check_)
Avoid in-place parameter modification; return new data structures instead of mutating arguments
Use appropriate logging levels: debug, info, warning, error with clear messages
All classes require descriptive docstrings explaining purpose
Class names use PascalCase with conventional suffixes (Configuration, Error/Exception, Resolver, Interface)
Abstract base classes should use abc.ABC and @abstractmethod for interfaces
Provide complete type annotations for all class attributes
Follow Google Python docstring style for modules, classes, and functions, including Args, Returns, Raises, Attributes sections as needed (a small sketch illustrating these conventions follows the file list below)

Files:

  • tests/e2e/features/steps/conversation.py
  • tests/e2e/features/steps/common_http.py
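For orientation, a module following these conventions might open like the sketch below; the module purpose, type alias, and function are illustrative only:

"""Validate conversation records returned by the service under test."""

import logging

logger = logging.getLogger(__name__)

# Module-level type alias for clarity, per the guidelines above.
ConversationRecord = dict[str, str | int]


def get_conversation_ids(records: list[ConversationRecord]) -> list[str]:
    """Return the conversation ID of each record, in order.

    Args:
        records: Parsed conversation entries from an API response.

    Returns:
        The conversation_id values as strings, preserving input order.
    """
    logger.debug("extracting IDs from %d records", len(records))
    return [str(record["conversation_id"]) for record in records]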
tests/e2e/features/steps/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Place behave step definitions under tests/e2e/features/steps/

Files:

  • tests/e2e/features/steps/conversation.py
  • tests/e2e/features/steps/common_http.py
tests/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

tests/**/*.py: Use pytest-mock to create AsyncMock objects for async interactions in tests
Use the shared auth mock constant: MOCK_AUTH = ("mock_user_id", "mock_username", False, "mock_token") in tests (a usage sketch follows the file list below)

Files:

  • tests/e2e/features/steps/conversation.py
  • tests/e2e/features/steps/common_http.py
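As a usage sketch for the two guidelines above: rather than patching a real import path, the example below builds the AsyncMock directly via pytest-mock; the test name is hypothetical, and pytest-asyncio is assumed for the async test function:

"""Illustrative unit test combining mocker.AsyncMock with the shared auth tuple."""

import pytest

# Shared auth mock constant, as prescribed by the guideline above.
MOCK_AUTH = ("mock_user_id", "mock_username", False, "mock_token")


@pytest.mark.asyncio  # requires pytest-asyncio
async def test_auth_dependency_yields_mock_identity(mocker):
    """Model an async auth dependency and unpack its mocked result."""
    get_auth = mocker.AsyncMock(return_value=MOCK_AUTH)  # hypothetical dependency

    user_id, username, skip_userid_check, token = await get_auth()

    assert user_id == "mock_user_id"
    assert username == "mock_username"
    assert token == "mock_token"
    get_auth.assert_awaited_once()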
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests (ci)

Comment on lines +112 to +148
    expected_data = json.loads(context.text)
    found_conversation = context.found_conversation

    assert (
        found_conversation["last_used_model"] == expected_data["last_used_model"]
    ), f"last_used_model mismatch, was {found_conversation["last_used_model"]}"
    assert (
        found_conversation["last_used_provider"] == expected_data["last_used_provider"]
    ), f"last_used_provider mismatch, was {found_conversation["last_used_provider"]}"
    assert (
        found_conversation["message_count"] == expected_data["message_count"]
    ), f"message count mismatch, was {found_conversation["message_count"]}"


@then("The returned conversation details have expected conversation_id")
def check_found_conversation_id(context: Context) -> None:
    """Check whether the conversation details have expected conversation_id."""
    response_json = context.response.json()

    assert (
        response_json["conversation_id"] == context.response_data["conversation_id"]
    ), "found wrong conversation"


@then("The body of the response has following messages")
def check_found_conversation_content(context: Context) -> None:
    """Check whether the conversation details have expected data."""
    expected_data = json.loads(context.text)
    response_json = context.response.json()
    chat_messages = response_json["chat_history"][0]["messages"]

    assert chat_messages[0]["content"] == expected_data["content"]
    assert chat_messages[0]["type"] == expected_data["type"]
    assert (
        expected_data["content_response"] in chat_messages[1]["content"]
    ), f"expected substring not in response, has {chat_messages[1]["content"]}"
    assert chat_messages[1]["type"] == expected_data["type_response"]

⚠️ Potential issue | 🔴 Critical

Fix the f-string quoting syntax errors.

The new assertions embed found_conversation["…"] (and similar) inside double-quoted f-strings. On Python versions before 3.12 (PEP 701), reusing the enclosing quote character inside an f-string expression is a syntax error, so the step module will fail to load at import time, breaking every scenario. Swap the inner quotes to singles (or assign the values to locals) so the f-strings parse on all supported interpreters.

Example fix:

-    assert (
-        found_conversation["last_used_model"] == expected_data["last_used_model"]
-    ), f"last_used_model mismatch, was {found_conversation["last_used_model"]}"
+    actual_model = found_conversation["last_used_model"]
+    expected_model = expected_data["last_used_model"]
+    assert actual_model == expected_model, f"last_used_model mismatch, was {actual_model}"
@@
-    assert (
-        expected_data["content_response"] in chat_messages[1]["content"]
-    ), f"expected substring not in response, has {chat_messages[1]["content"]}"
+    response_body = chat_messages[1]["content"]
+    assert expected_data["content_response"] in response_body, (
+        f"expected substring not in response, has {response_body}"
+    )

Apply the same pattern for the provider/message-count assertions.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (the buggy lines quoted above would become):

    expected_data = json.loads(context.text)
    found_conversation = context.found_conversation

    actual_model = found_conversation["last_used_model"]
    expected_model = expected_data["last_used_model"]
    assert actual_model == expected_model, f"last_used_model mismatch, was {actual_model}"
    assert (
        found_conversation["last_used_provider"] == expected_data["last_used_provider"]
    ), f"last_used_provider mismatch, was {found_conversation['last_used_provider']}"
    assert (
        found_conversation["message_count"] == expected_data["message_count"]
    ), f"message count mismatch, was {found_conversation['message_count']}"


@then("The returned conversation details have expected conversation_id")
def check_found_conversation_id(context: Context) -> None:
    """Check whether the conversation details have expected conversation_id."""
    response_json = context.response.json()

    assert (
        response_json["conversation_id"] == context.response_data["conversation_id"]
    ), "found wrong conversation"


@then("The body of the response has following messages")
def check_found_conversation_content(context: Context) -> None:
    """Check whether the conversation details have expected data."""
    expected_data = json.loads(context.text)
    response_json = context.response.json()
    chat_messages = response_json["chat_history"][0]["messages"]

    assert chat_messages[0]["content"] == expected_data["content"]
    assert chat_messages[0]["type"] == expected_data["type"]
    response_body = chat_messages[1]["content"]
    assert expected_data["content_response"] in response_body, (
        f"expected substring not in response, has {response_body}"
    )
    assert chat_messages[1]["type"] == expected_data["type_response"]
🤖 Prompt for AI Agents
In tests/e2e/features/steps/conversation.py around lines 112 to 148, the
f-strings in the assertions embed double-quoted dict lookups like
f"...{found_conversation["last_used_model"]}" which is invalid syntax; change
the inner quotes to single quotes (e.g. found_conversation['last_used_model'])
or assign the dict values to local variables before using them in f-strings, and
apply the same fix to the last_used_provider and message_count assertions so the
module imports correctly.


@tisnik tisnik left a comment


LGTM
