Conversation
@radofuchs radofuchs commented Oct 20, 2025

Description

Add new e2e tests for the tools endpoint.

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #LCORE-817
  • Closes #LCORE-817

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • Tests
    • Added test scenarios for the tools endpoint: successful retrieval (HTTP 200) validating two tools for a provider with a detailed JSON schema and example entry, and an error scenario (HTTP 500) when the backend is unreachable.
    • Added step validations for tool count and detailed tool structure (identifier, description, provider and group IDs, server source, type).

@radofuchs radofuchs requested a review from tisnik October 20, 2025 09:31
coderabbitai bot commented Oct 20, 2025

Walkthrough

Adds two end-to-end test scenarios for the Info "tools" endpoint (success and error) and two step implementations to validate tool count and tool JSON structure for a given provider.

Changes

  • tests/e2e/features/info.feature (E2E Test Scenarios): Inserts two scenarios: a "tools" success scenario asserting HTTP 200, that provider rag-runtime has 2 tools, and that the response body matches a detailed JSON schema/example; and a "tools" error scenario asserting HTTP 500 with a specific error body when llama-stack is unreachable.
  • tests/e2e/features/steps/info.py (Step Implementations): Adds import json and two step definitions: check_tool_count(context, count, provider_name) to assert the tool count filtered by provider, and check_tool_structure(context, provider_name) to parse expected JSON from context.text and validate fields (identifier, description, provider_id, toolgroup_id, server_source, type) for provider tools.
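The two step implementations described above can be sketched as plain helpers. This is an illustrative reconstruction from the walkthrough, not the actual PR code: the function bodies, the sample tool entries, and the response shape are assumptions.

```python
def check_tool_count(tools: list[dict], count: int, provider_name: str) -> None:
    """Assert that exactly `count` tools belong to the given provider."""
    provider_tools = [t for t in tools if t["provider_id"] == provider_name]
    assert len(provider_tools) == count, (
        f"Expected {count} tools for provider '{provider_name}', "
        f"but found {len(provider_tools)}"
    )


def check_tool_structure(tool: dict, expected: dict) -> None:
    """Assert that a tool matches the expected structure field by field."""
    for field in (
        "identifier", "description", "provider_id",
        "toolgroup_id", "server_source", "type",
    ):
        assert tool[field] == expected[field], (
            f"{field} should be {expected[field]}, but was {tool[field]}"
        )


# Illustrative data only; real values come from the /info/tools response.
tools = [
    {"provider_id": "rag-runtime", "identifier": "knowledge_search",
     "description": "Search the knowledge base", "toolgroup_id": "builtin::rag",
     "server_source": "builtin", "type": "tool"},
    {"provider_id": "rag-runtime", "identifier": "insert_into_memory",
     "description": "Insert documents into memory", "toolgroup_id": "builtin::rag",
     "server_source": "builtin", "type": "tool"},
]
check_tool_count(tools, 2, "rag-runtime")
check_tool_structure(tools[0], tools[0])
```

In the real steps these helpers are wrapped in behave `@then` decorators and read the response from `context`; the sketch keeps them decorator-free so the assertion logic is testable in isolation.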

Sequence Diagram(s)

sequenceDiagram
  participant TR as Test Runner
  participant S as API Server (/info/tools)
  participant L as llama-stack (backend)

  rect rgb(235,245,255)
    TR->>S: GET /info/tools
    alt llama-stack reachable
      S->>L: request tools list
      L-->>S: tools JSON (200)
      S-->>TR: 200 + tools JSON
      TR->>TR: validate status 200
      TR->>TR: filter by provider (rag-runtime) and assert count == 2
      TR->>TR: validate tool JSON structure vs expected schema
    else llama-stack unreachable
      S->>L: request tools list
      L-->>S: error
      S-->>TR: 500 + error body
      TR->>TR: validate status 500 and error body
    end
  end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Suggested reviewers

  • tisnik

Poem

🐰 I hopped through tests to count and see,
Two tools for rag-runtime delighted me,
A 500 grumble when llama's away,
JSON parsed neatly in a bunny's play,
Small hops, bright checks — celebrate with me! 🥕

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check (✅ Passed): Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check (✅ Passed): The pull request title "LCORE-817: add new e2e tests for tools endpoint" is directly and fully aligned with the changeset. The main changes involve adding two new test scenarios to the info.feature file and two corresponding step definitions in info.py, all specifically for testing the tools endpoint through end-to-end tests. The title accurately captures the essence of these changes in clear, concise language that would help a developer quickly understand the purpose of this PR when scanning history.
  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment


@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 4

🧹 Nitpick comments (1)
tests/e2e/features/steps/info.py (1)

119-119: Add descriptive error message to assertion.

The assertion should include a message showing both expected and actual counts to aid debugging when tests fail, consistent with assertions elsewhere in this file.

Apply this diff:

-    assert len(provider_tools) == count
+    assert len(provider_tools) == count, f"Expected {count} tools for provider '{provider_name}', but found {len(provider_tools)}"
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 09ec1f6 and 2dfdd02.

📒 Files selected for processing (2)
  • tests/e2e/features/info.feature (1 hunks)
  • tests/e2e/features/steps/info.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
tests/e2e/features/**/*.feature

📄 CodeRabbit inference engine (CLAUDE.md)

Write E2E tests as Gherkin feature files for behave

Files:

  • tests/e2e/features/info.feature
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: All modules start with descriptive module-level docstrings explaining purpose
Use logger = logging.getLogger(__name__) for module logging after import logging
Define type aliases at module level for clarity
All functions require docstrings with brief descriptions
Provide complete type annotations for all function parameters and return types
Use typing_extensions.Self in model validators where appropriate
Use modern union syntax (str | int) and Optional[T] or T | None consistently
Function names use snake_case with descriptive, action-oriented prefixes (get_, validate_, check_)
Avoid in-place parameter modification; return new data structures instead of mutating arguments
Use appropriate logging levels: debug, info, warning, error with clear messages
All classes require descriptive docstrings explaining purpose
Class names use PascalCase with conventional suffixes (Configuration, Error/Exception, Resolver, Interface)
Abstract base classes should use abc.ABC and @abstractmethod for interfaces
Provide complete type annotations for all class attributes
Follow Google Python docstring style for modules, classes, and functions, including Args, Returns, Raises, Attributes sections as needed

Files:

  • tests/e2e/features/steps/info.py
tests/e2e/features/steps/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Place behave step definitions under tests/e2e/features/steps/

Files:

  • tests/e2e/features/steps/info.py
tests/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

tests/**/*.py: Use pytest-mock to create AsyncMock objects for async interactions in tests
Use the shared auth mock constant: MOCK_AUTH = ("mock_user_id", "mock_username", False, "mock_token") in tests

Files:

  • tests/e2e/features/steps/info.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests (ci)
🔇 Additional comments (2)
tests/e2e/features/info.feature (1)

113-121: LGTM!

The error scenario follows the established pattern for testing unreachable llama-stack services and validates the expected error response correctly.

tests/e2e/features/steps/info.py (1)

3-3: LGTM!

The json import is necessary for parsing expected tool structure from context.text in the new step definition.

@radofuchs radofuchs force-pushed the LCORE_817_tools_e2e_tests branch from e422fc2 to 763cdd1 on October 20, 2025 09:41
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
tests/e2e/features/steps/info.py (3)

122-122: Fix typo in step definition.

The step decorator contains "he response" instead of "the response". This must match the feature file exactly for Behave to recognize the step.

Apply this diff:

-@then("The body of he response has proper structure for provider {provider_name}")
+@then("The body of the response has proper structure for provider {provider_name}")

Note: Verify the exact wording in tests/e2e/features/info.feature to ensure alignment.


141-141: Use more specific error message with provider name.

Following the pattern from check_model_structure (line 48) and check_shield_structure (line 86), the error message should include the provider name for better diagnostics.

Apply this diff:

-    assert provider_tool is not None, "No tool found in response"
+    assert provider_tool is not None, f"No tool found for provider '{provider_name}'"

150-152: Fix copy-paste error in assertion message.

The assertion message incorrectly references "model_type" when the assertion is checking "provider_id". This appears to be a copy-paste error from similar validation code.

Apply this diff:

     assert (
         provider_tool["provider_id"] == expected_json["provider_id"]
-    ), f"model_type should be {expected_json["provider_id"]}"
+    ), f"provider_id should be {expected_json['provider_id']}"

Note: Also consider using single quotes inside the f-string to avoid potential quote-nesting issues.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2dfdd02 and 29027d1.

📒 Files selected for processing (2)
  • tests/e2e/features/info.feature (1 hunks)
  • tests/e2e/features/steps/info.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/e2e/features/info.feature
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests (azure)
  • GitHub Check: e2e_tests (ci)
🔇 Additional comments (1)
tests/e2e/features/steps/info.py (1)

3-3: LGTM!

The json import is necessary for parsing expected tool structure from context.text in the new check_tool_structure function.

        if tool["provider_id"] == provider_name:
            provider_tools.append(tool)

    assert len(provider_tools) == count
🛠️ Refactor suggestion | 🟠 Major

Add descriptive assertion message.

The assertion should include a descriptive error message for better test failure diagnostics, consistent with other assertions in this file (lines 12, 37, 38, 48, 75, 76, 86).

Apply this diff:

-    assert len(provider_tools) == count
+    assert len(provider_tools) == count, f"Expected {count} tools for provider '{provider_name}', but found {len(provider_tools)}"
🤖 Prompt for AI Agents
In tests/e2e/features/steps/info.py around line 119, the assertion assert
len(provider_tools) == count lacks a descriptive failure message; update it to
include context and actual vs expected values (e.g., "expected {count} provider
tools, got {len(provider_tools)}") so test failures show clear diagnostics,
matching the style used on lines 12, 37, 38, 48, 75, 76, 86.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
tests/e2e/features/steps/info.py (1)

119-119: Add descriptive assertion message.

This issue remains unaddressed from the previous review. The assertion lacks a descriptive error message for better test failure diagnostics, inconsistent with other assertions in this file.

Apply this diff:

-    assert len(provider_tools) == count
+    assert len(provider_tools) == count, f"Expected {count} tools for provider '{provider_name}', but found {len(provider_tools)}"
🧹 Nitpick comments (3)
tests/e2e/features/steps/info.py (3)

104-105: Consider enhancing docstring with Google-style sections.

The docstring could be improved to include Args and Returns sections for better documentation clarity, consistent with the project's coding guidelines for Google Python docstring style.

Apply this diff:

-def check_tool_count(context: Context, count: int, provider_name: str) -> None:
-    """Check that the number of tools for defined provider is correct."""
+def check_tool_count(context: Context, count: int, provider_name: str) -> None:
+    """Check that the number of tools for defined provider is correct.
+    
+    Args:
+        context: Behave context containing the response.
+        count: Expected number of tools.
+        provider_name: Name of the provider to filter tools by.
+    """

Based on coding guidelines.


141-141: Consider making error message more specific.

The error message could include the provider name for clearer diagnostics, matching the pattern used in similar functions throughout the file.

Apply this diff:

-    assert provider_tool is not None, "No tool found in response"
+    assert provider_tool is not None, f"No tool found for provider '{provider_name}'"

123-124: Consider enhancing docstring with Google-style sections.

The docstring could be improved to include Args and Returns sections for better documentation clarity, consistent with the project's coding guidelines for Google Python docstring style.

Apply this diff:

-def check_tool_structure(context: Context, provider_name: str) -> None:
-    """Check that the first listed tool for defined provider has the correct structure."""
+def check_tool_structure(context: Context, provider_name: str) -> None:
+    """Check that the first listed tool for defined provider has the correct structure.
+    
+    Args:
+        context: Behave context containing the response and expected JSON in context.text.
+        provider_name: Name of the provider to filter tools by.
+    """

Based on coding guidelines.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 29027d1 and 763cdd1.

📒 Files selected for processing (2)
  • tests/e2e/features/info.feature (1 hunks)
  • tests/e2e/features/steps/info.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/e2e/features/info.feature
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests (ci)
  • GitHub Check: e2e_tests (azure)
🔇 Additional comments (1)
tests/e2e/features/steps/info.py (1)

3-3: LGTM!

The json import is appropriately added to support parsing expected tool structures from context text in the new step definition.

Comment on lines +144 to +161
    assert (
        provider_tool["identifier"] == expected_json["identifier"]
    ), f"identifier should be {expected_json["identifier"]}, but was {provider_tool["identifier"]}"
    assert (
        provider_tool["description"] == expected_json["description"]
    ), f"description should be {expected_json["description"]}"
    assert (
        provider_tool["provider_id"] == expected_json["provider_id"]
    ), f"provider_id should be {expected_json["provider_id"]}"
    assert (
        provider_tool["toolgroup_id"] == expected_json["toolgroup_id"]
    ), f"toolgroup_id should be {expected_json["toolgroup_id"]}"
    assert (
        provider_tool["server_source"] == expected_json["server_source"]
    ), f"server_source should be {expected_json["server_source"]}"
    assert (
        provider_tool["type"] == expected_json["type"]
    ), f"type should be {expected_json["type"]}"

⚠️ Potential issue | 🔴 Critical

Fix syntax errors in assertion messages caused by nested double quotes.

All assertion error messages in this function contain nested double quotes within f-strings, which will cause SyntaxError at runtime. The inner dictionary access uses double quotes that conflict with the f-string delimiters.

Apply this diff to fix all assertion messages:

     assert (
         provider_tool["identifier"] == expected_json["identifier"]
-    ), f"identifier should be {expected_json["identifier"]}, but was {provider_tool["identifier"]}"
+    ), f"identifier should be {expected_json['identifier']}, but was {provider_tool['identifier']}"
     assert (
         provider_tool["description"] == expected_json["description"]
-    ), f"description should be {expected_json["description"]}"
+    ), f"description should be {expected_json['description']}"
     assert (
         provider_tool["provider_id"] == expected_json["provider_id"]
-    ), f"provider_id should be {expected_json["provider_id"]}"
+    ), f"provider_id should be {expected_json['provider_id']}"
     assert (
         provider_tool["toolgroup_id"] == expected_json["toolgroup_id"]
-    ), f"toolgroup_id should be {expected_json["toolgroup_id"]}"
+    ), f"toolgroup_id should be {expected_json['toolgroup_id']}"
     assert (
         provider_tool["server_source"] == expected_json["server_source"]
-    ), f"server_source should be {expected_json["server_source"]}"
+    ), f"server_source should be {expected_json['server_source']}"
     assert (
         provider_tool["type"] == expected_json["type"]
-    ), f"type should be {expected_json["type"]}"
+    ), f"type should be {expected_json['type']}"
🤖 Prompt for AI Agents
In tests/e2e/features/steps/info.py around lines 144 to 161, the f-string
assertion messages use nested double quotes which causes a SyntaxError; change
the inner dictionary access quotes to single quotes (e.g.
expected_json['identifier']) or escape the inner double quotes so each f-string
is valid, and update all assertion messages in this block consistently.
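As a quick illustration of the quoting fix (toy values, not the real response): single quotes inside a double-quoted f-string parse on every supported Python version, whereas reusing double quotes inside the replacement field only parses on Python 3.12+ (PEP 701), which is why the review flags it.

```python
# Toy stand-ins for the parsed expected JSON and the tool from the response.
expected_json = {"identifier": "knowledge_search"}
provider_tool = {"identifier": "knowledge_search"}

# Single quotes inside the f-string replacement field: valid everywhere.
assert (
    provider_tool["identifier"] == expected_json["identifier"]
), f"identifier should be {expected_json['identifier']}, but was {provider_tool['identifier']}"
```

On Python 3.11 and earlier, writing `f"... {expected_json["identifier"]} ..."` instead fails at compile time with a SyntaxError, so the single-quote form is the portable choice.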

@tisnik tisnik left a comment

LGTM

@tisnik tisnik merged commit a229124 into lightspeed-core:main Oct 20, 2025
18 of 20 checks passed