Commit 94b94be

Add E2E tests for query endpoint
1 parent 4f8fbac commit 94b94be

4 files changed: +156 -65 lines

.github/workflows/e2e_tests.yaml

Lines changed: 12 additions & 1 deletion
@@ -131,7 +131,7 @@ jobs:
           storage_dir: /tmp/llama-stack-files
           metadata_store:
             type: sqlite
-            db_path: .llama/distributions/ollama/files_metadata.db
+            db_path: /app-root/.llama/distributions/ollama/files_metadata.db
         provider_id: localfs
         provider_type: inline::localfs
       agents:
@@ -289,3 +289,14 @@ jobs:
 
           echo "Running comprehensive e2e test suite..."
           make test-e2e
+
+      - name: Show logs on failure
+        if: failure()
+        run: |
+          echo "=== Test failure logs ==="
+          echo "=== llama-stack logs ==="
+          docker compose logs llama-stack
+
+          echo ""
+          echo "=== lightspeed-stack logs ==="
+          docker compose logs lightspeed-stack

tests/e2e/features/query.feature

Lines changed: 116 additions & 60 deletions
@@ -1,60 +1,116 @@
-# Feature: Query endpoint API tests
-#TODO: fix test
-
-#   Background:
-#     Given The service is started locally
-#     And REST API service hostname is localhost
-#     And REST API service port is 8080
-#     And REST API service prefix is /v1
-
-
-#   Scenario: Check if LLM responds to sent question
-#     Given The system is in default state
-#     When I use "query" to ask question "Say hello"
-#     Then The status code of the response is 200
-#     And The response should have proper LLM response format
-#     And The response should contain following fragments
-#       | Fragments in LLM response |
-#       | Hello |
-
-#   Scenario: Check if LLM responds to sent question with different system prompt
-#     Given The system is in default state
-#     And I change the system prompt to "new system prompt"
-#     When I use "query" to ask question "Say hello"
-#     Then The status code of the response is 200
-#     And The response should have proper LLM response format
-#     And The response should contain following fragments
-#       | Fragments in LLM response |
-#       | Hello |
-
-#   Scenario: Check if LLM responds with error for malformed request
-#     Given The system is in default state
-#     And I modify the request body by removing the "query"
-#     When I use "query" to ask question "Say hello"
-#     Then The status code of the response is 422
-#     And The body of the response is the following
-#       """
-#       { "type": "missing", "loc": [ "body", "system_query" ], "msg": "Field required", }
-#       """
-
-#   Scenario: Check if LLM responds to sent question with error when not authenticated
-#     Given The system is in default state
-#     And I remove the auth header
-#     When I use "query" to ask question "Say hello"
-#     Then The status code of the response is 200
-#     Then The status code of the response is 400
-#     And The body of the response is the following
-#       """
-#       {"detail": "Unauthorized: No auth header found"}
-#       """
-
-#   Scenario: Check if LLM responds to sent question with error when not authorized
-#     Given The system is in default state
-#     And I modify the auth header so that the user is it authorized
-#     When I use "query" to ask question "Say hello"
-#     Then The status code of the response is 403
-#     And The body of the response is the following
-#       """
-#       {"detail": "Forbidden: User is not authorized to access this resource"}
-#       """
-
+@Authorized
+Feature: Query endpoint API tests
+
+  Background:
+    Given The service is started locally
+    And REST API service hostname is localhost
+    And REST API service port is 8080
+    And REST API service prefix is /v1
+
+  Scenario: Check if LLM responds properly to sent question with restrictive system prompt
+    Given The system is in default state
+    And I set the Authorization header to Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6Ikpva
+    When I use "query" to ask question with authorization header
+    """
+    {"query": "Generate sample yaml file for simple GitHub Actions workflow.", "system_prompt": "refuse to answer anything but openshift questions"}
+    """
+    Then The status code of the response is 200
+    And The response should contain following fragments
+      | Fragments in LLM response |
+      | ask |
+
+  Scenario: Check if LLM responds properly to sent question with non-restrictive system prompt
+    Given The system is in default state
+    And I set the Authorization header to Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6Ikpva
+    When I use "query" to ask question with authorization header
+    """
+    {"query": "Generate sample yaml file for simple GitHub Actions workflow.", "system_prompt": "you are linguistic assistant"}
+    """
+    Then The status code of the response is 200
+    And The response should contain following fragments
+      | Fragments in LLM response |
+      | checkout |
+
+  Scenario: Check if LLM ignores new system prompt in same conversation
+    Given The system is in default state
+    And I set the Authorization header to Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6Ikpva
+    When I use "query" to ask question with authorization header
+    """
+    {"query": "Generate sample yaml file for simple GitHub Actions workflow.", "system_prompt": "refuse to answer anything but openshift questions"}
+    """
+    Then The status code of the response is 200
+    And I store conversation details
+    And I use "query" to ask question with same conversation_id
+    """
+    {"query": "Write a simple code for reversing string", "system_prompt": "provide coding assistance", "model": "gpt-4-turbo", "provider": "openai"}
+    """
+    Then The status code of the response is 200
+    And The response should contain following fragments
+      | Fragments in LLM response |
+      | ask |
+
+  Scenario: Check if LLM responds to sent question with error when not authenticated
+    Given The system is in default state
+    When I use "query" to ask question
+    """
+    {"query": "Write a simple code for reversing string"}
+    """
+    Then The status code of the response is 400
+    And The body of the response is the following
+    """
+    {"detail": "No Authorization header found"}
+    """
+
+  Scenario: Check if LLM responds to sent question with error when attempting to access conversation
+    Given The system is in default state
+    And I set the Authorization header to Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6Ikpva
+    When I use "query" to ask question with authorization header
+    """
+    {"conversation_id": "123e4567-e89b-12d3-a456-426614174000", "query": "Write a simple code for reversing string"}
+    """
+    Then The status code of the response is 403
+    And The body of the response contains User is not authorized to access this resource
+
+  Scenario: Check if LLM responds for query request with error for missing query
+    Given The system is in default state
+    And I set the Authorization header to Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6Ikpva
+    When I use "query" to ask question with authorization header
+    """
+    {"provider": "openai"}
+    """
+    Then The status code of the response is 422
+    And The body of the response is the following
+    """
+    { "detail": [{"type": "missing", "loc": [ "body", "query" ], "msg": "Field required", "input": {"provider": "openai"}}] }
+    """
+
+  Scenario: Check if LLM responds for query request with error for missing model
+    Given The system is in default state
+    And I set the Authorization header to Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6Ikpva
+    When I use "query" to ask question with authorization header
+    """
+    {"query": "Say hello", "provider": "openai"}
+    """
+    Then The status code of the response is 422
+    And The body of the response contains Value error, Model must be specified if provider is specified
+
+  Scenario: Check if LLM responds for query request with error for missing provider
+    Given The system is in default state
+    And I set the Authorization header to Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6Ikpva
+    When I use "query" to ask question with authorization header
+    """
+    {"query": "Say hello", "model": "gpt-4-turbo"}
+    """
+    Then The status code of the response is 422
+    And The body of the response contains Value error, Provider must be specified if model is specified
+
+  Scenario: Check if LLM responds for query request with error when llama-stack connection is disrupted
+    Given The system is in default state
+    And The llama-stack connection is disrupted
+    And I set the Authorization header to Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6Ikpva
+    When I use "query" to ask question with authorization header
+    """
+    {"query": "Say hello"}
+    """
+    Then The status code of the response is 500
+    And The body of the response contains Unable to connect to Llama Stack
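Note: the authorized scenarios above all reduce to a plain authenticated POST against the query endpoint built from the Background's hostname, port, and /v1 prefix. A minimal sketch of the equivalent raw request outside behave, assuming the service runs locally as configured above; <placeholder-jwt> stands in for the (truncated) test token from the feature file:

import requests

# Assumptions: localhost:8080 and the /v1 prefix from the Background;
# <placeholder-jwt> is a stand-in for the feature file's test JWT.
URL = "http://localhost:8080/v1/query"
HEADERS = {"Authorization": "Bearer <placeholder-jwt>"}

payload = {
    "query": "Generate sample yaml file for simple GitHub Actions workflow.",
    "system_prompt": "refuse to answer anything but openshift questions",
}

response = requests.post(URL, json=payload, headers=HEADERS, timeout=90)  # generous LLM timeout (assumed)
print(response.status_code)  # the scenario expects 200
print(response.json())       # and an answer containing the fragment "ask"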

tests/e2e/features/steps/llm_query_response.py

Lines changed: 25 additions & 1 deletion
@@ -30,6 +30,27 @@ def ask_question(context: Context, endpoint: str) -> None:
     context.response = requests.post(url, json=data, timeout=DEFAULT_LLM_TIMEOUT)
 
 
+@step('I use "{endpoint}" to ask question with authorization header')
+def ask_question_authorized(context: Context, endpoint: str) -> None:
+    """Call the service REST API endpoint with question."""
+    base = f"http://{context.hostname}:{context.port}"
+    path = f"{context.api_prefix}/{endpoint}".replace("//", "/")
+    url = base + path
+
+    # Use context.text if available, otherwise use empty query
+    data = json.loads(context.text or "{}")
+    print(data)
+    context.response = requests.post(
+        url, json=data, headers=context.auth_headers, timeout=DEFAULT_LLM_TIMEOUT
+    )
+
+
+@step("I store conversation details")
+def store_conversation_details(context: Context) -> None:
+    """Store details about the conversation."""
+    context.response_data = json.loads(context.response.text)
+
+
 @step('I use "{endpoint}" to ask question with same conversation_id')
 def ask_question_in_same_conversation(context: Context, endpoint: str) -> None:
     """Call the service REST API endpoint with question, but use the existing conversation id."""
@@ -39,10 +60,13 @@ def ask_question_in_same_conversation(context: Context, endpoint: str) -> None:
 
     # Use context.text if available, otherwise use empty query
     data = json.loads(context.text or "{}")
+    headers = context.auth_headers if hasattr(context, "auth_headers") else {}
    data["conversation_id"] = context.response_data["conversation_id"]
 
     print(data)
-    context.response = requests.post(url, json=data, timeout=DEFAULT_LLM_TIMEOUT)
+    context.response = requests.post(
+        url, json=data, headers=headers, timeout=DEFAULT_LLM_TIMEOUT
+    )
 
 
 @then("The response should have proper LLM response format")

tests/e2e/features/streaming_query.feature

Lines changed: 3 additions & 3 deletions
@@ -62,7 +62,7 @@ Feature: streaming_query endpoint API tests
 
   Scenario: Check if LLM responds for streaming_query request with error for missing query
     Given The system is in default state
-    And I use "streaming_query" to ask question
+    When I use "streaming_query" to ask question
     """
     {"provider": "openai"}
     """
@@ -74,7 +74,7 @@ Feature: streaming_query endpoint API tests
 
   Scenario: Check if LLM responds for streaming_query request with error for missing model
     Given The system is in default state
-    And I use "streaming_query" to ask question
+    When I use "streaming_query" to ask question
     """
     {"query": "Say hello", "provider": "openai"}
     """
@@ -83,7 +83,7 @@ Feature: streaming_query endpoint API tests
 
   Scenario: Check if LLM responds for streaming_query request with error for missing provider
     Given The system is in default state
-    And I use "streaming_query" to ask question
+    When I use "streaming_query" to ask question
     """
     {"query": "Say hello", "model": "gpt-4-turbo"}
     """
