# List of scenarios

## [`authorized_noop.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/authorized_noop.feature)

* Check if the authorized endpoint works fine when user_id and auth header are not provided
* Check if the authorized endpoint works when auth token is not provided
* Check if the authorized endpoint works when user_id is not provided
* Check if the authorized endpoint works when providing empty user_id
* Check if the authorized endpoint works when providing proper user_id
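These e2e scenarios are implemented with the behave BDD framework as Gherkin feature files. A hypothetical sketch of one of the scenarios above (step wording and endpoint path are illustrative, not copied from the actual feature file):

```gherkin
Feature: Authorized endpoint with noop authentication

  Scenario: Authorized endpoint works when providing proper user_id
    Given the service is running
    When I access the authorized endpoint with user_id "test-user"
    Then the status code of the response is 200
    And the response contains the user_id "test-user"
```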

## [`authorized_noop_token.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/authorized_noop_token.feature)

* Check if the authorized endpoint fails when user_id and auth header are not provided
* Check if the authorized endpoint works when user_id is not provided
* Check if the authorized endpoint works when providing empty user_id
* Check if the authorized endpoint works when providing proper user_id
* Check if the authorized endpoint works with proper user_id but bearer token is not present
* Check if the authorized endpoint works when auth token is malformed

## [`authorized_rh_identity.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/authorized_rh_identity.feature)

* Request fails when x-rh-identity header is missing
* Request fails when identity field is missing
* Request succeeds with valid User identity and required entitlements
* Request succeeds with valid System identity and required entitlements
* Request fails when required entitlement is missing
* Request fails when entitlement exists but is_entitled is false
* Request fails when User identity is missing user_id
* Request fails when User identity is missing username
* Request fails when System identity is missing cn
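The `x-rh-identity` header carries a base64-encoded JSON identity payload. A minimal sketch of how a test could construct one (the exact field names below are illustrative; the authoritative schema is defined by the Red Hat identity header format):

```python
import base64
import json


def make_rh_identity_header(identity: dict) -> str:
    """JSON-serialize the identity payload, then base64-encode it,
    which is the shape the x-rh-identity header expects."""
    return base64.b64encode(json.dumps(identity).encode("utf-8")).decode("ascii")


# A minimal User-type identity with an entitlement, mirroring the
# "valid User identity and required entitlements" scenario above.
identity = {
    "identity": {
        "type": "User",
        "user": {"user_id": "1234", "username": "jdoe"},
    },
    "entitlements": {"my-service": {"is_entitled": True}},
}

header_value = make_rh_identity_header(identity)

# Round-trip to confirm the encoding is reversible.
decoded = json.loads(base64.b64decode(header_value))
assert decoded["identity"]["user"]["username"] == "jdoe"
```

Dropping `user_id`, `username`, or setting `is_entitled` to `False` in such a payload is how the failure scenarios above can be exercised.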

## [`conversation_cache_v2.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/conversation_cache_v2.feature)

* V2 conversations endpoint WITHOUT no_tools (known bug - empty vector DB)
* V2 conversations endpoint finds the correct conversation when it exists
* V2 conversations endpoint fails when auth header is not present
* V2 conversations/{conversation_id} endpoint finds conversation with full metadata
* V2 conversations/{conversation_id} endpoint fails when auth header is not present
* V2 conversations/{conversation_id} GET endpoint fails when conversation_id is malformed
* V2 conversations/{conversation_id} GET endpoint fails when conversation does not exist
* Check conversations/{conversation_id} works when llama-stack is down
* Check conversations/{conversation_id} fails when cache not configured
* V2 conversations DELETE endpoint removes the correct conversation
* V2 conversations/{conversation_id} DELETE endpoint fails when auth header is not present
* V2 conversations/{conversation_id} DELETE endpoint fails when conversation_id is malformed
* V2 conversations DELETE endpoint fails when the conversation does not exist
* V2 conversations DELETE endpoint works even when llama-stack is down
* V2 conversations PUT endpoint successfully updates topic summary
* V2 conversations PUT endpoint fails when auth header is not present
* V2 conversations PUT endpoint fails when conversation_id is malformed
* V2 conversations PUT endpoint fails when conversation does not exist
* V2 conversations PUT endpoint fails with empty topic summary (422)

## [`conversations.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/conversations.feature)

* Check if conversations endpoint finds the correct conversation when it exists
* Check if conversations endpoint fails when the auth header is not present
* Check if conversations/{conversation_id} endpoint finds the correct conversation when it exists
* Check if conversations/{conversation_id} endpoint fails when the auth header is not present
* Check if conversations/{conversation_id} GET endpoint fails when conversation_id is malformed
* Check if conversations/{conversation_id} GET endpoint fails when llama-stack is unavailable
* Check if conversations DELETE endpoint removes the correct conversation
* Check if conversations/{conversation_id} DELETE endpoint fails when conversation_id is malformed
* Check if conversations DELETE endpoint fails when the conversation does not exist
* Check if conversations/{conversation_id} DELETE endpoint fails when llama-stack is unavailable

## [`faiss.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/faiss.feature)

* Check if vector store is registered
* Check if rags endpoint fails when llama-stack is unavailable
* Check if rags endpoint responds with error when not authenticated
* Query vector db using the file_search tool

## [`feedback.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/feedback.feature)

* Check if enabling the feedback is working
* Check if disabling the feedback is working
* Check if toggling the feedback with incorrect attribute name fails
* Check if getting feedback status returns true when feedback is enabled
* Check if getting feedback status returns false when feedback is disabled
* Check if feedback endpoint is not working when feedback is disabled
* Check if feedback endpoint fails when required fields are not specified
* Check if feedback endpoint is working when sentiment is negative
* Check if feedback endpoint is working when sentiment is positive
* Check if feedback submission fails when invalid sentiment is passed
* Check if feedback submission fails when nonexisting conversation ID is passed
* Check if feedback submission fails when conversation belongs to a different user
* Check if feedback endpoint is not working when not authorized
* Check if update feedback status endpoint is not working when not authorized
* Check if feedback submission fails when invalid feedback storage path is configured
* Check if feedback endpoint fails when only empty string user_feedback is provided

## [`health.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/health.feature)

* Check if service reports proper readiness state
* Check if service reports proper liveness state
* Check if service reports proper readiness state when llama stack is not available
* Check if service reports proper liveness state even when llama stack is not available
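The readiness and liveness checks above boil down to probing an HTTP endpoint and inspecting the JSON body. A self-contained sketch using only the standard library (the `/v1/readiness` path and `ready` field are assumptions for illustration, not taken from the service code; the stub server stands in for the real service):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class StubReadinessHandler(BaseHTTPRequestHandler):
    """Stand-in for the service: always reports ready."""

    def do_GET(self):
        body = json.dumps({"ready": True}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet


# Bind to port 0 so the OS picks a free port.
server = HTTPServer(("127.0.0.1", 0), StubReadinessHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/v1/readiness"
with urllib.request.urlopen(url) as resp:
    status = resp.status
    payload = json.loads(resp.read())

server.shutdown()
assert status == 200 and payload["ready"] is True
```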

## [`info.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/info.feature)

* Check if the OpenAPI endpoint works as expected
* Check if info endpoint is working
* Check if info endpoint reports error when llama-stack connection is not working
* Check if models endpoint is working
* Check if models endpoint reports error when llama-stack is unreachable
* Check if shields endpoint is working
* Check if shields endpoint reports error when llama-stack is unreachable
* Check if tools endpoint is working
* Check if tools endpoint reports error when llama-stack is unreachable
* Check if metrics endpoint is working
* Check if MCP client auth options endpoint is working

## [`query.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/query.feature)

* Check if LLM responds properly to restrictive system prompt to sent question with different system prompt
* Check if LLM responds properly to non-restrictive system prompt to sent question with different system prompt
* Check if LLM ignores new system prompt in same conversation
* Check if LLM responds to sent question with error when not authenticated
* Check if LLM responds to sent question with error when bearer token is missing
* Check if LLM responds to sent question with error when model does not exist
* Check if LLM responds to sent question with error when attempting to access conversation
* Check if LLM responds for query request with error for missing query
* Check if LLM responds for query request for missing model and provider
* Check if LLM responds for query request with error for missing model
* Check if LLM responds for query request with error for missing provider
* Check if LLM responds properly when XML and JSON attachments are sent

## [`rbac.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/rbac.feature)

* Request without token returns 401
* Request with malformed Authorization header returns 401
* Admin can access query endpoint
* Admin can access models endpoint
* Admin can list conversations
* User can access query endpoint
* User can list conversations
* Viewer can list conversations
* Viewer can access info endpoint
* Viewer cannot query - returns 403
* Query-only user can query without specifying model
* Query-only user cannot override model - returns 403
* Query-only user cannot list conversations - returns 403
* No-role user can access info endpoint (everyone role)
* No-role user cannot query - returns 403
* No-role user cannot list conversations - returns 403
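The role/permission matrix implied by the scenarios above can be summarized as a small lookup table. This is a toy illustration only; role names and action names mirror the scenario list, while the real authorization logic lives in lightspeed-stack itself:

```python
# Toy role-to-allowed-actions map derived from the scenario list above.
ROLE_ACTIONS = {
    "admin": {"query", "list_models", "list_conversations", "get_info"},
    "user": {"query", "list_conversations", "get_info"},
    "viewer": {"list_conversations", "get_info"},
    "query_only": {"query", "get_info"},
}

# Actions granted to everyone, including users with no role.
EVERYONE_ACTIONS = {"get_info"}


def is_allowed(role: str, action: str) -> bool:
    """Return True if the role may perform the action (403 otherwise)."""
    return action in ROLE_ACTIONS.get(role, set()) | EVERYONE_ACTIONS


assert is_allowed("admin", "query")
assert not is_allowed("viewer", "query")             # Viewer cannot query - 403
assert not is_allowed("query_only", "list_conversations")
assert is_allowed("no_role", "get_info")             # everyone role
assert not is_allowed("no_role", "list_conversations")
```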

## [`rest_api.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/rest_api.feature)

* Check if the OpenAPI endpoint works as expected

## [`smoketests.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/smoketests.feature)

* Check if the main endpoint is reachable

## [`streaming_query.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/streaming_query.feature)

* Check if streaming_query response in tokens matches the full response
* Check if LLM responds properly to restrictive system prompt to sent question with different system prompt
* Check if LLM responds properly to non-restrictive system prompt to sent question with different system prompt
* Check if LLM ignores new system prompt in same conversation
* Check if LLM responds for streaming_query request with error for missing query
* Check if LLM responds for streaming_query request for missing model and provider
* Check if LLM responds for streaming_query request with error for missing model
* Check if LLM responds for streaming_query request with error for missing provider
* Check if LLM responds properly when XML and JSON attachments are sent
* Check if LLM responds to sent question with error when not authenticated

## [`llm_interface.feature`](https://github.com/lightspeed-core/lightspeed-stack/blob/main/tests/e2e/features/llm_interface.feature)

* Check if LLM responds to sent question