LCORE-1280: Updated tests doc #1085
Conversation

Walkthrough
This change introduces a new benchmarking infrastructure for the project, including a comprehensive database operation benchmark suite, associated configuration files, a shell script runner, and documentation updates across multiple test directories.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~28 minutes
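To make the walkthrough concrete, here is a minimal stand-in for the kind of timing a database benchmark suite measures. It uses only the standard library; the actual suite in this PR uses pytest-benchmark against the app's database layer, and the table schema and helper names below are illustrative assumptions, not the project's real code.

```python
# Illustrative stand-in for a pytest-benchmark style measurement, stdlib only.
import sqlite3
import statistics
import time

def bench(fn, rounds: int = 50) -> float:
    """Run fn repeatedly and return the median wall-clock time per call."""
    samples = []
    for _ in range(rounds):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical schema standing in for the app's conversation table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conversations (id TEXT PRIMARY KEY, user_id TEXT)")

def insert_conversation():
    conn.execute("INSERT OR REPLACE INTO conversations VALUES ('c1', 'u1')")

median_seconds = bench(insert_conversation)
print(f"median insert time: {median_seconds:.6f}s")
```

A median over many rounds (rather than a single run) is the usual choice here because it is robust to one-off scheduler hiccups.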
🚥 Pre-merge checks (✅ 2 | ❌ 1)
❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@tests/benchmarks/bb`:
- Around line 1-21: Add a bash shebang and fail-fast settings, and use the
script's directory for deterministic cd: at top of the bb script add
"#!/usr/bin/env bash" and "set -euo pipefail", compute SCRIPT_DIR with something
like SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" then replace
"pushd ../.." / "popd" with "cd "$SCRIPT_DIR/../.."" (or save/restore PWD) so
pushd/popd are not required, and keep ARTIFACT_DIR and the final pytest
invocation (tests/benchmarks/test_app_database.py) but update the initial echo
to accurately state you're running benchmarks if desired.
In `@tests/benchmarks/test_app_database.py`:
- Around line 52-61: The benchmark resets the main SQLite DB path but leaves the
conversation cache pointing at a persistent /tmp file; update the test to also
override configuration.database_configuration.conversation_cache.sqlite.db_path
to a tmp_path-based file (e.g., tmp_path / "conversation_cache.db") before
calling database.initialize_database() so the cache is isolated per run; place
this change alongside the existing assignment to
configuration.database_configuration.sqlite.db_path and before
database.initialize_database()/database.create_tables() to ensure cache
initialization uses the temp path.
- Around line 1-10: The file fails Black formatting; run Black on the test file
(or the repo) and commit the reformatted changes so CI passes. Specifically,
format tests/benchmarks/test_app_database.py (fix import grouping/spacing and
any line breaks) by running the black CLI (e.g., black <file> or black .) then
add/commit the updated file.
- Around line 258-262: The docstring for update_user_conversation incorrectly
states that the id parameter is optional and a generated suid will be used;
update the docstring for the update_user_conversation function to reflect the
actual signature by changing the Parameters entry for id to "id (str): Required
identifier for the conversation" (or equivalent wording) and remove any mention
of generating a suid when id is not provided so the doc matches the function
signature.
In `@tests/benchmarks/xxx.py`:
- Around line 2-10: Wrap the block into a proper function (e.g.,
fetch_user_conversations(user_id)) and remove the stray else and broken try by
using a normal try/except or context manager only; call get_session() inside
that function and ensure get_session, UserConversation and user_id are available
via imports or parameters. Inside the function create query =
session.query(UserConversation).filter_by(user_id=user_id), assign to
filtered_query, call filtered_query.all(), handle exceptions in except (logging
or re-raising) and return the resulting list so callers can use the data. Ensure
symbols referenced are defined or imported and that the function has a clear
callable signature.
🧹 Nitpick comments (1)
tests/configuration/benchmarks-lightspeed-stack.yaml (1)
10-18: Consider env-var substitution for `api_key`.
Keeping a literal key (even a placeholder) can lead to accidental real key commits later. Prefer wiring through the configuration’s env substitution mechanism.
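The configuration's actual substitution syntax isn't shown in this diff, so the following is only a generic sketch of the idea: resolve a `${VAR}`-style placeholder from the environment at load time instead of committing a literal key. The function name and placeholder format are assumptions.

```python
# Hypothetical resolver for ${VAR}-style placeholders in loaded YAML values;
# the project's real substitution mechanism may use a different syntax.
import os

def resolve_env(value: str) -> str:
    """Replace a ${VAR} placeholder with the environment variable's value."""
    if value.startswith("${") and value.endswith("}"):
        return os.environ.get(value[2:-1], "")
    return value

os.environ["DEMO_API_KEY"] = "not-a-real-key"
print(resolve_env("${DEMO_API_KEY}"))  # resolved from the environment
```

With this shape, the YAML can carry `api_key: ${DEMO_API_KEY}` and the secret never lands in version control.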
tests/benchmarks/bb
Outdated
```sh
echo "Running unit tests..."
echo "Reports will be written to ${ARTIFACT_DIR}"
#python -m pytest -vv tests/unit/cache/test_postgres_cache.py --cov=ols --cov-report term-missing
#python -m pytest -vv tests/unit/cache/test_postgres_cache.py
#python -m pytest -vv tests/unit/utils/test_config.py
#python -m pytest -vv tests/unit/cache/test_cache_factory.py
#pdm run python -m pytest -vv tests/unit/quota/test_user_quota_limiter.py
#pdm run python -m pytest -vv tests/unit/app/models/test_config.py
#pdm run python -m pytest -vv tests/unit/quota/test_quota_limiter_factory.py
#pdm run python -m pytest -vv tests/unit/query_helpers/test_docs_summarizer.py
#pdm run python -m pytest -vv tests/unit/llms/
#pdm run python -m pytest -vv tests/unit/plugins/ --cov=ols/plugins --cov-report term-missing
#pdm run python -m pytest -vv tests/unit/runners/
#uv run python -m pytest -vv tests/unit/models/test_requests.py --cov=src/models --cov-report term-missing
#uv run python -m pytest -vv tests/unit/cache/test_postgres_cache.py --cov=src/cache/ --cov-report term-missing
#uv run python -m pytest -vv tests/unit/app/endpoints/test_query.py --cov=src/cache/ --cov-report term-missing
#uv run python -m pytest -vv tests/unit/app/endpoints/test_streaming_query.py --cov=src/cache/ --cov-report term-missing
pushd ../..
uv run python -m pytest -vv tests/benchmarks/test_app_database.py
popd
```
Add a bash shebang and fail-fast mode to prevent runtime failures.
This script lacks a shebang, so executing it as `./bb` invokes `/bin/sh`, where `pushd` and `popd` are unavailable and will fail. Additionally, it says "Running unit tests..." but executes a benchmark test, and it uses a relative path that depends on the caller's working directory.
Proposed fix

```diff
+#!/usr/bin/env bash
+set -euo pipefail
+
+SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
+ROOT_DIR="${SCRIPT_DIR}/../.."
+: "${ARTIFACT_DIR:=${ROOT_DIR}/artifacts}"
+
-echo "Running unit tests..."
+echo "Running benchmark tests..."
 echo "Reports will be written to ${ARTIFACT_DIR}"
@@
-pushd ../..
+pushd "${ROOT_DIR}" >/dev/null
 uv run python -m pytest -vv tests/benchmarks/test_app_database.py
-popd
+popd >/dev/null
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```sh
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="${SCRIPT_DIR}/../.."
: "${ARTIFACT_DIR:=${ROOT_DIR}/artifacts}"

echo "Running benchmark tests..."
echo "Reports will be written to ${ARTIFACT_DIR}"
#python -m pytest -vv tests/unit/cache/test_postgres_cache.py --cov=ols --cov-report term-missing
#python -m pytest -vv tests/unit/cache/test_postgres_cache.py
#python -m pytest -vv tests/unit/utils/test_config.py
#python -m pytest -vv tests/unit/cache/test_cache_factory.py
#pdm run python -m pytest -vv tests/unit/quota/test_user_quota_limiter.py
#pdm run python -m pytest -vv tests/unit/app/models/test_config.py
#pdm run python -m pytest -vv tests/unit/quota/test_quota_limiter_factory.py
#pdm run python -m pytest -vv tests/unit/query_helpers/test_docs_summarizer.py
#pdm run python -m pytest -vv tests/unit/llms/
#pdm run python -m pytest -vv tests/unit/plugins/ --cov=ols/plugins --cov-report term-missing
#pdm run python -m pytest -vv tests/unit/runners/
#uv run python -m pytest -vv tests/unit/models/test_requests.py --cov=src/models --cov-report term-missing
#uv run python -m pytest -vv tests/unit/cache/test_postgres_cache.py --cov=src/cache/ --cov-report term-missing
#uv run python -m pytest -vv tests/unit/app/endpoints/test_query.py --cov=src/cache/ --cov-report term-missing
#uv run python -m pytest -vv tests/unit/app/endpoints/test_streaming_query.py --cov=src/cache/ --cov-report term-missing
pushd "${ROOT_DIR}" >/dev/null
uv run python -m pytest -vv tests/benchmarks/test_app_database.py
popd >/dev/null
```
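The `SCRIPT_DIR` idiom in the suggestion is what makes the runner location-independent. A small self-contained demo (the `/tmp/demo_script_dir.sh` path is only for illustration) shows that a script using it reports its own directory even when invoked from somewhere else entirely:

```shell
# Demo: a script that resolves its own directory works regardless of the
# caller's current working directory.
cat > /tmp/demo_script_dir.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
echo "${SCRIPT_DIR}"
EOF
chmod +x /tmp/demo_script_dir.sh
# Invoke from / to prove the resolved directory does not depend on the CWD.
(cd / && /tmp/demo_script_dir.sh)
```

`${BASH_SOURCE[0]}` is preferred over `$0` here because it also works when the script is sourced rather than executed.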
```python
"""Benchmarks for app.database module."""

from typing import Optional
from pathlib import Path
import pytest
import random
from sqlalchemy.orm import Session
from pytest_benchmark.fixture import BenchmarkFixture
from datetime import UTC, datetime
```
Black formatting failure.
CI reports that Black would reformat this file; please run Black and re-commit the results.
🧰 Tools
🪛 GitHub Actions: Black
[error] 1-1: Black formatting check would reformat this file. Run 'black' to apply formatting fixes.
```python
# try to load the configuration containing SQLite database setup
configuration.load_configuration(configuration_filename)
assert configuration.database_configuration.sqlite is not None

# we need to start each benchmark with empty database
configuration.database_configuration.sqlite.db_path = str(tmp_path / "database.db")

# initialize database session and create tables
database.initialize_database()
database.create_tables()
```
Benchmark DB isolation: consider overriding conversation_cache path.
You reset the SQLite DB path, but the benchmark config still points conversation_cache at a persistent /tmp/ file. If cache initialization runs during benchmarks, this can leak state between runs. Consider setting configuration.database_configuration.conversation_cache.sqlite.db_path to a tmp_path location alongside the main DB.
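A sketch of the suggested isolation: redirect both paths into the per-run temp directory before any initialization runs. The `SimpleNamespace` object below is only a stand-in for the project's real `configuration` object, and the attribute path mirrors the one named in this comment.

```python
# Sketch: point both the main DB and the conversation cache at per-run temp
# files before initializing the database. The SimpleNamespace config is a
# stand-in for the project's real configuration object.
import tempfile
from pathlib import Path
from types import SimpleNamespace

def isolate_db_paths(configuration, tmp_path: Path) -> None:
    """Redirect both SQLite files into the benchmark's temp directory."""
    configuration.database_configuration.sqlite.db_path = str(tmp_path / "database.db")
    configuration.database_configuration.conversation_cache.sqlite.db_path = str(
        tmp_path / "conversation_cache.db"
    )

# Stand-in config shaped like the attribute path used in the benchmark.
cfg = SimpleNamespace(
    database_configuration=SimpleNamespace(
        sqlite=SimpleNamespace(db_path="/tmp/database.db"),
        conversation_cache=SimpleNamespace(
            sqlite=SimpleNamespace(db_path="/tmp/cache.db")
        ),
    )
)

with tempfile.TemporaryDirectory() as tmp:
    isolate_db_paths(cfg, Path(tmp))
    # Both paths now live under the per-run temp directory.
    print(cfg.database_configuration.conversation_cache.sqlite.db_path)
```

In the actual test, `tmp_path` would come from pytest's built-in fixture, so cleanup is automatic.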
```python
Parameters:
    session (Session): SQLAlchemy session used to persist the record.
    id (Optional[str]): Optional explicit ID to assign to the new conversation.
        If not provided, a generated suid will be used.
```
Fix the docstring for update_user_conversation.
The signature requires `id: str`, but the docstring says it's optional and generated.
✏️ Proposed docstring fix

```diff
-        id (Optional[str]): Optional explicit ID to assign to the new conversation.
-            If not provided, a generated suid will be used.
+        id (str): Explicit ID of the conversation to update.
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
Parameters:
    session (Session): SQLAlchemy session used to persist the record.
    id (str): Explicit ID of the conversation to update.
```
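This class of doc/signature drift can also be caught mechanically: a required parameter has no default, which `inspect` can verify. The stub below only mirrors the reported signature, not the real function body.

```python
# Verify that `id` is required (it has no default), which is why the
# docstring's "optional" claim is wrong. The stub mirrors only the
# signature reported in the review, not the real implementation.
import inspect

def update_user_conversation(session, id: str) -> None:
    """Stub standing in for the real function."""

sig = inspect.signature(update_user_conversation)
id_param = sig.parameters["id"]
print(id_param.default is inspect.Parameter.empty)  # True: id is required
```

A small test like this next to each public function keeps docstrings honest without manual review.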
tests/benchmarks/xxx.py
Outdated
```python
with get_session() as session:
    try:
        query = session.query(UserConversation)

        filtered_query = (
            else query.filter_by(user_id=user_id)
        )

        user_conversations = filtered_query.all()
```
Fix invalid syntax and define a proper callable.
This file won't parse (unexpected indentation, a stray `else`, and a `try` without `except`/`finally`). Also, `user_id`, `get_session`, and `UserConversation` are undefined here. Please restructure into a function and fix the query.
🐛 Suggested fix (structure + syntax)

```diff
+from app.database import get_session
+from models.database.conversations import UserConversation
+
+def list_user_conversations(user_id: str):
+    with get_session() as session:
+        query = session.query(UserConversation).filter_by(user_id=user_id)
+        return query.all()
-
-with get_session() as session:
-    try:
-        query = session.query(UserConversation)
-
-        filtered_query = (
-            else query.filter_by(user_id=user_id)
-        )
-
-        user_conversations = filtered_query.all()
```

🧰 Tools
🪛 GitHub Actions: Ruff
[error] 2-2: invalid-syntax: Unexpected indentation at line 2.
[error] 7-7: invalid-syntax: Expected an identifier, but found a keyword else that cannot be used here.
[error] 7-7: invalid-syntax: Expected ), found name after else at line 7.
[error] 8-8: invalid-syntax: Expected a statement after line 8.
[error] 8-8: invalid-syntax: Unexpected token in expression at line 8.
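The fixed shape (a context-managed session, one filtered query, a returned result list) can be demonstrated with a self-contained stdlib analogue. The real code uses SQLAlchemy's `get_session()` and the `UserConversation` model; `sqlite3` stands in for both here.

```python
# Stdlib analogue of the fixed pattern: a context-managed session, a
# filtered query, and a returned list. sqlite3 stands in for SQLAlchemy.
import sqlite3
from contextlib import contextmanager

@contextmanager
def get_session(db_path: str = ":memory:"):
    """Yield a connection and guarantee it is closed afterwards."""
    conn = sqlite3.connect(db_path)
    try:
        yield conn
    finally:
        conn.close()

def list_user_conversations(session, user_id: str) -> list:
    """Return the conversation IDs belonging to one user."""
    rows = session.execute(
        "SELECT id FROM conversations WHERE user_id = ?", (user_id,)
    ).fetchall()
    return [row[0] for row in rows]

with get_session() as session:
    session.execute("CREATE TABLE conversations (id TEXT, user_id TEXT)")
    session.execute("INSERT INTO conversations VALUES ('c1', 'u1')")
    session.execute("INSERT INTO conversations VALUES ('c2', 'u2')")
    print(list_user_conversations(session, "u1"))  # ['c1']
```

Returning the list (instead of assigning to a local and dropping it) is what makes the helper usable by callers, which was the core of the review comment.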
Description
LCORE-1280: Updated tests doc
Type of change
Tools used to create PR
Related Tickets & Documents
Summary by CodeRabbit
Tests
Chores