⚠️ Caution: Review failed. The pull request is closed.

📝 Walkthrough (Summary by CodeRabbit)

Adds a tracked performance test suite under performance_tests/, a GitHub Actions workflow (.github/workflows/performance-tests.yml) to run the suite across Python 3.9–3.12 (matrix, schedule, and manual dispatch), and utilities to run and aggregate results locally and in CI; updates .gitignore to include the test directory.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Trigger as Push/PR/Schedule/Dispatch
    participant GH as GitHub Actions
    participant Matrix as performance-test (matrix)
    participant PyTest as pytest
    participant Artifact as Artifacts Storage
    participant Summary as test-summary
    Trigger->>GH: start "Performance Tests"
    GH->>Matrix: start jobs for each Python version
    Matrix->>Matrix: checkout, setup Python, cache & install deps
    Matrix->>PyTest: run api/perf/stability (+extended if enabled)
    PyTest-->>Matrix: emit XML, JSON, benchmark files
    Matrix->>Artifact: upload per-version artifacts
    GH->>Summary: run summary job after matrix
    Summary->>Artifact: download artifacts
    Summary->>Summary: run generate_summary.py -> summary-report.md/json
    alt Pull Request
        Summary->>GH: post PR comment with summary
    end
    Summary->>Artifact: upload aggregated summary
```

```mermaid
sequenceDiagram
    autonumber
    actor User as Local User
    participant Runner as run_tests.py
    participant Pip as pip
    participant PyTest as pytest
    participant FS as Filesystem
    User->>Runner: run (api/perf/stability/all) [--install-deps]
    opt install dependencies
        Runner->>Pip: pip install -r requirements.txt
        Pip-->>Runner: install result
    end
    Runner->>PyTest: invoke selected test group(s)
    PyTest-->>Runner: results (stdout, exit code, artifacts)
    Runner->>Runner: aggregate -> JSON report
    opt --save-report
        Runner->>FS: write test-report.json
    end
    Runner-->>User: print summary, exit code
```
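The local flow above can be condensed into a sketch like this (hypothetical names and report schema; run_tests.py's actual implementation may differ):

```python
import json
import subprocess
import sys
from datetime import datetime, timezone

def run_group(name, pytest_args):
    """Run one test group via pytest and record its exit code."""
    result = subprocess.run(
        [sys.executable, "-m", "pytest", *pytest_args],
        capture_output=True, text=True,
    )
    return {"group": name, "exit_code": result.returncode,
            "passed": result.returncode == 0}

def aggregate(results):
    """Fold per-group results into one JSON-serializable report."""
    return {
        "generated": datetime.now(timezone.utc).isoformat(),
        "groups": results,
        "all_passed": all(r["passed"] for r in results),
    }

def save_report(results, path="test-report.json"):
    """Mirror of the --save-report step: write the aggregate to disk."""
    with open(path, "w") as f:
        json.dump(aggregate(results), f, indent=2)
```

The key design point is that each group contributes only its exit code and artifacts; aggregation stays a pure function over those records.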
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60–90 minutes

Possibly related PRs
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration: you can enable these sources in your CodeRabbit configuration.

📒 Files selected for processing (6)
PR Reviewer Guide 🔍

Here are some key observations to aid the review process:
PR Code Suggestions ✨

Latest suggestions up to ad08875

Previous suggestions: ✅ Suggestions up to commit a48a39d
Actionable comments posted: 13
♻️ Duplicate comments (5)
.github/workflows/performance-tests.yml (5)
5-15: Branch filters exclude master; PRs to master won't trigger. Include master in both push and pull_request.

```diff
 push:
-    branches: [ main, develop ]
+    branches: [ master, main, develop ]
 ...
 pull_request:
-    branches: [ main, develop ]
+    branches: [ master, main, develop ]
```
65-69: setup-python v4 is deprecated; use v5. [action maintenance]

```diff
-      - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v4
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v5
         with:
           python-version: ${{ matrix.python-version }}
```
125-133: Upload-artifact v3 → v4. [action maintenance]

```diff
-      - name: Upload Test Results
-        uses: actions/upload-artifact@v3
+      - name: Upload Test Results
+        uses: actions/upload-artifact@v4
```
134-140: Upload-artifact v3 → v4 (benchmarks). [action maintenance]

```diff
-      - name: Upload Benchmark Results
-        uses: actions/upload-artifact@v3
+      - name: Upload Benchmark Results
+        uses: actions/upload-artifact@v4
```
160-167: Upload summary artifact v3 → v4. [action maintenance]

```diff
-      - name: Upload Summary Report
-        uses: actions/upload-artifact@v3
+      - name: Upload Summary Report
+        uses: actions/upload-artifact@v4
```
🧹 Nitpick comments (27)
performance_tests/requirements.txt (1)
1-5: Reproducible installs. Consider pinning via a constraints file (with hashes) to stabilize CI and matrix reproducibility.
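For illustration only (tool invocation and hash are placeholders, not part of this PR), a hash-pinned requirements file generated by a resolver such as pip-tools might look like:

```text
# requirements.txt pinned with hashes — regenerate rather than edit by hand,
# e.g. with: pip-compile --generate-hashes requirements.in
pytest==8.2.0 \
    --hash=sha256:<hash of the resolved artifact>
```

With hashes present, pip verifies every downloaded artifact, so the matrix jobs install byte-identical dependencies across runs.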
performance_tests/README.md (4)
18-29: Specify code fence language (tree). Markdownlint MD040; use a language hint.

````diff
-```
+```text
 performance_tests/
 ├── conftest.py                # Pytest configuration and fixtures
 ...
````

184-213: Specify code fence language (console output). Add a language hint to fix MD040.

````diff
-```
+```text
 🚀 Kakashi Performance Test Suite
 ...
````

217-233: Specify code fence language (benchmark results). Add a language hint to fix MD040.

````diff
-```
+```text
 Kakashi vs Standard Library:
 ...
````

139-147: Align docs with deps. The README uses pytest -n and --cov; call out pytest-xdist and pytest-cov as required (or add them to requirements per my other comment).

performance_tests/conftest.py (2)

26-48: Config keys overlap; unify memory limits. Both performance.max_memory_usage (MB) and performance.memory_limit_mb exist; clarify or consolidate to a single key and unit to avoid drift.

```diff
-    "memory_limit_mb": 500,
+    "max_memory_usage_mb": 500,
     "min_sync_throughput": 10000,
 ...
-    "max_memory_usage": 100,
     "max_concurrency_penalty": 0.3
```

And update consumers accordingly.
67-72: Nit: unused lambda arg. Rename to underscore to satisfy linters.

```diff
-    logger.add(lambda msg: None, level="INFO")
+    logger.add(lambda _msg: None, level="INFO")
```

.github/workflows/performance-tests.yml (2)
20-36: Unused workflow_dispatch input (python_version). The input isn't used to alter the matrix; either wire it up (dynamic matrix) or drop it to avoid confusion.

184-184: Add trailing newline. The file lacks a newline at EOF; some tools warn.
performance_tests/generate_summary.py (4)
1-1: Shebang not needed (script invoked via python). Either remove the shebang or mark the file executable. Simpler: remove.

```diff
-#!/usr/bin/env python3
```
40-42: Avoid blind except when loading JSON. Limit to (OSError, json.JSONDecodeError) to surface real issues.

```diff
-        except Exception as e:
-            print(f"Warning: Could not load {json_file}: {e}")
+        except (OSError, json.JSONDecodeError) as e:
+            print(f"Warning: Could not load {json_file}: {e}")
```
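Standalone sketch of the narrowed handler (load_results is an illustrative name, not the PR's function):

```python
import json
from pathlib import Path
from typing import Optional

def load_results(json_file: Path) -> Optional[dict]:
    """Load one results file, catching only expected I/O and parse errors."""
    try:
        return json.loads(json_file.read_text())
    except (OSError, json.JSONDecodeError) as e:
        # Missing/unreadable files and malformed JSON are skippable here;
        # anything else (e.g. a programming error) should propagate loudly.
        print(f"Warning: Could not load {json_file}: {e}")
        return None
```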
71-74: Use UTC correctly. You label the timestamp "UTC" but use local time. Prefer a timezone-aware datetime (datetime.utcnow() also works but is deprecated as of Python 3.12, which is in the matrix).

```diff
-    report.append(f"**Generated:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S UTC')}")
+    report.append(f"**Generated:** {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S UTC')}")
```
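As a standalone sketch (utc_stamp is an illustrative helper), the aware spelling guarantees the label matches the clock:

```python
from datetime import datetime, timezone

def utc_stamp() -> str:
    """Timezone-aware UTC timestamp; bare datetime.now() uses local time."""
    return datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
```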
101-108: Display clean Python versions in report. Keys like python_311 render as "Python python_311". Strip the prefix for readability.

```diff
     for version in analysis["python_versions"]:
+        display_version = version.replace("python_", "")
         if version in results:
             data = results[version]
             summary = data.get("summary", {})
             status = "✅ PASS" if summary.get("failed_tests", 0) == 0 else "❌ FAIL"
-            report.append(f"- **Python {version}:** {status}")
+            report.append(f"- **Python {display_version}:** {status}")
```

Also update the "Detailed Results" section similarly if desired.
performance_tests/run_tests.py (2)
18-20: Define TESTS_DIR once and reuse. Avoid recomputing and hard-coding paths.

```diff
@@
-# Add parent directory to path
-sys.path.insert(0, str(Path(__file__).parent.parent))
+# Add parent directory to path and cache paths
+TESTS_DIR = Path(__file__).parent.resolve()
+sys.path.insert(0, str(TESTS_DIR.parent))
```
160-171: Remove extraneous f-strings with no placeholders. Tiny cleanup for Ruff F541.

```diff
-    print(f"\n📈 Summary:")
+    print("\n📈 Summary:")
@@
-    print(f"\n🐍 Environment:")
+    print("\n🐍 Environment:")
@@
-    print(f"\n📋 Detailed Results:")
+    print("\n📋 Detailed Results:")
```

performance_tests/pytest.ini (2)
29-33: Use a high-resolution timer for benchmarks. time.perf_counter gives better precision than time.time for microbenchmarks.

```diff
-benchmark_timer = time.time
+benchmark_timer = time.perf_counter
```
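The difference in miniature (measure is an illustrative helper): time.perf_counter is monotonic and uses the highest-resolution clock available, while time.time follows the wall clock and can jump under NTP adjustments.

```python
import time

def measure(fn, repeats=1000):
    """Time a callable with the monotonic, high-resolution perf_counter."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return time.perf_counter() - start
```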
34-36: Ensure pytest-timeout is installed. The timeout/timeout_method options require pytest-timeout; otherwise pytest will error on unknown config. Confirm it's listed in performance_tests/requirements.txt.
I can open a follow-up to add pytest-timeout to requirements if missing.
performance_tests/test_api_compatibility.py (1)
223-233: Validate expected exception type. If LogLevel.from_name uses Enum lookup, it may raise KeyError, not ValueError. Verify the behavior in kakashi.core.records and align the test.
If needed, I can adjust the test to accept KeyError or update from_name to raise ValueError consistently.
performance_tests/test_stability.py (6)
28-30: Remove redundant re-imports inside tests. Top-level imports already provide threading/time/psutil/gc. Local re-imports add noise and trigger F811.

```diff
-        import threading
-        import time
+        # use top-level threading/time
@@
-        import threading
+        # use top-level threading
@@
-        import threading
-        import time
+        # use top-level threading/time
@@
-        import psutil
-        import gc
+        # use top-level psutil/gc
@@
-        import psutil
-        import gc
+        # use top-level psutil/gc
@@
-        import threading
+        # use top-level threading
@@
-        import threading
+        # use top-level threading
@@
-        import psutil
-        import gc
+        # use top-level psutil/gc
```

Also applies to: 89-89, 131-133, 184-185, 232-233, 277-277, 330-330, 383-384
68-86: Stability throughput assertion is brittle on CI. Hard-coding throughput >1000 msg/s can be flaky. Make it configurable via env or lower the default.

```diff
         throughput = total_messages / total_time
@@
-        assert throughput > 1000, f"Throughput too low: {throughput:.0f} msg/s"
+        import os
+        min_tp = int(os.environ.get("KAKASHI_MIN_THROUGHPUT", "200"))
+        assert throughput > min_tp, f"Throughput too low: {throughput:.0f} msg/s (min {min_tp})"
```
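The env-override pattern in isolation (min_throughput is an illustrative helper; KAKASHI_MIN_THROUGHPUT is the variable the suggested diff introduces):

```python
import os

def min_throughput(default=200):
    """CI can raise or lower the floor via env without touching test code."""
    return int(os.environ.get("KAKASHI_MIN_THROUGHPUT", str(default)))
```

A workflow step can then export a stricter floor on beefy runners and leave the lenient default for shared ones.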
68-72: Use perf_counter for timing. Higher precision for throughput/duration measurements.

```diff
-        start_time = time.time()
+        start_time = time.perf_counter()
@@
-        end_time = time.time()
+        end_time = time.perf_counter()
@@
-        start_time = time.time()
+        start_time = time.perf_counter()
@@
-        while time.time() - start_time < 30:  # 30 seconds
+        while time.perf_counter() - start_time < 30:  # 30 seconds
@@
-        end_time = time.time()
+        end_time = time.perf_counter()
```

Also applies to: 392-419
346-354: Avoid unused exception variable. No need to bind e; logger.exception captures it.

```diff
-        except Exception as e:
+        except Exception:
             # Log the exception
             kakashi_sync_logger.exception(f"Caught exception in worker {worker_id}")
```
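The same pattern with the standard library's logging, assuming kakashi's exception() behaves analogously (worker/risky are illustrative names):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("worker")

def risky():
    raise ValueError("boom")

def worker():
    try:
        risky()
    except Exception:
        # exception() pulls the active exception from sys.exc_info(),
        # so binding "as e" would just create an unused variable.
        log.exception("Caught exception in worker")
```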
73-81: Remove f-prefix where there are no placeholders. Tidy up F541.

```diff
-        print(f"\nHigh Concurrency Stability Test:")
+        print("\nHigh Concurrency Stability Test:")
@@
-        print(f"\nConcurrent Logger Creation Test:")
+        print("\nConcurrent Logger Creation Test:")
@@
-        print(f"\nMixed Operations Stability Test:")
+        print("\nMixed Operations Stability Test:")
@@
-        print(f"\nMemory Leak Detection Test:")
+        print("\nMemory Leak Detection Test:")
@@
-        print(f"\nMemory Pressure Recovery Test:")
+        print("\nMemory Pressure Recovery Test:")
@@
-        print(f"\nMalformed Input Stability Test:")
+        print("\nMalformed Input Stability Test:")
@@
-        print(f"\nException Logging Stability Test:")
+        print("\nException Logging Stability Test:")
@@
-        print(f"\nExtended Logging Stability Test:")
+        print("\nExtended Logging Stability Test:")
```

Also applies to: 120-124, 170-174, 219-225, 261-267, 319-323, 369-373, 427-435
94-103: Prefer factory helpers over direct class instantiation. Creating loggers via kakashi_sync_logger.__class__ bypasses any factory-level caching/config. Use get_logger for fidelity.

```diff
-            logger = kakashi_sync_logger.__class__(f"concurrent_logger_{worker_id}_{i}")
+            from kakashi import get_logger
+            logger = get_logger(f"concurrent_logger_{worker_id}_{i}")
@@
-        logger = kakashi_sync_logger.__class__(f"pressure_{i}")
+        from kakashi import get_logger
+        logger = get_logger(f"pressure_{i}")
```

Also applies to: 241-249
performance_tests/test_performance.py (3)
126-131: Bind loop variables in inner functions to avoid late-binding gotchas (B023). Harmless here, but it silences the linter and future-proofs.

```diff
@@
-        def run_concurrent():
-            concurrent_logging(thread_count)
+        def run_concurrent(count=thread_count):
+            concurrent_logging(count)
@@
-        def run_concurrent_async():
-            asyncio.run(concurrent_async_logging(task_count))
+        def run_concurrent_async(count=task_count):
+            asyncio.run(concurrent_async_logging(count))
@@
-        def log_messages():
-            for i in range(message_count):
+        def log_messages(count=message_count):
+            for i in range(count):
                 kakashi_sync_logger.info(f"Scalability test message {i}")
@@
-        def concurrent_logging():
+        def concurrent_logging(count=thread_count):
             def log_messages():
                 for i in range(50):
                     kakashi_sync_logger.info(f"Concurrency test message {i}")
             threads = []
-            for _ in range(thread_count):
+            for _ in range(count):
                 thread = threading.Thread(target=log_messages)
                 threads.append(thread)
                 thread.start()
             for thread in threads:
                 thread.join()
@@
         total_messages = thread_count * 50
```

Also applies to: 145-150, 249-256, 263-285
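What B023 guards against, demonstrated standalone (function names are illustrative): closures read loop variables at call time, so deferred calls all see the final value unless it is frozen via a default argument.

```python
def deferred_late(counts):
    """Each lambda reads n when called — after the loop, n holds its last value."""
    tasks = [lambda: n * 10 for n in counts]
    return [t() for t in tasks]

def deferred_bound(counts):
    """Default args are evaluated at definition time, freezing each value."""
    tasks = [lambda n=n: n * 10 for n in counts]
    return [t() for t in tasks]
```

In the tests above the closure runs before the loop variable changes, so the behavior happens to be correct; binding the value makes that correctness independent of execution order.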
157-158: Drop redundant re-imports of psutil/gc inside tests. They're already imported at module level.

```diff
-        import psutil
-        import gc
+        # use top-level psutil/gc
@@
-        import psutil
-        import gc
+        # use top-level psutil/gc
```

Also applies to: 187-188
177-181: Remove f-prefix where there are no placeholders.

```diff
-        print(f"\nMemory Usage Benchmark:")
+        print("\nMemory Usage Benchmark:")
@@
-        print(f"\nMemory Pressure Test:")
+        print("\nMemory Pressure Test:")
```

Also applies to: 212-216
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (11)
- .github/workflows/performance-tests.yml (1 hunks)
- .gitignore (1 hunks)
- performance_tests/README.md (1 hunks)
- performance_tests/conftest.py (1 hunks)
- performance_tests/generate_summary.py (1 hunks)
- performance_tests/pytest.ini (1 hunks)
- performance_tests/requirements.txt (1 hunks)
- performance_tests/run_tests.py (1 hunks)
- performance_tests/test_api_compatibility.py (1 hunks)
- performance_tests/test_performance.py (1 hunks)
- performance_tests/test_stability.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- .gitignore
🧰 Additional context used
🧬 Code graph analysis (6)
performance_tests/conftest.py (2)
- kakashi/core/functional_logger.py (1): BoundLogger (251-376)
- kakashi/core/interface.py (1): get_structured_logger (86-111)

performance_tests/test_stability.py (2)
- performance_tests/conftest.py (1): kakashi_sync_logger (92-95)
- kakashi/core/async_backend.py (1): start (301-315)

performance_tests/test_api_compatibility.py (6)
- kakashi/core/logger.py (2): Logger (136-225), AsyncLogger (228-297)
- kakashi/core/records.py (4): LogLevel (15-31), LogRecord (96-256), LogContext (35-92), from_name (24-26)
- performance_tests/conftest.py (3): kakashi_sync_logger (92-95), kakashi_async_logger (98-101), kakashi_structured_logger (104-107)
- kakashi/core/interface.py (5): setup_logging (152-191), set_log_level (194-257), set_request_context (264-286), clear_request_context (320-322), set_user_context (289-302)
- kakashi/core/config.py (3): get_environment_config (450-452), get_current_context (496-498), context_scope (521-543)
- kakashi/core/pipeline.py (2): create_console_pipeline (356-374), create_file_pipeline (377-396)

performance_tests/test_performance.py (1)
- performance_tests/conftest.py (4): kakashi_sync_logger (92-95), comparison_loggers (51-89), kakashi_async_logger (98-101), kakashi_structured_logger (104-107)

performance_tests/generate_summary.py (1)
- performance_tests/run_tests.py (1): main (196-251)

performance_tests/run_tests.py (1)
- performance_tests/generate_summary.py (1)
🪛 LanguageTool
performance_tests/README.md
[grammar] ~1-~1: There might be a mistake here.
Context: # 🚀 Kakashi Performance Test Suite A comprehensive, industry-standard test ...
(QB_NEW_EN)
[grammar] ~5-~5: There might be a mistake here.
Context: ...rsions and environments. ## 📋 Overview This test suite provides: - **API Compa...
(QB_NEW_EN)
[style] ~11-~11: ‘under stress’ might be wordy. Consider a shorter alternative.
Context: ...Stability Tests**: Validate reliability under stress conditions - Cross-Python Testing: ...
(EN_WORDINESS_PREMIUM_UNDER_STRESS)
[grammar] ~16-~16: There might be a mistake here.
Context: ...performance metrics ## 🏗️ Architecture performance_tests/ ├── conftest.py # Pytest configuration and fixtures ├── test_api_compatibility.py # API compatibility tests ├── test_performance.py # Performance benchmark tests ├── test_stability.py # Stability and stress tests ├── run_tests.py # Main test runner script ├── generate_summary.py # Summary report generator ├── pytest.ini # Pytest configuration ├── requirements.txt # Test dependencies └── README.md # This file ## 🚀 Quick Start ### 1. Install Dependenc...
(QB_NEW_EN)
[grammar] ~31-~31: There might be a mistake here.
Context: ... # This file ## 🚀 Quick Start ### 1. Install Dependencies bash cd perf...
(QB_NEW_EN)
[grammar] ~65-~65: There might be a mistake here.
Context: ...--save-report ``` ## 🧪 Test Categories ### API Compatibility Tests (`test_api_compa...
(QB_NEW_EN)
[grammar] ~69-~69: There might be a mistake here.
Context: ...: Import, creation, and basic operations - Configuration: Environment setup and l...
(QB_NEW_EN)
[grammar] ~70-~70: There might be a mistake here.
Context: ...vironment setup and log level validation - Context Management: Request and user c...
(QB_NEW_EN)
[grammar] ~71-~71: There might be a mistake here.
Context: ...ent**: Request and user context handling - Pipeline System: Console and file pipe...
(QB_NEW_EN)
[grammar] ~72-~72: There might be a mistake here.
Context: ...em**: Console and file pipeline creation - Error Handling: Malformed inputs and e...
(QB_NEW_EN)
[grammar] ~77-~77: There might be a mistake here.
Context: ...ages per second under various conditions - Concurrency: Multi-threaded and async ...
(QB_NEW_EN)
[grammar] ~78-~78: There might be a mistake here.
Context: ...**: Multi-threaded and async performance - Memory Usage: Memory consumption and l...
(QB_NEW_EN)
[grammar] ~79-~79: There might be a mistake here.
Context: ...*: Memory consumption and leak detection - Latency: Single message and batch proc...
(QB_NEW_EN)
[grammar] ~80-~80: There might be a mistake here.
Context: ...ingle message and batch processing times - Scalability: Performance scaling with ...
(QB_NEW_EN)
[grammar] ~85-~85: There might be a mistake here.
Context: ...tability**: High-concurrency reliability - Memory Stability: Memory leak detectio...
(QB_NEW_EN)
[grammar] ~86-~86: There might be a mistake here.
Context: ...ty**: Memory leak detection and recovery - Error Handling: Stability under error ...
(QB_NEW_EN)
[grammar] ~87-~87: There might be a mistake here.
Context: ...ling**: Stability under error conditions - Long-Running: Extended session stabili...
(QB_NEW_EN)
[grammar] ~90-~90: There might be a mistake here.
Context: ...ion stability ## 📊 Performance Metrics The test suite measures and validates: ...
(QB_NEW_EN)
[grammar] ~94-~94: There might be a mistake here.
Context: ...ates: | Metric | Target | Description | |--------|--------|-------------| | Sync...
(QB_NEW_EN)
[grammar] ~95-~95: There might be a mistake here.
Context: ...tion | |--------|--------|-------------| | Sync Throughput | >10K logs/sec | Sync...
(QB_NEW_EN)
[grammar] ~96-~96: There might be a mistake here.
Context: .../sec | Synchronous logging performance | | Async Throughput | >50K logs/sec | Asy...
(QB_NEW_EN)
[grammar] ~97-~97: There might be a mistake here.
Context: ...sec | Asynchronous logging performance | | Memory Usage | <100MB | Memory consump...
(QB_NEW_EN)
[grammar] ~98-~98: There might be a mistake here.
Context: ...<100MB | Memory consumption under load | | Concurrency Scaling | >1.0x | Performa...
(QB_NEW_EN)
[grammar] ~99-~99: There might be a mistake here.
Context: ...| Performance improvement with threads | | Error Rate | <1% | Stability under str...
(QB_NEW_EN)
[style] ~100-~100: ‘under stress’ might be wordy. Consider a shorter alternative.
Context: ...hreads | | Error Rate | <1% | Stability under stress | ## 🔧 Configuration ### Pytest Conf...
(EN_WORDINESS_PREMIUM_UNDER_STRESS)
[grammar] ~102-~102: There might be a mistake here.
Context: ...lity under stress | ## 🔧 Configuration ### Pytest Configuration (pytest.ini) - *...
(QB_NEW_EN)
[grammar] ~106-~106: There might be a mistake here.
Context: ...ettings**: 5 minimum rounds, auto warmup - Output Format: Verbose with short trac...
(QB_NEW_EN)
[grammar] ~107-~107: There might be a mistake here.
Context: ... Format**: Verbose with short tracebacks - Timeout: 300 seconds per test - **Mark...
(QB_NEW_EN)
[grammar] ~108-~108: There might be a mistake here.
Context: ...acks - Timeout: 300 seconds per test - Markers: Organized test categorization...
(QB_NEW_EN)
[grammar] ~113-~113: There might be a mistake here.
Context: ...olds**: Configurable performance targets - Stability Parameters: Configurable str...
(QB_NEW_EN)
[grammar] ~114-~114: There might be a mistake here.
Context: ...s**: Configurable stress test parameters - Comparison Loggers: Loguru, Structlog,...
(QB_NEW_EN)
[grammar] ~115-~115: There might be a mistake here.
Context: ...s**: Loguru, Structlog, Standard Library - Fixtures: Reusable test components ##...
(QB_NEW_EN)
[grammar] ~118-~118: There might be a mistake here.
Context: ...test components ## 🚀 CI/CD Integration ### GitHub Actions Workflow The `.github/wo...
(QB_NEW_EN)
[grammar] ~124-~124: There might be a mistake here.
Context: ... Testing**: Python 3.9, 3.10, 3.11, 3.12 - Automated Execution: Push, PR, and sch...
(QB_NEW_EN)
[grammar] ~125-~125: There might be a mistake here.
Context: ...xecution**: Push, PR, and scheduled runs - Artifact Management: Test results and ...
(QB_NEW_EN)
[grammar] ~126-~126: There might be a mistake here.
Context: ...ement**: Test results and benchmark data - PR Integration: Automatic comments wit...
(QB_NEW_EN)
[grammar] ~127-~127: There might be a mistake here.
Context: ...**: Automatic comments with test results - Failure Handling: Comprehensive error ...
(QB_NEW_EN)
[grammar] ~132-~132: There might be a mistake here.
Context: ...s - Push: Main and develop branches - Pull Request: Any PR affecting Kakashi...
(QB_NEW_EN)
[grammar] ~133-~133: There might be a mistake here.
Context: ...est**: Any PR affecting Kakashi or tests - Schedule: Weekly runs on Sundays - **M...
(QB_NEW_EN)
[grammar] ~134-~134: There might be a mistake here.
Context: ...s - Schedule: Weekly runs on Sundays - Manual: Workflow dispatch with custom ...
(QB_NEW_EN)
[grammar] ~137-~137: There might be a mistake here.
Context: ... parameters ## 📈 Running Tests Locally ### Prerequisites ```bash # Install Kakashi...
(QB_NEW_EN)
[grammar] ~180-~180: There might be a mistake here.
Context: ...rt=html ## 📊 Understanding Results ### Test Output 🚀 Kakashi Performance ...
(QB_NEW_EN)
[grammar] ~234-~234: There might be a mistake here.
Context: ...edup: 4.61x ``` ## 🐛 Troubleshooting ### Common Issues 1. Import Errors `...
(QB_NEW_EN)
[grammar] ~279-~279: There might be a mistake here.
Context: ...t_benchmark -v ## 📚 Advanced Usage ### Custom Test Configuration python # I...
(QB_NEW_EN)
[grammar] ~324-~324: There might be a mistake here.
Context: ...mark(): pass ``` ## 🤝 Contributing ### Adding New Tests 1. **Follow Naming Con...
(QB_NEW_EN)
[grammar] ~328-~328: There might be a mistake here.
Context: ...w Naming Convention**: test_*.py files 2. Use Appropriate Markers: `@pytest.mark...
(QB_NEW_EN)
[grammar] ~329-~329: There might be a mistake here.
Context: ...te Markers**: @pytest.mark.benchmark, @pytest.mark.stability 3. Include Assertions: Validate performan...
(QB_NEW_EN)
[grammar] ~330-~330: There might be a mistake here.
Context: ...e performance and stability requirements 4. Add Documentation: Clear docstrings ex...
(QB_NEW_EN)
[grammar] ~352-~352: There might be a mistake here.
Context: ...py performance_tests/ ``` ## 📄 License This test suite is part of the Kakashi p...
(QB_NEW_EN)
[grammar] ~356-~356: There might be a mistake here.
Context: ...s the same license terms. ## 🆘 Support For issues with the test suite: 1. Chec...
(QB_NEW_EN)
[grammar] ~367-~367: There might be a mistake here.
Context: ...Python version --- Happy Testing! 🚀
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
performance_tests/README.md
18-18: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
184-184: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
217-217: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
367-367: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
🪛 Ruff (0.12.2)
performance_tests/conftest.py
63-63: Do not catch blind exception: Exception
(BLE001)
70-70: Unused lambda argument: msg
(ARG005)
72-72: Do not catch blind exception: Exception
(BLE001)
86-86: Do not catch blind exception: Exception
(BLE001)
performance_tests/test_stability.py
28-28: Redefinition of unused threading from line 10
Remove definition: threading
(F811)
45-45: Do not catch blind exception: Exception
(BLE001)
73-73: f-string without any placeholders
Remove extraneous f prefix
(F541)
83-83: Use of assert detected
(S101)
84-84: Use of assert detected
(S101)
85-85: Use of assert detected
(S101)
89-89: Redefinition of unused threading from line 10
Remove definition: threading
(F811)
104-104: Do not catch blind exception: Exception
(BLE001)
120-120: f-string without any placeholders
Remove extraneous f prefix
(F541)
126-126: Use of assert detected
(S101)
127-127: Use of assert detected
(S101)
131-131: Redefinition of unused threading from line 10
Remove definition: threading
(F811)
154-154: Do not catch blind exception: Exception
(BLE001)
170-170: f-string without any placeholders
Remove extraneous f prefix
(F541)
176-176: Use of assert detected
(S101)
177-177: Use of assert detected
(S101)
184-184: Redefinition of unused psutil from line 15
Remove definition: psutil
(F811)
185-185: Redefinition of unused gc from line 14
Remove definition: gc
(F811)
219-219: f-string without any placeholders
Remove extraneous f prefix
(F541)
227-227: Use of assert detected
(S101)
228-228: Use of assert detected
(S101)
232-232: Redefinition of unused psutil from line 15
Remove definition: psutil
(F811)
233-233: Redefinition of unused gc from line 14
Remove definition: gc
(F811)
261-261: f-string without any placeholders
Remove extraneous f prefix
(F541)
269-269: Use of assert detected
(S101)
270-270: Use of assert detected
(S101)
277-277: Redefinition of unused threading from line 10
Remove definition: threading
(F811)
303-303: Do not catch blind exception: Exception
(BLE001)
319-319: f-string without any placeholders
Remove extraneous f prefix
(F541)
325-325: Use of assert detected
(S101)
326-326: Use of assert detected
(S101)
330-330: Redefinition of unused threading from line 10
Remove definition: threading
(F811)
341-341: Abstract raise to an inner function
(TRY301)
341-341: Avoid specifying long messages outside the exception class
(TRY003)
343-343: Abstract raise to an inner function
(TRY301)
343-343: Avoid specifying long messages outside the exception class
(TRY003)
345-345: Abstract raise to an inner function
(TRY301)
345-345: Avoid specifying long messages outside the exception class
(TRY003)
346-346: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
353-353: Do not catch blind exception: Exception
(BLE001)
369-369: f-string without any placeholders
Remove extraneous f prefix
(F541)
375-375: Use of assert detected
(S101)
376-376: Use of assert detected
(S101)
383-383: Redefinition of unused psutil from line 15
Remove definition: psutil
(F811)
384-384: Redefinition of unused gc from line 14
Remove definition: gc
(F811)
410-410: Do not catch blind exception: Exception
(BLE001)
427-427: f-string without any placeholders
Remove extraneous f prefix
(F541)
437-437: Use of assert detected
(S101)
438-438: Use of assert detected
(S101)
439-439: Use of assert detected
(S101)
440-440: Use of assert detected
(S101)
performance_tests/test_api_compatibility.py
27-27: Use of assert detected
(S101)
28-28: Use of assert detected
(S101)
29-29: Use of assert detected
(S101)
30-30: Use of assert detected
(S101)
31-31: Use of assert detected
(S101)
32-32: Use of assert detected
(S101)
33-33: Use of assert detected
(S101)
38-38: Use of assert detected
(S101)
39-39: Use of assert detected
(S101)
40-40: Use of assert detected
(S101)
41-41: Use of assert detected
(S101)
42-42: Use of assert detected
(S101)
43-43: Use of assert detected
(S101)
48-48: Use of assert detected
(S101)
49-49: Use of assert detected
(S101)
50-50: Use of assert detected
(S101)
51-51: Use of assert detected
(S101)
52-52: Use of assert detected
(S101)
53-53: Use of assert detected
(S101)
71-71: Abstract raise to an inner function
(TRY301)
71-71: Avoid specifying long messages outside the exception class
(TRY003)
78-78: Use of assert detected
(S101)
96-96: Use of assert detected
(S101)
104-104: Use of assert detected
(S101)
111-111: Use of assert detected
(S101)
112-112: Use of assert detected
(S101)
113-113: Use of assert detected
(S101)
114-114: Use of assert detected
(S101)
115-115: Use of assert detected
(S101)
118-118: Use of assert detected
(S101)
119-119: Use of assert detected
(S101)
120-120: Use of assert detected
(S101)
121-121: Use of assert detected
(S101)
122-122: Use of assert detected
(S101)
137-137: Use of assert detected
(S101)
138-138: Use of assert detected
(S101)
139-139: Use of assert detected
(S101)
146-146: Use of assert detected
(S101)
158-158: Use of assert detected
(S101)
159-159: Use of assert detected
(S101)
160-160: Use of assert detected
(S101)
169-169: Use of assert detected
(S101)
170-170: Use of assert detected
(S101)
174-174: Use of assert detected
(S101)
186-186: Use of assert detected
(S101)
187-187: Use of assert detected
(S101)
192-192: Use of assert detected
(S101)
193-193: Use of assert detected
(S101)
218-218: Use of assert detected
(S101)
271-271: Use of assert detected
(S101)
performance_tests/test_performance.py
54-54: f-string without any placeholders
Remove extraneous f prefix
(F541)
61-61: f-string without any placeholders
Remove extraneous f prefix
(F541)
68-68: f-string without any placeholders
Remove extraneous f prefix
(F541)
128-128: Function definition does not bind loop variable thread_count
(B023)
147-147: Function definition does not bind loop variable task_count
(B023)
157-157: Redefinition of unused psutil from line 12
Remove definition: psutil
(F811)
158-158: Redefinition of unused gc from line 13
Remove definition: gc
(F811)
177-177: f-string without any placeholders
Remove extraneous f prefix
(F541)
183-183: Use of assert detected
(S101)
187-187: Redefinition of unused psutil from line 12
Remove definition: psutil
(F811)
188-188: Redefinition of unused gc from line 13
Remove definition: gc
(F811)
212-212: f-string without any placeholders
Remove extraneous f prefix
(F541)
218-218: Use of assert detected
(S101)
251-251: Function definition does not bind loop variable message_count
(B023)
270-270: Function definition does not bind loop variable thread_count
(B023)
performance_tests/generate_summary.py
1-1: Shebang is present but file is not executable
(EXE001)
40-40: Do not catch blind exception: Exception
(BLE001)
performance_tests/run_tests.py
1-1: Shebang is present but file is not executable
(EXE001)
28-28: subprocess call: check for execution of untrusted input
(S603)
53-53: Do not catch blind exception: Exception
(BLE001)
160-160: f-string without any placeholders
Remove extraneous f prefix
(F541)
167-167: f-string without any placeholders
Remove extraneous f prefix
(F541)
171-171: f-string without any placeholders
Remove extraneous f prefix
(F541)
249-249: Do not catch blind exception: Exception
(BLE001)
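The B023 findings above flag closures defined inside a loop that read the loop variable (`thread_count`, `task_count`, `message_count`) lazily: by the time the closure runs, every copy sees the loop's final value. A minimal sketch of the pitfall and the usual default-argument fix — the names here are illustrative, not taken from the test suite:

```python
def make_closures_buggy():
    """Each closure reads thread_count when called, not when defined (B023)."""
    closures = []
    for thread_count in (1, 2, 4):
        closures.append(lambda: thread_count)  # late binding: all see 4
    return [f() for f in closures]

def make_closures_fixed():
    """A default argument captures the current value at definition time."""
    closures = []
    for thread_count in (1, 2, 4):
        closures.append(lambda tc=thread_count: tc)  # early binding
    return [f() for f in closures]

print(make_closures_buggy())  # [4, 4, 4]
print(make_closures_fixed())  # [1, 2, 4]
```

In the benchmark code the same fix applies: pass the loop variable into the inner function as a parameter instead of closing over it.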
🪛 actionlint (1.7.7)
.github/workflows/performance-tests.yml
66-66: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
71-71: the runner of "actions/cache@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
126-126: the runner of "actions/upload-artifact@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
135-135: the runner of "actions/upload-artifact@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
151-151: the runner of "actions/download-artifact@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
161-161: the runner of "actions/upload-artifact@v3" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
170-170: the runner of "actions/github-script@v6" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 YAMLlint (1.37.1)
.github/workflows/performance-tests.yml
[warning] 60-60: wrong indentation: expected 6 but found 4
(indentation)
[warning] 147-147: wrong indentation: expected 6 but found 4
(indentation)
[error] 175-175: trailing spaces
(trailing-spaces)
[error] 184-184: no new line character at the end of file
(new-line-at-end-of-file)
🔇 Additional comments (2)
performance_tests/run_tests.py (1)
1-1: Script is now executable
performance_tests/run_tests.py has execute permissions (-rwxr-xr-x), so the shebang is honored.
performance_tests/test_performance.py (1)
136-151: Fix async concurrency benchmark: don’t await non-coroutines; schedule tasks explicitly.

```diff
@@
-    async def concurrent_async_logging(task_count: int):
-        async def log_messages():
-            for i in range(100):
-                await kakashi_async_logger.info(f"Async task message {i}")
-
-        tasks = [log_messages() for _ in range(task_count)]
-        await asyncio.gather(*tasks)
+    async def concurrent_async_logging(task_count: int):
+        async def log_messages():
+            for i in range(100):
+                kakashi_async_logger.info(f"Async task message {i}")
+                if i % 50 == 0:
+                    await asyncio.sleep(0)
+
+        tasks = [asyncio.create_task(log_messages()) for _ in range(task_count)]
+        await asyncio.gather(*tasks)
```

Likely an incorrect or invalid review comment.
```python
@pytest.fixture(scope="session")
def comparison_loggers():
    """Set up comparison logging libraries."""
    loggers = {}

    try:
        # Standard library logging
        import logging
        logging.basicConfig(level=logging.INFO)
        std_logger = logging.getLogger("stdlib_test")
        std_logger.addHandler(logging.NullHandler())
        std_logger.propagate = False
        loggers["standard_library"] = std_logger
    except Exception as e:
        pytest.skip(f"Standard library logging not available: {e}")

    try:
        # Loguru
        from loguru import logger
        logger.remove()
        logger.add(lambda msg: None, level="INFO")
        loggers["loguru"] = logger
    except Exception as e:
        pytest.skip(f"Loguru not available: {e}")

    try:
        # Structlog
        import structlog
        structlog.configure(
            processors=[structlog.processors.JSONRenderer()],
            wrapper_class=structlog.BoundLogger,
            context_class=dict,
            logger_factory=structlog.PrintLoggerFactory(),
            cache_logger_on_first_use=True,
        )
        loggers["structlog"] = structlog.get_logger("structlog_test")
    except Exception as e:
        pytest.skip(f"Structlog not available: {e}")

    return loggers
```
🛠️ Refactor suggestion
Don’t skip the entire fixture when an optional backend is missing; return partial mapping.
Catching Exception and calling pytest.skip aborts the fixture; tests can still run with available backends. Also avoid blind excepts.
```diff
 def comparison_loggers():
     """Set up comparison logging libraries."""
     loggers = {}
-
-    try:
+    try:
         # Standard library logging
         import logging
-        logging.basicConfig(level=logging.INFO)
+        logging.basicConfig(level=logging.WARNING)
         std_logger = logging.getLogger("stdlib_test")
         std_logger.addHandler(logging.NullHandler())
         std_logger.propagate = False
         loggers["standard_library"] = std_logger
-    except Exception as e:
-        pytest.skip(f"Standard library logging not available: {e}")
+    except ImportError:
+        pass
     try:
         # Loguru
         from loguru import logger
         logger.remove()
-        logger.add(lambda msg: None, level="INFO")
+        logger.add(lambda _msg: None, level="INFO")
         loggers["loguru"] = logger
-    except Exception as e:
-        pytest.skip(f"Loguru not available: {e}")
+    except ImportError:
+        pass
     try:
         # Structlog
-        import structlog
-        structlog.configure(
-            processors=[structlog.processors.JSONRenderer()],
-            wrapper_class=structlog.BoundLogger,
-            context_class=dict,
-            logger_factory=structlog.PrintLoggerFactory(),
-            cache_logger_on_first_use=True,
-        )
-        loggers["structlog"] = structlog.get_logger("structlog_test")
-    except Exception as e:
-        pytest.skip(f"Structlog not available: {e}")
+        import structlog, logging, os
+        structlog.configure(
+            processors=[],  # keep overhead minimal for perf comparisons
+            wrapper_class=structlog.make_filtering_bound_logger(logging.CRITICAL + 1),
+            context_class=dict,
+            logger_factory=structlog.PrintLoggerFactory(file=open(os.devnull, "w")),
+            cache_logger_on_first_use=True,
+        )
+        loggers["structlog"] = structlog.get_logger("structlog_test")
+    except ImportError:
+        pass
     return loggers
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
@pytest.fixture(scope="session")
def comparison_loggers():
    """Set up comparison logging libraries."""
    loggers = {}
    try:
        # Standard library logging
        import logging
        logging.basicConfig(level=logging.WARNING)
        std_logger = logging.getLogger("stdlib_test")
        std_logger.addHandler(logging.NullHandler())
        std_logger.propagate = False
        loggers["standard_library"] = std_logger
    except ImportError:
        pass
    try:
        # Loguru
        from loguru import logger
        logger.remove()
        logger.add(lambda _msg: None, level="INFO")
        loggers["loguru"] = logger
    except ImportError:
        pass
    try:
        # Structlog
        import structlog, logging, os
        structlog.configure(
            processors=[],  # keep overhead minimal for perf comparisons
            wrapper_class=structlog.make_filtering_bound_logger(logging.CRITICAL + 1),
            context_class=dict,
            logger_factory=structlog.PrintLoggerFactory(file=open(os.devnull, "w")),
            cache_logger_on_first_use=True,
        )
        loggers["structlog"] = structlog.get_logger("structlog_test")
    except ImportError:
        pass
    return loggers
```
🧰 Tools
🪛 Ruff (0.12.2)
63-63: Do not catch blind exception: Exception
(BLE001)
70-70: Unused lambda argument: msg
(ARG005)
72-72: Do not catch blind exception: Exception
(BLE001)
86-86: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In performance_tests/conftest.py around lines 50 to 89, the fixture currently
catches broad Exceptions and calls pytest.skip which aborts the entire fixture
when one optional backend is missing; change it to catch only ImportError (and
optionally ModuleNotFoundError), do NOT call pytest.skip for missing optional
libraries, and instead continue so the fixture returns a partial loggers mapping
of whatever backends loaded successfully; optionally log or pytest.warns about
the skipped backend for visibility and remove the broad except blocks so other
unexpected errors still surface.
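The pattern the prompt describes — catch only ImportError and keep whatever backends did load — can be sketched independently of the fixture; the module names below are illustrative stand-ins, not the real backends:

```python
import importlib

def load_optional_backends(candidates):
    """Import each named module; skip missing ones instead of failing the lot."""
    loaded = {}
    for name in candidates:
        try:
            loaded[name] = importlib.import_module(name)
        except ImportError:
            continue  # optional backend absent: leave it out of the mapping
    return loaded

# "json" is always importable; "no_such_backend" is deliberately missing.
backends = load_optional_backends(["json", "no_such_backend"])
print(sorted(backends))  # only the importable modules survive
```

Tests that need a specific backend can then `pytest.skip` individually when their key is absent, rather than the fixture skipping everything.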
```python
def test_pipeline_processing(self):
    """Test pipeline processing functionality."""
    from kakashi.core.pipeline import create_console_pipeline
    from kakashi.core.records import LogRecord, LogLevel
    from kakashi.core.records import LogContext

    pipeline = create_console_pipeline(min_level=LogLevel.INFO)

    # Create a test record
    context = LogContext()
    record = LogRecord(
        level=LogLevel.INFO,
        message="Test message",
        context=context,
        timestamp=1234567890.0
    )

    # Process the record
    result = pipeline.process(record)
    assert result is not None
```
Construct LogRecord with required fields.
LogRecord typically requires logger_name; current instantiation risks TypeError.
```diff
@@
-        pipeline = create_console_pipeline(min_level=LogLevel.INFO)
+        pipeline = create_console_pipeline(min_level=LogLevel.INFO)
@@
-        context = LogContext()
-        record = LogRecord(
-            level=LogLevel.INFO,
-            message="Test message",
-            context=context,
-            timestamp=1234567890.0
-        )
+        context = LogContext()
+        record = LogRecord(
+            level=LogLevel.INFO,
+            logger_name="test_pipeline",
+            message="Test message",
+            context=context,
+            timestamp=1234567890.0,
+            fields={}
+        )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def test_pipeline_processing(self):
    """Test pipeline processing functionality."""
    from kakashi.core.pipeline import create_console_pipeline
    from kakashi.core.records import LogRecord, LogLevel
    from kakashi.core.records import LogContext

    pipeline = create_console_pipeline(min_level=LogLevel.INFO)

    # Create a test record
    context = LogContext()
    record = LogRecord(
        level=LogLevel.INFO,
        logger_name="test_pipeline",
        message="Test message",
        context=context,
        timestamp=1234567890.0,
        fields={}
    )

    # Process the record
    result = pipeline.process(record)
    assert result is not None
```
🧰 Tools
🪛 Ruff (0.12.2)
218-218: Use of assert detected
(S101)
🤖 Prompt for AI Agents
In performance_tests/test_api_compatibility.py around lines 199 to 219, the
LogRecord is instantiated without the required logger_name which can raise a
TypeError; update the test to pass a valid logger_name when constructing
LogRecord (for example "test" or "test_logger") while keeping the other fields
the same so the record creation succeeds and the pipeline.process call can run.
```python
def test_async_throughput_benchmark(self, benchmark, kakashi_async_logger):
    """Benchmark async logging throughput."""

    async def kakashi_async_logging():
        tasks = []
        for i in range(1000):
            task = kakashi_async_logger.info(f"Async benchmark message {i}")
            tasks.append(task)
        await asyncio.gather(*tasks)

    def run_async_benchmark():
        asyncio.run(kakashi_async_logging())

    result = benchmark(run_async_benchmark)
    print(f"\nKakashi Async Throughput: {result.stats.mean:.6f}s")
```
Async throughput test treats non-awaitable logger methods as coroutines.
AsyncLogger.info likely isn’t awaitable; gathering None will fail. Call it directly and yield to the loop periodically.
```diff
@@
-        async def kakashi_async_logging():
-            tasks = []
-            for i in range(1000):
-                task = kakashi_async_logger.info(f"Async benchmark message {i}")
-                tasks.append(task)
-            await asyncio.gather(*tasks)
+        async def kakashi_async_logging():
+            for i in range(1000):
+                kakashi_async_logger.info(f"Async benchmark message {i}")
+                if i % 100 == 0:
+                    await asyncio.sleep(0)  # yield to event loop
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def test_async_throughput_benchmark(self, benchmark, kakashi_async_logger):
    """Benchmark async logging throughput."""

    async def kakashi_async_logging():
        for i in range(1000):
            kakashi_async_logger.info(f"Async benchmark message {i}")
            if i % 100 == 0:
                await asyncio.sleep(0)  # yield to event loop

    def run_async_benchmark():
        asyncio.run(kakashi_async_logging())

    result = benchmark(run_async_benchmark)
    print(f"\nKakashi Async Throughput: {result.stats.mean:.6f}s")
```
🤖 Prompt for AI Agents
In performance_tests/test_performance.py around lines 73 to 88, the benchmark
tries to gather non-awaitable return values from kakashi_async_logger.info as if
they were coroutines; instead call kakashi_async_logger.info synchronously
inside the loop and avoid creating tasks, and to prevent starving the event loop
await briefly (e.g. await asyncio.sleep(0) or await asyncio.sleep(0.001)) every
N iterations (e.g. every 50–100 messages) so the loop can run other tasks; keep
run_async_benchmark using asyncio.run to execute the async function.
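The failure mode this comment describes is easy to reproduce: `asyncio.gather` raises TypeError when handed the None returned by a plain (non-async) method. A minimal sketch with a stand-in synchronous logger — `SyncLogger` is hypothetical, not kakashi's API:

```python
import asyncio

class SyncLogger:
    """Stand-in for a logger whose info() is a plain method, not a coroutine."""
    def info(self, msg):
        return None  # nothing awaitable comes back

async def broken():
    logger = SyncLogger()
    results = [logger.info(f"msg {i}") for i in range(3)]
    await asyncio.gather(*results)  # TypeError: None is not awaitable

async def fixed():
    logger = SyncLogger()
    for i in range(1000):
        logger.info(f"msg {i}")
        if i % 100 == 0:
            await asyncio.sleep(0)  # periodically yield to the event loop

try:
    asyncio.run(broken())
except TypeError as e:
    print(f"gather on non-awaitables fails: {e}")

asyncio.run(fixed())
print("direct calls with periodic sleep(0) complete fine")
```

If the logger's methods ever become true coroutines, the original gather-based version becomes valid again, so the benchmark should match whichever contract the AsyncLogger actually exposes.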
PR Type
Tests, Enhancement
Description
Add comprehensive CI/CD performance testing workflow
Configure multi-Python version testing (3.9-3.12)
Implement automated performance benchmarking and reporting
Set up test configuration with thresholds and parameters
Diagram Walkthrough
File Walkthrough
performance-tests.yml
Comprehensive performance testing CI workflow (.github/workflows/performance-tests.yml)
test-config.yml
Test configuration parameters and thresholds (.github/workflows/test-config.yml)