Coldstart Implementation Prompt: Fix Code Quality Issues from Code Review
Priority: P1
Repository: agentready (https://github.com/redhat/agentready)
Branch Strategy: Create feature branch from main
Context
You are implementing a feature for AgentReady, a repository quality assessment tool for AI-assisted development.
Repository Structure
```
agentready/
├── src/agentready/     # Source code
│   ├── models/         # Data models
│   ├── services/       # Scanner orchestration
│   ├── assessors/      # Attribute assessments
│   ├── reporters/      # Report generation (HTML, Markdown, JSON)
│   ├── templates/      # Jinja2 templates
│   └── cli/            # Click-based CLI
├── tests/              # Test suite (unit + integration)
├── examples/           # Example reports
└── specs/              # Feature specifications
```
Key Technologies
- Python 3.11+
- Click (CLI framework)
- Jinja2 (templating)
- Pytest (testing)
- Black, isort, ruff (code quality)
Development Workflow
- Create feature branch: `git checkout -b NNN-feature-name`
- Implement changes with tests
- Run linters: `black . && isort . && ruff check .`
- Run tests: `pytest`
- Commit with conventional commits
- Create PR to main
Feature Requirements
Fix Code Quality Issues from Code Review
Priority: P1 (High - Quality & Reliability)
Description: Address P1 issues discovered in code review that affect reliability, accuracy, and code quality.
Issues to Fix:
- TOCTOU (Time-of-Check-Time-of-Use) in File Operations
  - Location: Multiple assessors (`documentation.py:46-50`, `documentation.py:174-191`)
  - Problem: The code checks whether a file exists and then reads it in a separate operation; the file can be deleted between the two steps
  - Impact: Crashes instead of graceful degradation
  - Fix: Wrap file reads in try-except instead of checking existence first:
```python
# BEFORE: existence check and read are separate operations
if claude_md_path.exists():
    size = claude_md_path.stat().st_size

# AFTER: attempt the read and handle failure directly
try:
    with open(claude_md_path, "r") as f:
        size = len(f.read())
except FileNotFoundError:
    return Finding(...status="fail"...)
except OSError as e:
    return Finding.error(self.attribute, f"Could not read: {e}")
```
- Inaccurate Type Annotation Detection
  - Location: `src/agentready/assessors/code_quality.py:98-102`
  - Problem: Regex-based detection has false positives (e.g. matches inside string literals and dict literals)
  - Impact: Inflated type annotation coverage scores
  - Fix: Use AST parsing instead of regex:
```python
import ast

tree = ast.parse(content)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        total_functions += 1
        has_annotations = node.returns is not None or any(
            arg.annotation for arg in node.args.args
        )
        if has_annotations:
            typed_functions += 1
```
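As a sketch, the snippet above can be fleshed out into a self-contained helper. One assumption worth noting: `ast.walk` also yields `ast.AsyncFunctionDef` nodes, which the `FunctionDef` check alone would miss, so the sketch counts both.

```python
import ast


def annotation_coverage(source: str) -> float:
    """Fraction of function definitions carrying any type annotation.

    Sketch of the AST-based approach: a function counts as typed if it
    annotates its return type or any positional argument. Async
    functions (ast.AsyncFunctionDef) are counted as well.
    """
    total = typed = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            total += 1
            if node.returns is not None or any(
                arg.annotation for arg in node.args.args
            ):
                typed += 1
    return typed / total if total else 0.0
```

Because it walks the parsed tree, annotations mentioned inside string or dict literals no longer count, which eliminates the regex false positives.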
- Assessment Validation Semantic Confusion
  - Location: `src/agentready/models/assessment.py:54-59`
  - Problem: The field is named `attributes_skipped` but also counts attributes with `error` and `not_applicable` statuses
  - Impact: Confusing API, unclear semantics
  - Fix: Rename the field to `attributes_not_assessed`, OR add a separate counter per status
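The separate-counters option could look like the following sketch. The class and field names here are hypothetical, chosen only to illustrate the shape; the real `Assessment` model in `src/agentready/models/assessment.py` would dictate the final names.

```python
from dataclasses import dataclass


@dataclass
class AssessmentCounts:
    """Hypothetical shape for the separate-counters fix: one field per
    terminal status instead of a catch-all attributes_skipped."""

    attributes_passed: int = 0
    attributes_failed: int = 0
    attributes_errored: int = 0
    attributes_not_applicable: int = 0

    @property
    def attributes_not_assessed(self) -> int:
        # Errored and not-applicable attributes were never truly assessed.
        return self.attributes_errored + self.attributes_not_applicable
```

Keeping the per-status counts primary and deriving `attributes_not_assessed` as a property avoids the counters drifting out of sync.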
Acceptance Criteria:
Priority Justification: These affect reliability and measurement accuracy - critical for a quality assessment tool.
Related: Testing improvements, code quality
Implementation Checklist
Before you begin:
Implementation steps:
Code quality requirements:
Key Files to Review
Based on this feature, you should review:
- `src/agentready/models/` - Understand Assessment, Finding, Attribute models
- `src/agentready/services/scanner.py` - Scanner orchestration
- `src/agentready/assessors/base.py` - BaseAssessor pattern
- `src/agentready/reporters/` - Report generation
- `CLAUDE.md` - Project overview and guidelines
- `BACKLOG.md` - Full context of this feature
Testing Strategy
For this feature, ensure:
- Unit tests for core logic (80%+ coverage)
- Integration tests for end-to-end workflows
- Edge case tests (empty inputs, missing files, errors)
- Error handling tests (graceful degradation)
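For the graceful-degradation requirement, the TOCTOU fix lends itself to tests built on pytest's `tmp_path` fixture. This is a sketch: `assess_file` is a toy stand-in for an assessor check, not a real function in the codebase.

```python
from pathlib import Path


def assess_file(path: Path) -> str:
    """Toy stand-in for an assessor check (name hypothetical): returns
    a status string instead of raising when the file is unreadable."""
    try:
        path.read_text(encoding="utf-8")
        return "pass"
    except OSError:
        return "fail"


def test_missing_file_degrades_gracefully(tmp_path: Path) -> None:
    # The file never existed; the check must fail, not crash.
    assert assess_file(tmp_path / "CLAUDE.md") == "fail"


def test_present_file_passes(tmp_path: Path) -> None:
    target = tmp_path / "CLAUDE.md"
    target.write_text("# overview\n", encoding="utf-8")
    assert assess_file(target) == "pass"
```

The same pattern extends to the real assessors: create or omit the fixture file, then assert on the returned Finding status rather than expecting an exception.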
Run tests:
```shell
# All tests
pytest

# With coverage
pytest --cov=src/agentready --cov-report=html

# Specific test file
pytest tests/unit/test_feature.py -v
```
Success Criteria
This feature is complete when:
- ✅ All acceptance criteria from feature description are met
- ✅ Tests passing with >80% coverage for new code
- ✅ All linters passing (black, isort, ruff)
- ✅ Documentation updated
- ✅ PR created with clear description
- ✅ Self-tested end-to-end
Questions to Clarify (if needed)
If anything is unclear during implementation:
- Check CLAUDE.md for project patterns
- Review similar existing features
- Ask for clarification in PR comments
- Reference the original backlog item
Getting Started
```shell
# Clone and setup
git clone https://github.com/redhat/agentready.git
cd agentready

# Create virtual environment
uv venv && source .venv/bin/activate

# Install dependencies
uv pip install -e .
uv pip install pytest black isort ruff

# Create feature branch
git checkout -b 011-fix-code-quality-issues-from-code-review

# Start implementing!
```
Note: This is a coldstart prompt. You have all context needed to implement this feature independently. Read the linked files, follow the patterns, and deliver high-quality code with tests.