feat: Implement ArchitectureDecisionsAssessor
Attribute Definition
Attribute ID: architecture_decisions (Attribute #20 - Tier 3)
Definition: Lightweight documents capturing architectural decisions with context, decision, and consequences (ADRs).
Why It Matters: ADRs provide historical context for "why" decisions were made. When AI encounters patterns or constraints, ADRs explain rationale, preventing counter-productive suggestions.
Impact on Agent Behavior:
- Understanding project evolution and design philosophy
- Avoiding proposing previously rejected alternatives
- Aligning suggestions with established architectural principles
- Better context for refactoring recommendations
Measurable Criteria:
- Store in docs/adr/ or .adr/ directory
- Use consistent template (Michael Nygard or MADR)
- Each ADR includes: Title, Status, Context, Decision, Consequences
- Status values: Proposed, Accepted, Deprecated, Superseded
- One decision per ADR
- Sequential numbering (ADR-001, ADR-002...)
Implementation Requirements
File Location: src/agentready/assessors/documentation.py
Class Name: ArchitectureDecisionsAssessor
Tier: 3 (Important)
Default Weight: 0.015 (1.5% of total score)
Assessment Logic
Scoring Approach: Check for ADR directory and validate ADR format
Evidence to Check (score components):
- ADR directory exists (40%)
  - Check for: docs/adr/, .adr/, adr/, decisions/
  - Must contain at least one .md file
- ADR count and quality (40%)
  - Count ADR files
  - Check for consistent naming (0001-*.md, ADR-001-*.md)
  - Verify Markdown format
- ADR template compliance (20%)
  - Sample ADRs for required sections
  - Check for: Title, Status, Context, Decision, Consequences
  - Verify status field (Accepted, Proposed, etc.)
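The directory-existence check (the 40% component above) can be sketched as a small helper. The function name, candidate ordering, and "must contain at least one .md file" rule follow the criteria in this spec, but this is an illustrative sketch, not the actual AgentReady API:

```python
from pathlib import Path
from typing import Optional

# Common ADR locations, checked in priority order (assumed ordering).
ADR_DIR_CANDIDATES = ["docs/adr", ".adr", "adr", "decisions"]

def find_adr_directory(repo_root: Path) -> Optional[Path]:
    """Return the first candidate directory that exists and contains .md files."""
    for name in ADR_DIR_CANDIDATES:
        candidate = repo_root / name
        if candidate.is_dir() and any(candidate.glob("*.md")):
            return candidate
    return None
```

Note that an ADR directory with no .md files is treated as absent here, matching the "must contain at least one .md file" criterion.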
Scoring Logic:
```python
if adr_dir_exists and adr_count > 0:
    dir_score = 40

    # Score based on ADR count
    if adr_count >= 5:
        count_score = 40
    else:
        count_score = adr_count * 8  # 8 points per ADR, up to 5

    # Sample ADRs for template compliance
    template_score = validate_adr_templates(sample_adrs)

    total_score = dir_score + count_score + template_score
else:
    total_score = 0

status = "pass" if total_score >= 75 else "fail"
```
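The scoring logic above can be made concrete as a standalone function. Names are illustrative, and `template_score` is assumed to already be on the 0-20 scale produced by the template-compliance check:

```python
def score_adrs(adr_dir_exists: bool, adr_count: int,
               template_score: float) -> tuple[float, str]:
    """Combine the three score components and derive pass/fail status."""
    if adr_dir_exists and adr_count > 0:
        dir_score = 40
        # 8 points per ADR, capped at 5 ADRs (40 points)
        count_score = 40 if adr_count >= 5 else adr_count * 8
        total_score = float(dir_score + count_score + template_score)
    else:
        total_score = 0.0
    status = "pass" if total_score >= 75 else "fail"
    return total_score, status
```

With 8 ADRs and a template score of 15, this yields 95/pass; with a single incomplete ADR, 48/fail, matching the example findings below.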
Code Pattern to Follow
Reference: PreCommitHooksAssessor for directory/file existence check
Pattern:
- Check for ADR directory in common locations
- Count .md files in ADR directory
- Sample 2-3 ADRs to validate template compliance
- Check for required sections in ADR content
- Calculate score and provide remediation
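One possible shape for the `validate_adr_templates` helper referenced in the scoring logic; the section-matching rules and per-file weighting here are assumptions, not a confirmed implementation:

```python
import re
from pathlib import Path

REQUIRED_SECTIONS = ["Status", "Context", "Decision", "Consequences"]
VALID_STATUS = re.compile(
    r"Status.*\b(Proposed|Accepted|Deprecated|Superseded)\b", re.IGNORECASE
)

def validate_adr_templates(adr_paths: list[Path], max_samples: int = 3) -> float:
    """Return a 0-20 template-compliance score from a sample of ADR files."""
    sampled = adr_paths[:max_samples]  # sample, not all, for performance
    if not sampled:
        return 0.0
    per_file = []
    for path in sampled:
        text = path.read_text(encoding="utf-8", errors="replace")
        found = sum(1 for s in REQUIRED_SECTIONS if s.lower() in text.lower())
        status_ok = 1 if VALID_STATUS.search(text) else 0
        per_file.append((found + status_ok) / (len(REQUIRED_SECTIONS) + 1))
    return 20.0 * sum(per_file) / len(per_file)
```

Keyword matching is deliberately loose (substring, case-insensitive) so both Nygard-style and MADR-style headings pass.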
Example Finding Responses
Pass (Score: 95)
```python
Finding(
    attribute=self.attribute,
    status="pass",
    score=95.0,
    measured_value="8 ADRs",
    threshold="≥3 ADRs with template",
    evidence=[
        "ADR directory found: docs/adr/",
        "8 architecture decision records",
        "Consistent naming: 0001-use-postgresql.md",
        "Sampled 3 ADRs: all follow template",
        "Required sections present: Status, Context, Decision, Consequences",
    ],
    remediation=None,
    error_message=None,
)
```
Fail (Score: 40)
```python
Finding(
    attribute=self.attribute,
    status="fail",
    score=40.0,
    measured_value="1 ADR (incomplete)",
    threshold="≥3 ADRs with template",
    evidence=[
        "ADR directory found: docs/adr/",
        "Only 1 ADR file",
        "ADR missing required sections (no Consequences)",
        "Inconsistent format",
    ],
    remediation=self._create_remediation(),
    error_message=None,
)
```
Not Applicable
```python
Finding.not_applicable(
    self.attribute,
    reason="Small project (<1000 lines) may not need formal ADRs",
)
```
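One way the <1000-line cutoff might be computed; the suffix list and counting rules are assumptions for illustration, not the actual AgentReady implementation:

```python
from pathlib import Path

def repo_line_count(repo_root: Path,
                    suffixes: tuple[str, ...] = (".py", ".js", ".ts", ".go")) -> int:
    """Rough total line count across source files, for the not_applicable check."""
    total = 0
    for path in repo_root.rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            with path.open(encoding="utf-8", errors="replace") as fh:
                total += sum(1 for _ in fh)
    return total
```

A project where this returns under 1000 would short-circuit to `Finding.not_applicable` before any ADR checks run.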
Registration
Add to src/agentready/services/scanner.py in create_all_assessors():
```python
from ..assessors.documentation import (
    CLAUDEmdAssessor,
    READMEAssessor,
    OpenAPISpecsAssessor,
    ArchitectureDecisionsAssessor,  # Add this import
)

def create_all_assessors() -> List[BaseAssessor]:
    return [
        # ... existing assessors ...
        ArchitectureDecisionsAssessor(),  # Add this line
    ]
```
Testing Guidance
Test File: tests/unit/test_assessors_documentation.py
Test Cases to Add:
- test_adr_pass_multiple_records: Repository with 5+ well-formatted ADRs
- test_adr_fail_no_directory: No ADR directory found
- test_adr_partial_incomplete_template: ADRs exist but missing sections
- test_adr_fail_empty_directory: ADR directory exists but empty
- test_adr_not_applicable: Very small project (<1000 lines)
Note: AgentReady itself doesn't have ADRs yet (an opportunity for improvement), so this assessor will likely fail when run against the AgentReady repository.
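A sketch of how the first test might build its fixture repository; the assessor-facing assertions are left as comments because the `Finding` API exercised by the real tests is assumed, not shown here:

```python
from pathlib import Path

# Minimal well-formed ADR body used to populate fixture repositories.
ADR_BODY = """# ADR-{n:03d}: Example decision
**Status**: Accepted
## Context
...
## Decision
...
## Consequences
...
"""

def make_adr_repo(root: Path, count: int) -> Path:
    """Create docs/adr/ under root with `count` well-formed, sequentially numbered ADRs."""
    adr_dir = root / "docs" / "adr"
    adr_dir.mkdir(parents=True, exist_ok=True)
    for n in range(1, count + 1):
        (adr_dir / f"{n:04d}-example-decision.md").write_text(ADR_BODY.format(n=n))
    return adr_dir

def test_adr_pass_multiple_records(tmp_path):
    make_adr_repo(tmp_path, 5)
    # assessor = ArchitectureDecisionsAssessor()   # assumed constructor
    # finding = assessor.assess(tmp_path)          # assumed entry point
    # assert finding.status == "pass"
    assert len(list((tmp_path / "docs" / "adr").glob("*.md"))) == 5
```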
Dependencies
External Tools: None (file system and Markdown parsing)
Python Standard Library:
- pathlib.Path for directory/file operations
- re for parsing ADR sections
Remediation Steps
```python
def _create_remediation(self) -> Remediation:
    return Remediation(
        summary="Create architecture decision records for major design choices",
        steps=[
            "Create docs/adr/ directory",
            "Start with ADR template (Michael Nygard or MADR format)",
            "Document significant architectural decisions",
            "Include: Status, Context, Decision, Consequences",
            "Use sequential numbering: 0001-decision-name.md",
            "Update ADRs when decisions change (add Superseded status)",
        ],
        tools=["adr-tools", "log4brains"],
        commands=[
            "# Install ADR tools",
            "npm install -g adr-log",
            "",
            "# Initialize ADR directory",
            "mkdir -p docs/adr",
            "",
            "# Create first ADR",
            "adr new 'Use PostgreSQL for primary database'",
        ],
        examples=[
            """# docs/adr/0001-use-postgresql.md

# ADR-001: Use PostgreSQL for Primary Database

**Status**: Accepted
**Date**: 2025-01-15

## Context

Need persistent storage supporting ACID transactions, complex queries, and JSON data.

Considered alternatives:
- MongoDB (NoSQL, flexible schema)
- MySQL (relational, widely supported)
- PostgreSQL (relational, advanced features)

## Decision

Use PostgreSQL 14+ as primary database.

## Consequences

**Positive**:
- Strong ACID guarantees
- Rich query capabilities (joins, window functions, CTEs)
- JSON support via jsonb for semi-structured data
- Excellent ecosystem and tooling
- Proven at scale

**Negative**:
- More operational complexity than managed NoSQL
- Requires schema migration planning
- Horizontal scaling more complex than cloud-native databases

**Neutral**:
- Team needs PostgreSQL training
- Development environment requires PostgreSQL installation
""",
        ],
        citations=[
            Citation(
                source="AWS Prescriptive Guidance",
                title="Architecture Decision Records",
                url="https://docs.aws.amazon.com/prescriptive-guidance/latest/architectural-decision-records/",
                relevance="Guide to creating and maintaining ADRs",
            ),
            Citation(
                source="GitHub",
                title="ADR Template Collection",
                url="https://github.com/joelparkerhenderson/architecture-decision-record",
                relevance="Templates and examples for ADRs",
            ),
        ],
    )
```
Implementation Notes
- Directory Detection: Check docs/adr/, .adr/, adr/, decisions/
- File Counting: Use .glob("*.md") to find ADR files
- Naming Validation: Regex for sequential numbering: r'^\d{4}-.*\.md$' or r'^ADR-\d{3}-.*\.md$'
- Template Validation: Search for keywords in content: "Status:", "Context", "Decision", "Consequences"
- Sampling: Read 2-3 ADRs to validate format (not all, for performance)
- Not Applicable: Small projects (<1000 LOC) don't need formal ADRs
- Edge Cases: Empty ADR directory scores 0, not not_applicable
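The naming-validation note above can be sketched as follows; the rule that a repository must stick to a single one of the two conventions is an assumption drawn from the "consistent naming" criterion:

```python
import re

# The two conventions from the implementation notes above.
NUMBERED = re.compile(r"^\d{4}-.*\.md$")
ADR_PREFIXED = re.compile(r"^ADR-\d{3}-.*\.md$")

def has_consistent_naming(filenames: list[str]) -> bool:
    """True if every ADR filename follows one (and the same one) of the conventions."""
    if not filenames:
        return False
    return (all(NUMBERED.match(f) for f in filenames)
            or all(ADR_PREFIXED.match(f) for f in filenames))
```

Mixing the two styles (e.g. `0001-a.md` alongside `ADR-002-b.md`) counts as inconsistent under this sketch.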