Merged
6 changes: 4 additions & 2 deletions .claude/CLAUDE.md
@@ -207,7 +207,7 @@ New code must have >95% test coverage. Run `make coverage` to check.
### Naming

- Reduction tests: `test_<source>_to_<target>_closed_loop`
- Model tests: `test_<model>_basic`, `test_<model>_serialization`
- Model tests: descriptive names — e.g., `test_<model>_creation`, `test_<model>_evaluate_*`, `test_<model>_direction`, `test_<model>_solver`, `test_<model>_serialization`. Use whichever are relevant; there is no fixed per-model naming set.
- Solver tests: `test_<solver>_<problem>`
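
As a sketch, the three naming patterns above might produce test functions like the following; the problem and solver names (`vertex_cover`, `set_cover`, `brute_force`) are illustrative placeholders, not names from this repo:

```rust
// Hypothetical test functions following the naming conventions above.
// The problem and solver names are placeholders for illustration only.

#[test]
fn test_vertex_cover_to_set_cover_closed_loop() {
    // reduction test: test_<source>_to_<target>_closed_loop
}

#[test]
fn test_vertex_cover_serialization() {
    // model test: one of the descriptive per-model names
}

#[test]
fn test_brute_force_vertex_cover() {
    // solver test: test_<solver>_<problem>
}
```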

### Key Testing Patterns
@@ -218,6 +218,8 @@ See Key Patterns above for solver API signatures. Follow the reference files for

Unit tests in `src/unit_tests/` linked via `#[path]` (see Core Modules above). Integration tests in `tests/suites/`, consolidated through `tests/main.rs`. Canonical example-db coverage lives in `src/unit_tests/example_db.rs`.

Model review automation checks for a dedicated test file under `src/unit_tests/models/...` with at least 3 test functions. The exact split of coverage is judged per model during review.

## Documentation Locations
- `README.md` — Project overview and quickstart
- `.claude/` — Claude Code instructions and skills
@@ -258,7 +260,7 @@ Also add to the `display-name` dictionary:
]
```

Every directed reduction in the graph needs its own `reduction-rule` entry. The paper auto-checks completeness against `reduction_graph.json`.
Every directed reduction in the graph needs its own `reduction-rule` entry. The paper auto-checks completeness against the generated `reduction_graph.json` export.

## Complexity Verification Requirements

34 changes: 12 additions & 22 deletions .claude/skills/add-model/SKILL.md
@@ -75,7 +75,6 @@ Before implementing, make sure the plan explicitly covers these items that struc
- CLI discovery and `pred create <ProblemName>` support are included where applicable
- A canonical model example is registered for example-db / `pred create --example`
- `docs/paper/reductions.typ` adds both the display-name dictionary entry and the `problem-def(...)`
- `src/unit_tests/trait_consistency.rs` is updated

## Step 1: Determine the category

@@ -196,32 +195,24 @@ This example is now the canonical source for:

Create `src/unit_tests/models/<category>/<name>.rs`:

Required tests:
- `test_<name>_creation` -- construct an instance, verify dimensions
- `test_<name>_evaluation` -- verify `evaluate()` on valid and invalid configs
- `test_<name>_direction` -- verify optimization direction (if optimization problem)
- `test_<name>_serialization` -- round-trip serde test (optional but recommended)
- `test_<name>_solver` -- verify brute-force solver finds correct solutions
- `test_<name>_paper_example` -- **use the same instance from the paper example** (Step 6), verify the issue's expected outcome is valid/optimal and the solution count matches
Every model needs **at least 3 test functions** (the structural reviewer enforces this). Choose from the coverage areas below — pick whichever are relevant to the model:

The `test_<name>_paper_example` test is critical for consistency between code and paper. It must:
1. Construct the exact same instance shown in the paper's example figure
- **Creation/basic** — exercise constructor inputs, key accessors, `dims()` / `num_variables()`.
- **Evaluation** — valid and invalid configs so the feasibility boundary is explicit.
- **Direction** — verify optimization direction (optimization problems only).
- **Solver** — brute-force solver finds correct solutions (when the model is small enough).
- **Serialization** — round-trip serde (when the model is used in CLI/example-db flows).
- **Paper example** — verify the worked example from the paper entry (see below).

When you add `test_<name>_paper_example`, it should:
1. Construct the same instance shown in the paper's example figure
2. Evaluate the solution from the issue's **Expected Outcome** section as shown in the paper and assert it is valid (and optimal for optimization problems)
3. Use `BruteForce` to find all optimal/satisfying solutions and assert the count matches the paper's claim
3. Use `BruteForce` to confirm the claimed optimum/satisfying solution count when the instance is small enough for unit tests

This test should be written **after** Step 6 (paper entry), once the example instance and expected outcome are finalized. If writing tests before the paper, use the issue's Example Instance + Expected Outcome as the source of truth and come back to verify consistency.
This test is usually written **after** Step 6 (paper entry), once the example instance and expected outcome are finalized. If writing tests before the paper, use the issue's Example Instance + Expected Outcome as the source of truth and come back to verify consistency.

Link the test file via `#[cfg(test)] #[path = "..."] mod tests;` at the bottom of the model file.
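
For illustration, the paper-example pattern (exact instance, expected-outcome check, brute-force solution count) can be sketched as a self-contained program. The `MaxCut` struct, the unit-weight 4-cycle instance, and the expected counts below are hypothetical stand-ins, not this repo's actual model types or `BruteForce` API:

```rust
// Self-contained sketch of the paper-example test pattern.
// The MaxCut struct, the 4-cycle instance, and the expected counts are
// illustrative assumptions, not this repo's actual model or solver API.

struct MaxCut {
    num_vertices: usize,
    edges: Vec<(usize, usize, i64)>, // (u, v, weight)
}

impl MaxCut {
    // Cut value of a partition: total weight of edges crossing it.
    fn cut_value(&self, side: &[bool]) -> i64 {
        self.edges
            .iter()
            .filter(|&&(u, v, _)| side[u] != side[v])
            .map(|&(_, _, w)| w)
            .sum()
    }
}

// Brute force over all 2^n assignments; return (optimum, #optimal solutions).
fn brute_force(p: &MaxCut) -> (i64, usize) {
    let n = p.num_vertices;
    let (mut best, mut count) = (i64::MIN, 0);
    for mask in 0u32..(1 << n) {
        let side: Vec<bool> = (0..n).map(|i| (mask >> i) & 1 == 1).collect();
        let v = p.cut_value(&side);
        if v > best {
            best = v;
            count = 1;
        } else if v == best {
            count += 1;
        }
    }
    (best, count)
}

fn main() {
    // 1. Construct the exact instance from the paper's example figure
    //    (here: a unit-weight 4-cycle, as a made-up example).
    let problem = MaxCut {
        num_vertices: 4,
        edges: vec![(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)],
    };

    // 2. The Expected Outcome's solution must be optimal.
    let paper_solution = [true, false, true, false];
    let (best, count) = brute_force(&problem);
    assert_eq!(problem.cut_value(&paper_solution), best);

    // 3. The number of optimal solutions must match the paper's claim.
    assert_eq!(count, 2);
    println!("optimum = {best}, optimal solutions = {count}");
}
```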

## Step 5.5: Add trait_consistency entry

Add the new problem to `src/unit_tests/trait_consistency.rs`:

1. **`test_all_problems_implement_trait_correctly`** — add a `check_problem_trait(...)` call with a small instance
2. **`test_direction`** (optimization problems only) — add an `assert_eq!(...direction(), Direction::Minimize/Maximize)` entry

This is **required** for every new model — it ensures the Problem trait implementation is well-formed.

## Step 6: Document in paper

Write a `problem-def` entry in `docs/paper/reductions.typ`. **Reference example:** search for `problem-def("MaximumIndependentSet")` to see the gold-standard entry — use it as a template.
@@ -294,5 +285,4 @@ If running standalone (not inside `make run-plan`), invoke [review-implementatio
| Missing from CLI help table | Must add entry to "Flags by problem type" table in `cli.rs` `after_help` |
| Schema lists derived fields | Schema should list constructor params, not internal fields (e.g., `matrix, k` not `matrix, m, n, k`) |
| Missing canonical model example | Add a builder in `src/example_db/model_builders.rs` and keep it aligned with paper/example workflows |
| Forgetting trait_consistency | Must add entry in `test_all_problems_implement_trait_correctly` (and `test_direction` for optimization) in `src/unit_tests/trait_consistency.rs` |
| Paper example not tested | Must include `test_<name>_paper_example` that verifies the exact instance, solution, and solution count shown in the paper |
4 changes: 2 additions & 2 deletions .claude/skills/add-rule/SKILL.md
@@ -170,8 +170,8 @@ Checklist: notation self-contained, complexity cited, overhead consistent, examp
## Step 6: Regenerate exports and verify

```bash
cargo run --example export_graph # Update reduction_graph.json
cargo run --example export_schemas # Update problem schemas
cargo run --example export_graph # Generate reduction_graph.json for docs/paper builds
cargo run --example export_schemas # Generate problem schemas for docs/paper builds
make test clippy # Must pass
```

1 change: 0 additions & 1 deletion .claude/skills/final-review/SKILL.md
@@ -184,7 +184,6 @@ Verify the PR includes all required components. Check:
- [ ] Canonical model example function in the model file
- [ ] Paper section in `docs/paper/reductions.typ` (`problem-def` entry)
- [ ] `display-name` entry in paper
- [ ] `trait_consistency.rs` entry in `src/unit_tests/trait_consistency.rs` (`test_all_problems_implement_trait_correctly`, plus `test_direction` for optimization)
- [ ] Aliases: if provided, verify they are standard literature abbreviations (not made up); if empty, confirm no well-known abbreviation is missing; check no conflict with existing aliases

**For [Rule] PRs:**
7 changes: 3 additions & 4 deletions .claude/skills/issue-to-pr/SKILL.md
@@ -97,7 +97,7 @@ The plan MUST reference the appropriate implementation skill and follow its step
Include the concrete details from the issue (problem definition, reduction algorithm, example, etc.) mapped onto each step.

**Plan batching:** The paper writing step (add-model Step 6 / add-rule Step 5) MUST be in a **separate batch** from the implementation steps, so it gets its own subagent with fresh context. It depends on the implementation being complete (needs exports). Example batch structure for a `[Model]` plan:
- Batch 1: Steps 1-5.5 (implement model, register, CLI, tests, trait_consistency)
- Batch 1: Steps 1-5.5 (implement model, register, CLI, tests)
- Batch 2: Step 6 (write paper entry — depends on batch 1 for exports)

**Solver rules:**
@@ -223,12 +223,11 @@ EOF
python3 scripts/pipeline_pr.py comment --repo "$REPO" --pr "$PR" --body-file "$COMMENT_FILE"
rm -f "$COMMENT_FILE"

# Repo verification may regenerate tracked exports (notably after `make paper`).
# Repo verification may regenerate ignored doc exports (notably after `make paper`).
# Inspect the tree once more before pushing.
git status --short

# If expected generated exports changed, stage them before push.
git add docs/src/reductions/problem_schemas.json docs/src/reductions/reduction_graph.json
# Generated doc exports under docs/src/reductions/ are ignored; do not stage them.

# The issue plan file must be gone before push.
test ! -e docs/plans/<plan-file>.md
20 changes: 9 additions & 11 deletions .claude/skills/review-implementation/structural-reviewer-prompt.md
@@ -33,17 +33,15 @@ Given: problem name `P` = `{PROBLEM_NAME}`, category `C` = `{CATEGORY}`, file st
| 5 | `OptimizationProblem` or `SatisfactionProblem` impl | `Grep("(OptimizationProblem|SatisfactionProblem).*for.*{P}", file)` |
| 6 | `#[cfg(test)]` + `#[path = "..."]` test link | `Grep("#\\[path =", file)` |
| 7 | Test file exists | `Glob("src/unit_tests/models/{C}/{F}.rs")` |
| 8 | Test has creation test | `Grep("fn test_.*creation|fn test_{F}.*basic", test_file)` |
| 9 | Test has evaluation test | `Grep("fn test_.*evaluat", test_file)` |
| 10 | Registered in `{C}/mod.rs` | `Grep("mod {F}", "src/models/{C}/mod.rs")` |
| 11 | Re-exported in `models/mod.rs` | `Grep("{P}", "src/models/mod.rs")` |
| 12 | `declare_variants!` entry exists | `Grep("declare_variants!|default opt|default sat|opt {P}|sat {P}", file)` |
| 13 | CLI `resolve_alias` entry | `Grep("{P}", "problemreductions-cli/src/problem_name.rs")` |
| 14 | CLI `create` support | `Grep('"{P}"', "problemreductions-cli/src/commands/create.rs")` |
| 15 | Canonical model example registered | `Grep("{P}", "src/example_db/model_builders.rs")` |
| 16 | Paper `display-name` entry | `Grep('"{P}"', "docs/paper/reductions.typ")` |
| 17 | Paper `problem-def` block | `Grep('problem-def.*"{P}"', "docs/paper/reductions.typ")` |
| 18 | `trait_consistency` entry | `Grep("{P}", "src/unit_tests/trait_consistency.rs")` |
| 8 | Test file has >= 3 test functions | `Grep("fn test_", test_file)` — count matches, FAIL if < 3 |
| 9 | Registered in `{C}/mod.rs` | `Grep("mod {F}", "src/models/{C}/mod.rs")` |
| 10 | Re-exported in `models/mod.rs` | `Grep("{P}", "src/models/mod.rs")` |
| 11 | `declare_variants!` entry exists | `Grep("declare_variants!|default opt|default sat|opt {P}|sat {P}", file)` |
| 12 | CLI `resolve_alias` entry | `Grep("{P}", "problemreductions-cli/src/problem_name.rs")` |
| 13 | CLI `create` support | `Grep('"{P}"', "problemreductions-cli/src/commands/create.rs")` |
| 14 | Canonical model example registered | `Grep("{P}", "src/example_db/model_builders.rs")` |
| 15 | Paper `display-name` entry | `Grep('"{P}"', "docs/paper/reductions.typ")` |
| 16 | Paper `problem-def` block | `Grep('problem-def.*"{P}"', "docs/paper/reductions.typ")` |
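
Check 8's counting logic can be sketched in shell; the sample test file below is fabricated on the spot purely to make the sketch self-contained, and the real check would grep the model's actual test file instead:

```shell
# Demonstrate check 8: count `fn test_` functions, FAIL if fewer than 3.
# The sample file is created here only so the sketch runs standalone.
test_file=$(mktemp)
cat > "$test_file" <<'EOF'
fn test_model_creation() {}
fn test_model_evaluation() {}
fn test_model_solver() {}
EOF
count=$(grep -c "fn test_" "$test_file")
if [ "$count" -ge 3 ]; then
  echo "PASS: $count test functions"
else
  echo "FAIL: only $count test functions (need >= 3)"
fi
rm -f "$test_file"
```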

## Rule Checklist

2 changes: 1 addition & 1 deletion .claude/skills/review-pipeline/SKILL.md
@@ -329,7 +329,7 @@ Completed: 2/2 | All moved to Final review
| Guessing on an issue card with multiple linked repo PRs | Stop, show options to the user, and recommend the most likely correct OPEN PR |
| Picking a PR before Copilot has reviewed | Inspect the checked-out diff and PR body first. If the PR is incomplete, comment and move it back to Ready. If it is review-ready, request Copilot review and switch to another item instead of waiting |
| Missing project scopes | Run `gh auth refresh -s read:project,project` |
| Skipping review-implementation | Always run structural completeness check in Step 2b — it catches gaps Copilot misses (paper entries, CLI registration, trait_consistency) |
| Skipping review-implementation | Always run structural completeness check in Step 2b — it catches gaps Copilot misses (paper entries, CLI registration, example-db wiring) |
| Skipping agentic tests | Always run test-feature even if CI is green |
| Not checking out the right branch | Use `gh pr view` to get the exact branch name |
| Waiting idle for Copilot | Request the review, leave the PR in Review pool, and keep triaging other items in the same run |
3 changes: 2 additions & 1 deletion docs/src/design.md
@@ -230,7 +230,8 @@ Exported files:
- [reduction_graph.json](reductions/reduction_graph.json) — all problem variants and reduction edges
- [problem_schemas.json](reductions/problem_schemas.json) — field definitions for each problem type

Regenerate with `cargo run --example export_graph` and `cargo run --example export_schemas`.
These JSON assets are generated during `make doc`, `make mdbook`, and `make paper`; they are build artifacts, not committed source files.
Generate them manually with `cargo run --example export_graph` and `cargo run --example export_schemas` when you need the raw exports locally.

### Path finding

1 change: 1 addition & 0 deletions docs/src/getting-started.md
@@ -207,6 +207,7 @@ problemreductions = { version = "0.2", default-features = false }

The library exports machine-readable metadata useful for tooling and research:

These files are generated when you build the docs locally.
- [reduction_graph.json](reductions/reduction_graph.json) lists all problem variants and reduction edges
- [problem_schemas.json](reductions/problem_schemas.json) lists field definitions for each problem type
