feat: add supermodel restore command#18

Merged
greynewell merged 4 commits into main from feat/restore
Apr 2, 2026

Conversation

@greynewell
Contributor

@greynewell greynewell commented Apr 2, 2026

Summary

  • Ports the "context bomb" functionality from supermodeltools/Uncompact into the supermodel CLI as supermodel restore
  • Adds AnalyzeDomains() API method that captures the full SupermodelIR response — semantic domains, responsibilities, subdomains, key files, and dependency relationships (previously Analyze() only captured the inner node/edge graph)
  • New internal/restore package: ProjectGraph model, FromSupermodelIR() conversion, local file-tree graph builder, and Markdown renderer with token budgeting

How it works

API mode (default when authenticated):

supermodel restore

Uploads a git archive to /v1/graphs/supermodel, parses the full structured response into a ProjectGraph with semantic domains and critical files, and renders it to Markdown.

Local mode (no API key, or --local):

supermodel restore --local

Pure file-tree scan — groups files by directory, detects language and up to 15 external deps from go.mod / package.json / requirements.txt / Cargo.toml / Gemfile / pyproject.toml. No network calls.

Flags: --local, --max-tokens N (default 2000), --dir PATH

New packages / files

• cmd/restore.go: Cobra command, zip creation, API/local dispatch
• internal/restore/types.go: ProjectGraph model + FromSupermodelIR()
• internal/restore/local.go: Local file-tree graph builder
• internal/restore/render.go: Markdown renderer with progressive token truncation
• internal/api/types.go: SupermodelIR + IR* raw response types
• internal/api/client.go: AnalyzeDomains(), pollUntilComplete() helper

Test plan

  • supermodel restore --local — verify Markdown output on any repo
  • supermodel restore with API key — verify domain map rendered correctly
  • supermodel restore --max-tokens 200 — verify truncation kicks in and budget is respected
  • No API key configured → falls back to local mode automatically (no error)
  • Architecture check passes (internal/restore only imports internal/api, a shared-kernel package)

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Adds a new CLI command to generate a Markdown “context bomb” project summary (supports --local, --max-tokens, --dir).
    • Uses remote analysis when available and automatically falls back to a local repo scan; can package the repo for remote processing.
    • Configurable token budget with intelligent truncation and optional inclusion of CLAUDE.md instructions when present.
  • Documentation

    • Added package-level docs describing the restore command and output format.

Ports the "Uncompact" context-bomb functionality from
github.com/supermodeltools/Uncompact into the supermodel CLI as
`supermodel restore`.

The command generates a high-level project summary (domains, critical
files, tech stack, stats) and writes it to stdout so Claude can
re-establish codebase understanding after a context compaction event.

Two modes:
- API mode (default when authenticated): calls /v1/graphs/supermodel
  via the new AnalyzeDomains() method which returns the full
  SupermodelIR — semantic domains, responsibilities, subdomains, and
  dependency relationships — then converts to a ProjectGraph.
- Local mode (--local or no API key): scans the repo file tree,
  groups files by directory, detects language and external deps from
  manifest files (go.mod, package.json, requirements.txt, Cargo.toml,
  Gemfile, pyproject.toml), no network calls required.

New packages:
- internal/restore: ProjectGraph model, FromSupermodelIR() conversion,
  local graph builder, Markdown renderer with configurable token budget
  (default 2000) and progressive truncation strategy.

New API types (internal/api/types.go):
- SupermodelIR and IR* raw response types for /v1/graphs/supermodel.

New API method (internal/api/client.go):
- AnalyzeDomains() — shares polling infrastructure with Analyze() via
  the extracted pollUntilComplete() helper.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@coderabbitai

coderabbitai bot commented Apr 2, 2026

Walkthrough

Adds a new supermodel restore CLI command that builds a project “context bomb” by either uploading a repo ZIP to a supermodel API (parsing a SupermodelIR) or scanning the repo locally; it then converts data into a ProjectGraph and renders token-budgeted Markdown to stdout.

Changes

• CLI command (cmd/restore.go): New Cobra subcommand supermodel restore: resolves the project dir, finds the git root, reads CLAUDE.md, creates a repo ZIP (tries git archive, falls back to walking files), chooses remote vs local mode (flags + API key), calls AnalyzeDomains with a timeout, falls back to a local scan on failure, and prints rendered Markdown to stdout.
• API client & job polling (internal/api/client.go): Added pollUntilComplete helper; Analyze reuses it; new AnalyzeDomains(ctx, zipPath, idempotencyKey) (*SupermodelIR, error) submits the ZIP and polls until the job completes, returning the full SupermodelIR.
• API IR types (internal/api/types.go): New exported IR types modeling the /v1/graphs/supermodel response: SupermodelIR, IRMetadata, IRGraph, IRNode, IRRelationship, IRDomain, IRSubdomain.
• Restore types & conversion (internal/restore/types.go): New render model types: ProjectGraph, Domain, Subdomain, CriticalFile, Stats, CircularDependencyCycle; added FromSupermodelIR(ir, projectName) *ProjectGraph to convert API IR into a ProjectGraph.
• Local repository analysis (internal/restore/local.go): New local analysis: BuildProjectGraph(ctx, rootDir, projectName), ReadClaudeMD(rootDir), DetectExternalDeps(rootDir). Walks the repo (skipping caches, dotfiles, symlinks, and files >10MB), counts extensions/languages, extracts a README snippet, selects domains/key files, and computes critical files and external deps.
• Rendering & token budget (internal/restore/render.go): New RenderOptions, Render(graph, projectName, opts), and CountTokens. Template-based Markdown rendering with token budgeting and a truncateToTokenBudget fallback to fit MaxTokens.
• Package docs (internal/restore/doc.go): Added package documentation describing the restore command, API vs local sourcing, and rendering/token-budget behavior.

Sequence Diagram

sequenceDiagram
    participant User
    participant CLI as "cmd/restore"
    participant API as "internal/api/client"
    participant Local as "internal/restore (local)"
    participant Render as "internal/restore (render)"

    User->>CLI: run 'supermodel restore'
    alt --local flag OR no API key
        CLI->>Local: BuildProjectGraph(rootDir)
        Local-->>CLI: ProjectGraph
    else Remote mode
        CLI->>CLI: restoreCreateZip(rootDir)
        CLI->>API: AnalyzeDomains(zipPath, idempotencyKey)
        API->>API: postZip + pollUntilComplete
        API-->>CLI: SupermodelIR
        alt API returns IR
            CLI->>CLI: FromSupermodelIR → ProjectGraph
        else API fails
            CLI->>Local: BuildProjectGraph(rootDir)
            Local-->>CLI: ProjectGraph
        end
    end
    CLI->>Render: Render(projectGraph, projectName, opts)
    Render->>Render: CountTokens + template rendering
    alt tokens > MaxTokens
        Render->>Render: truncateToTokenBudget
    end
    Render-->>CLI: Markdown output
    CLI-->>User: stdout

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

A tiny CLI zips up the tree or reads the files nearby,
Calls the supermodel or scans locally to pry,
Counts tokens, trims the parts that won't fit,
Renders domains and key files in a tidy writ,
Pushes Markdown to stdout — context revived. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

• Docstring Coverage (⚠️ Warning): Docstring coverage is 54.17%, which is below the required 80.00% threshold. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

• Title check (✅ Passed): The title clearly and specifically describes the main change: adding a new supermodel restore command.
• Description check (✅ Passed): The description is comprehensive, covering What (the feature ported), Why (context from the original source), and includes a detailed How section with command examples, flags, files modified, and a thorough test plan with specific test cases.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (4)
internal/restore/local.go (1)

285-303: npm dependencies might get deprioritized unexpectedly.

Here's what happens:

  1. Line 285-288: deps is sorted and capped to 15
  2. Line 290-295: npmRuntime deps are added (if not already in seen)
  3. Line 296-302: npmDev deps are added (if not already in seen)

The issue is that if you have, say, a go.mod AND a package.json, the Go deps will fill up slots first, then npm runtime deps won't make it in because deps is already at 15.

I think you meant for npm runtime to have priority over other manifests' deps? If so, you might want to restructure this:

💡 Suggested approach

One way to fix this:

// Collect all deps first, then prioritize
sort.Strings(npmRuntime)
sort.Strings(npmDev)

// Add npm runtime first (highest priority)
for _, name := range npmRuntime {
    add(name)
}
// Then other deps
sort.Strings(deps)
for _, name := range deps {
    add(name)
}
// Then npm dev
for _, name := range npmDev {
    add(name)
}

// Now cap
if len(deps) > maxDeps {
    deps = deps[:maxDeps]
}

Or if the current behavior is intentional (npm deps as supplementary), just add a comment explaining the priority order.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/restore/local.go` around lines 285 - 303, The current logic sorts
and caps deps (deps, maxDeps) before attempting to add npmRuntime and npmDev,
which lets earlier manifest deps prefill the cap and deprioritize npm
runtime/dev; change the flow in the function that builds deps so npmRuntime is
added first (call add(name) for each name in sorted npmRuntime), then add the
existing deps list (sorted), then npmDev, and only after all prioritized
additions apply the cap (truncate deps to maxDeps); reference the existing
symbols deps, npmRuntime, npmDev, add, and maxDeps when making this change (or
alternatively document the current priority explicitly if the behavior is
intentional).
cmd/restore.go (2)

134-157: Consider reusing existing zip creation logic.

There's already a createZip function in internal/analyze/zip.go (see the relevant code snippet) that does essentially the same thing: git archive with directory-walk fallback. You might want to export that and reuse it here to avoid maintaining two copies.

That said, I get that you might want the restore-specific version to have slightly different behavior (like the 10MB file limit). If that's intentional, maybe add a quick comment explaining why this is separate from the analyze version.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/restore.go` around lines 134 - 157, The restoreCreateZip function
duplicates logic already implemented by createZip in internal/analyze/zip.go;
either import and call that createZip (export it if needed) or, if
restoreCreateZip must keep restore-specific behavior (e.g., the 10MB file
limit), add a short comment above restoreCreateZip explaining the intentional
differences and why it cannot reuse createZip; if choosing to reuse, replace
restoreCreateZip body with a call to createZip(dir) and remove restoreWalkZip
duplication, ensuring any restore-specific limits are handled by parameters or a
thin wrapper around createZip.

176-204: Minor: restoreWalkZip silently swallows some errors.

Lines 177-179 and 180-183 return nil on errors, which means walk errors and filepath.Rel failures are silently ignored. This is probably intentional for robustness (you don't want one bad file to kill the whole zip), but worth noting.

Also, there's a small edge case: you skip hidden directories (line 185 via skipDirs) and check file size (line 190), but you don't explicitly skip hidden files in this function (unlike collectFiles in local.go which does at line 345). Not a big deal since git archive handles this in the happy path, and hidden files in the fallback path might actually be wanted.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/restore.go` around lines 176 - 204, The filepath.Walk callback inside
restoreWalkZip is silently swallowing errors from the walk and filepath.Rel
calls and also doesn't skip hidden files; update the anonymous walk function
used by filepath.Walk so that it returns the encountered errors (or wraps them
with context) instead of returning nil for the errors from the incoming err
parameter and filepath.Rel, and add the same hidden-file check used by
collectFiles (i.e., skip files whose base name starts with '.' before
processing) so hidden files are treated consistently; refer to the anonymous
function passed to filepath.Walk, filepath.Rel, skipDirs and the info.IsDir/size
checks to locate where to change the returns and add the hidden-file guard.
internal/restore/types.go (1)

136-136: Static analysis: Consider using index-based iteration.

The linter flags that iterating over domains copies 128 bytes per iteration. You can use index-based iteration to avoid this:

🔧 Proposed fix
-	for _, d := range domains {
-		seen := make(map[string]struct{}, len(d.KeyFiles))
-		for _, f := range d.KeyFiles {
+	for i := range domains {
+		seen := make(map[string]struct{}, len(domains[i].KeyFiles))
+		for _, f := range domains[i].KeyFiles {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/restore/types.go` at line 136, The range loop iterates over domains
by value causing a ~128-byte copy each iteration; change the iteration to
index-based to avoid copying (replace the `for _, d := range domains {` loop
with an index loop and take a pointer or reference to the element, e.g. use `for
i := range domains { d := &domains[i] }`), and update any subsequent uses of `d`
in the loop to use the pointer/reference form (or the indexed value) so
semantics remain identical while eliminating the per-iteration copy.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/restore/local.go`:
- Around line 499-526: The function localTopFiles is iterating over a slice of
Domain by value which copies the 128-byte Domain struct each iteration; change
the outer loop to an index-based iteration (e.g., for i := range domains) and
reference domains[i] (or take its pointer) instead of for _, d := range domains
to avoid large copies when accessing Domain.KeyFiles; keep the inner loop over
KeyFiles unchanged and preserve the rest of the sorting/truncation logic.

In `@internal/restore/render.go`:
- Around line 150-170: Replace allocations from WriteString(fmt.Sprintf(...)) by
writing formatted output directly into the strings.Builder: use
fmt.Fprintf(&hdr, "...", args...) for the header lines that currently call
hdr.WriteString(fmt.Sprintf(...)) (replace both the project header and the final
formatted line that references graph.Language, graph.Stats.TotalFiles and
graph.Stats.TotalFunctions), and replace hdr.WriteString(banner + "\n") with
fmt.Fprintln(&hdr, banner) or fmt.Fprintf(&hdr, "%s\n", banner); reference
symbols: hdr (strings.Builder), fmt.Sprintf calls, fmt.Fprintf, fmt.Fprintln,
and the graph/opts variables used in those formatted strings.
- Around line 248-274: The function buildDomainSection currently takes Domain by
value causing 128-byte copies; change its signature to accept *Domain and update
all call sites to pass a pointer (e.g., &d or &domains[i]) instead of copying
the struct, and in callers that iterate over slices of Domain switch to
index-based loops (for i := range domains { d := &domains[i]; ... }) to avoid
per-iteration copies; also update the call in truncateToTokenBudget to pass a
*Domain. Ensure all references inside buildDomainSection use the pointer
receiver (d.Field -> d.Field) accordingly.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e3ca8da6-cdfd-45dd-870f-dc3edf3c7761

📥 Commits

Reviewing files that changed from the base of the PR and between 53b4918 and 7f3c412.

📒 Files selected for processing (7)
  • cmd/restore.go
  • internal/api/client.go
  • internal/api/types.go
  • internal/restore/doc.go
  • internal/restore/local.go
  • internal/restore/render.go
  • internal/restore/types.go

greynewell and others added 2 commits April 2, 2026 13:58
- Add named returns to Render() and truncateToTokenBudget() (unnamedResult)
- Replace sb.WriteString(fmt.Sprintf(...)) with fmt.Fprintf (QF1012)
- Change buildDomainSection to accept *Domain pointer (hugeParam)
- Fix rangeValCopy in computeCriticalFiles, localTopFiles, truncateToTokenBudget

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- restoreWalkZip: propagate walk/Rel errors instead of swallowing them
- restoreWalkZip: skip hidden files (dot-prefixed) consistent with analyze/zip.go
- restoreCreateZip: add comment explaining per-slice zip ownership pattern
- DetectExternalDeps: add comment clarifying npm dep priority order

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
internal/restore/types.go (1)

84-111: Deduplicate relationship/dependency lists before storing them.

DependsOn (Line 86) and ExternalDeps (Line 109) currently preserve duplicates from IR input. That can bloat rendered output and waste token budget. Consider deduping while preserving first-seen order.

♻️ Suggested patch
 	// Build domain → dependsOn map from DOMAIN_RELATES edges.
 	dependsOn := make(map[string][]string)
+	depSeen := make(map[string]map[string]struct{})
 	for _, rel := range ir.Graph.Relationships {
 		if rel.Type == "DOMAIN_RELATES" && rel.Source != "" && rel.Target != "" {
-			dependsOn[rel.Source] = append(dependsOn[rel.Source], rel.Target)
+			if depSeen[rel.Source] == nil {
+				depSeen[rel.Source] = make(map[string]struct{})
+			}
+			if _, ok := depSeen[rel.Source][rel.Target]; ok {
+				continue
+			}
+			depSeen[rel.Source][rel.Target] = struct{}{}
+			dependsOn[rel.Source] = append(dependsOn[rel.Source], rel.Target)
 		}
 	}
@@
-	var extDeps []string
+	var extDeps []string
+	extSeen := make(map[string]struct{})
 	for _, node := range ir.Graph.Nodes {
 		if node.Type == "ExternalDependency" && node.Name != "" {
+			if _, ok := extSeen[node.Name]; ok {
+				continue
+			}
+			extSeen[node.Name] = struct{}{}
 			extDeps = append(extDeps, node.Name)
 		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/restore/types.go` around lines 84 - 111, The DependsOn slices built
from ir.Graph.Relationships and the extDeps slice from ir.Graph.Nodes can
contain duplicates; iterate each source's dependency list and extDeps with a
small seen map to filter out duplicates while preserving first-seen order before
assigning to Domain.DependsOn and before returning/using extDeps; specifically
dedupe entries coming from ir.Graph.Relationships when you populate
dependsOn[rel.Source] and dedupe when building extDeps from nodes (use the
unique symbols dependsOn, Domain.DependsOn, extDeps, ir.Graph.Relationships,
ir.Graph.Nodes to locate where to insert the seen-map filtering).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/restore/local.go`:
- Around line 321-324: The current filepath.WalkDir invocation assigns walkErr
but the walk callback swallows errors by returning nil when werr is non-nil,
which hides traversal failures; update the anonymous walk function passed to
filepath.WalkDir (the closure that accepts path, d fs.DirEntry, werr error) to
propagate the error instead of returning nil (e.g., return werr or wrap it with
context) so walkErr receives the real failure and the caller can handle/log it
appropriately for rootDir traversal.

In `@internal/restore/render.go`:
- Around line 88-99: The Render function currently dereferences graph
(ProjectGraph) without checking for nil; add an early guard at the start of
Render that returns a clear error when graph == nil (e.g., fmt.Errorf("nil graph
passed to Render") or a package-specific error) so callers get a controlled
error instead of a panic; update the top of Render (before using graph.Cycles or
other fields) to perform this nil check and return immediately with an
appropriate error and zeroed output/tokens.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 1062f35e-2a58-45f9-bc5a-6918491d6901

📥 Commits

Reviewing files that changed from the base of the PR and between 7f3c412 and 3286bc1.

📒 Files selected for processing (3)
  • internal/restore/local.go
  • internal/restore/render.go
  • internal/restore/types.go


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
cmd/restore.go (3)

148-152: Minor: stderr output inconsistency.

Line 149 sends git archive errors to os.Stderr directly, but the rest of the command uses cmd.ErrOrStderr() (e.g., line 90, 125). In tests where stderr is captured/redirected, git errors would leak to the real stderr.

Probably fine for now since git failures trigger the fallback anyway and aren't user-facing errors, but worth knowing if you ever want to test this path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/restore.go` around lines 148 - 152, The git archive invocation sets
cmd.Stderr = os.Stderr which leaks errors to the real stderr; change it to use
the same helper as elsewhere by assigning cmd.Stderr = cmd.ErrOrStderr() (i.e.,
update the cmd variable created in the git archive block so it uses
cmd.ErrOrStderr() instead of os.Stderr to keep stderr capture consistent with
the other uses on lines referencing cmd.ErrOrStderr()).

83-84: Consider logging config load errors at debug level.

Right now, if config.Load() fails because of a corrupted config file (say, invalid JSON), the user just silently falls back to local mode with no indication. That's probably fine for "config doesn't exist" but could be confusing for "config exists but is broken."

Not a blocker — the current behavior is safe — but a debug-level log here could save someone 20 minutes of head-scratching someday.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/restore.go` around lines 83 - 84, Capture and check the error returned
from config.Load() instead of discarding it (replace "cfg, _ := config.Load()"
with "cfg, err := config.Load()"), and if err != nil log the error at debug
level (e.g., logger.Debugf or the existing debug logging facility) with context
like "failed to load config" so corrupt-config cases are visible; keep the
subsequent hasAPIKey logic (uses cfg and hasAPIKey) unchanged.

131-132: Heads up: Description field will be empty in API mode but populated in local mode.

When you use FromSupermodelIR here (API mode), the Description field stays empty. But in local mode, BuildProjectGraph reads it from the README or package.json. Since the render template uses this field (see the template on line 45 and line 250), your rendered output will look different between the two modes.

For example:

  • API mode: No description shown
  • Local mode: Description appears below the project name

If this is intentional (API relies on other domain data instead), you're good. Just wanted to flag it in case you expected consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/restore.go` around lines 131 - 132, The graph produced by
restore.FromSupermodelIR(ir, projectName) leaves the Description field empty in
API mode, causing inconsistent rendering versus BuildProjectGraph (local mode)
which populates Description from README/package.json; update the restore flow to
either populate graph.Description after calling FromSupermodelIR (e.g., copy
from ir.Description or lookup README/package.json for the project) or add a
fallback before rendering so templates use graph.Description || ir.Description
|| metadata README content; touch the code around FromSupermodelIR and the
rendering entrypoint to ensure Description is consistently set in API mode.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cmd/restore.go`:
- Around line 200-206: The Walk callback opens files with src, defers
src.Close(), and returns after io.Copy, which leaks file descriptors because
defers run only when Walk returns; change to explicitly close the file
immediately after copying (call src.Close() right after io.Copy) or move the
open/copy/close logic into a helper like copyFileToZip(w io.Writer, path string)
or an inline immediately-invoked closure so the defer runs per file; update the
code paths that call os.Open, src, and io.Copy accordingly to ensure files are
closed promptly.
- Line 126: The idempotency key passed to client.AnalyzeDomains is currently
constructed as "restore-"+projectName which can return cached results across
code changes; change the key generation used in the call to AnalyzeDomains
(where ir, err := client.AnalyzeDomains(ctx, zipPath, "restore-"+projectName) is
invoked) to include a deterministic repo state identifier such as the current
git commit hash (or a content hash of the repo) so the key changes when code
changes (e.g., build or fetch the git commit SHA and use
"restore-"+projectName+"-"+commitSHA or similar) to prevent stale cached
analysis.

---

Nitpick comments:
In `@cmd/restore.go`:
- Around line 148-152: The git archive invocation sets cmd.Stderr = os.Stderr
which leaks errors to the real stderr; change it to use the same helper as
elsewhere by assigning cmd.Stderr = cmd.ErrOrStderr() (i.e., update the cmd
variable created in the git archive block so it uses cmd.ErrOrStderr() instead
of os.Stderr to keep stderr capture consistent with the other uses on lines
referencing cmd.ErrOrStderr()).
- Around line 83-84: Capture and check the error returned from config.Load()
instead of discarding it (replace "cfg, _ := config.Load()" with "cfg, err :=
config.Load()"), and if err != nil log the error at debug level (e.g.,
logger.Debugf or the existing debug logging facility) with context like "failed
to load config" so corrupt-config cases are visible; keep the subsequent
hasAPIKey logic (uses cfg and hasAPIKey) unchanged.
- Around line 131-132: The graph produced by restore.FromSupermodelIR(ir,
projectName) leaves the Description field empty in API mode, causing
inconsistent rendering versus BuildProjectGraph (local mode) which populates
Description from README/package.json; update the restore flow to either populate
graph.Description after calling FromSupermodelIR (e.g., copy from ir.Description
or lookup README/package.json for the project) or add a fallback before
rendering so templates use graph.Description || ir.Description || metadata
README content; touch the code around FromSupermodelIR and the rendering
entrypoint to ensure Description is consistently set in API mode.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 548d4d03-0791-4b81-85f9-659604bfe0eb

📥 Commits

Reviewing files that changed from the base of the PR and between 3286bc1 and 01aa995.

📒 Files selected for processing (2)
  • cmd/restore.go
  • internal/restore/local.go
✅ Files skipped from review due to trivial changes (1)
  • internal/restore/local.go

Contributor

@jonathanpopham jonathanpopham left a comment

Tested & Approved

Pulled the branch, built it, ran the full test suite, linter, and exercised the command manually.

Verification results:

  • Builds clean — all existing tests pass, go vet and golangci-lint are green
  • Vertical slice is clean — internal/restore only imports internal/api (shared kernel), no cross-slice contamination
  • Nothing broken — no regressions in existing packages
  • Feature works — local mode, API mode, token truncation, fallback, all flags behave correctly

It's good to merge. The shortcomings I found (no unit tests, arch-docs-output noise in local mode on large repos, edge case handling for non-existent --dir) are all "fix later" material, not blockers. The slice is self-contained so improving it later won't touch anything else.

Nice touches:

  • Graceful API → local fallback with stderr warning
  • Progressive token truncation drops content in priority order
  • pollUntilComplete extraction in client.go is a clean refactor of shared infra
  • The intentional zip duplication comment at restoreCreateZip is appreciated — makes the vertical slice rationale explicit

- local.go: propagate WalkDir errors instead of silently ignoring them
- render.go: guard against nil graph in Render() to avoid panic
- cmd/restore.go: use ZIP content hash as idempotency key (consistent with
  other commands; avoids stale cached results across code changes)
- cmd/restore.go: fix file handle leak in restoreWalkZip — extract
  copyFileIntoZip helper so src.Close() runs per-file, not at Walk return

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (3)
internal/restore/local.go (2)

72-86: Minor: truncation message says "chars" but counts runes.

Super small nit, but Line 83 says "first 3000 chars" while you're actually counting runes (which is the correct behavior for multi-byte characters like emoji or CJK). Consider updating the message to say "runes" or "characters" for accuracy, though this won't affect functionality.

✏️ Suggested tweak
 	if len(runes) > maxRunes {
-		content = string(runes[:maxRunes]) + "\n\n*(CLAUDE.md truncated — showing first 3000 chars)*"
+		content = string(runes[:maxRunes]) + "\n\n*(CLAUDE.md truncated — showing first 3000 characters)*"
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/restore/local.go` around lines 72 - 86, The truncation message in
ReadClaudeMD currently says "first 3000 chars" while the code slices by runes
(maxRunes); update the message string used when truncating (the literal appended
after string(runes[:maxRunes])) to accurately describe what is being counted
(e.g., "first 3000 runes" or "first 3000 characters") so it matches the behavior
controlled by maxRunes in ReadClaudeMD.

88-143: Dependency name collision risk with go.mod parsing.

Hey, quick heads up on Line 143 - you're extracting just the last segment of Go module paths. So github.com/user/zap and github.com/other/zap would both become just "zap". This is probably fine for a high-level summary, but worth being aware of if you ever need to disambiguate.

segs := strings.Split(mod, "/")
add(segs[len(segs)-1])  // "github.com/uber-go/zap" → "zap"

Not blocking - just something to keep in mind.
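If disambiguation ever matters, keeping the last two path segments is a cheap middle ground between the full module path and the bare final segment. A sketch, with `depName` as an illustrative name:

```go
package main

import (
	"fmt"
	"strings"
)

// depName keeps the last two path segments of a Go module path, so modules
// that share a final segment remain distinguishable. Single-segment paths
// are returned unchanged.
func depName(mod string) string {
	segs := strings.Split(mod, "/")
	if len(segs) >= 2 {
		return strings.Join(segs[len(segs)-2:], "/")
	}
	return mod
}

func main() {
	fmt.Println(depName("github.com/uber-go/zap")) // uber-go/zap
	fmt.Println(depName("github.com/other/zap"))   // other/zap
}
```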

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/restore/local.go` around lines 88 - 143, DetectExternalDeps
currently reduces Go module imports to only their last path segment
(segs[len(segs)-1]) which causes collisions like github.com/user/zap and
github.com/other/zap; update the Go parsing branch to add the full module path
(use mod) instead of only the final segment so the add function records the full
module identifier (refer to DetectExternalDeps, the local add closure, and the
mod/segs usage) — alternatively, if shorter names are desired, normalize to a
longer disambiguating slice (e.g., last two path segments) before calling add.
cmd/restore.go (1)

84-85: Config load errors are silently ignored.

Hey, just noting that cfg, _ := config.Load() ignores any errors from loading config. This means if there's a corrupted config file or permission issue, the user won't know why API mode isn't working - they'll just silently get local mode.

This might be intentional (treat config errors as "no config"), but consider at least logging a debug message for troubleshooting purposes.

💡 Optional: log config load errors
-	cfg, _ := config.Load()
+	cfg, cfgErr := config.Load()
+	if cfgErr != nil {
+		// Config errors are non-fatal; we'll fall back to local mode
+		fmt.Fprintf(cmd.ErrOrStderr(), "debug: config load failed (%v), using local mode\n", cfgErr)
+	}
 	hasAPIKey := cfg != nil && cfg.APIKey != ""
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/restore.go` around lines 84 - 85, The code currently ignores errors from
config.Load() which can hide why API mode isn't enabled; change the call from
cfg, _ := config.Load() to capture the error (e.g., cfg, err := config.Load()),
and if err != nil log a debug or warning message (using the existing logger in
this package) before continuing so operators can see config load failures; keep
the existing hasAPIKey logic (hasAPIKey := cfg != nil && cfg.APIKey != "") so
behavior is unchanged, but ensure the error is surfaced via logging for
troubleshooting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@cmd/restore.go`:
- Around line 84-85: The code currently ignores errors from config.Load() which
can hide why API mode isn't enabled; change the call from cfg, _ :=
config.Load() to capture the error (e.g., cfg, err := config.Load()), and if err
!= nil log a debug or warning message (using the existing logger in this
package) before continuing so operators can see config load failures; keep the
existing hasAPIKey logic (hasAPIKey := cfg != nil && cfg.APIKey != "") so
behavior is unchanged, but ensure the error is surfaced via logging for
troubleshooting.

In `@internal/restore/local.go`:
- Around line 72-86: The truncation message in ReadClaudeMD currently says
"first 3000 chars" while the code slices by runes (maxRunes); update the message
string used when truncating (the literal appended after
string(runes[:maxRunes])) to accurately describe what is being counted (e.g.,
"first 3000 runes" or "first 3000 characters") so it matches the behavior
controlled by maxRunes in ReadClaudeMD.
- Around line 88-143: DetectExternalDeps currently reduces Go module imports to
only their last path segment (segs[len(segs)-1]) which causes collisions like
github.com/user/zap and github.com/other/zap; update the Go parsing branch to
add the full module path (use mod) instead of only the final segment so the add
function records the full module identifier (refer to DetectExternalDeps, the
local add closure, and the mod/segs usage) — alternatively, if shorter names are
desired, normalize to a longer disambiguating slice (e.g., last two path
segments) before calling add.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: d5b2440c-89ae-4e51-85f2-24b65c9e9856

📥 Commits

Reviewing files that changed from the base of the PR and between 01aa995 and 6028eb2.

📒 Files selected for processing (3)
  • cmd/restore.go
  • internal/restore/local.go
  • internal/restore/render.go
