
Fix MCP server: use analyze.GetGraph and add fingerprint caching #69

Merged

greynewell merged 1 commit into main from fix/mcp-server-caching on Apr 8, 2026

Conversation

@greynewell (Contributor) commented Apr 8, 2026

Summary

  • getOrAnalyze now delegates to analyze.GetGraph() instead of creating a zip and calling the stale client.Analyze endpoint. This matches focus, find, and the other commands fixed in #68 (fix(focus,find): fix relationship type case, empty CalledBy/Imports, stale API)
  • toolDeadCode and toolBlastRadius now check and write the fingerprint cache before uploading, sharing cached results with the equivalent CLI commands (supermodel dead-code, supermodel blast-radius)
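The check-before-upload behavior described above is a standard get-or-compute flow. Here is a minimal, self-contained sketch of that flow; the in-memory map, `memCache`, `deadCodeResult`, and `getOrCompute` are illustrative stand-ins and not the actual supermodel cache API:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// memCache stands in for the on-disk fingerprint cache; the real
// GetJSON/PutJSON implementations are not shown in this PR.
type memCache map[string][]byte

func (c memCache) GetJSON(key string, out any) (bool, error) {
	raw, ok := c[key]
	if !ok {
		return false, nil
	}
	return true, json.Unmarshal(raw, out)
}

func (c memCache) PutJSON(key string, v any) error {
	raw, err := json.Marshal(v)
	if err != nil {
		return err
	}
	c[key] = raw
	return nil
}

// deadCodeResult is a placeholder for the real analysis result type.
type deadCodeResult struct {
	Findings int `json:"findings"`
}

// getOrCompute returns a cached result when present; otherwise it runs
// the expensive analysis once and writes the result back to the cache.
func getOrCompute(c memCache, key string, analyze func() deadCodeResult) deadCodeResult {
	var cached deadCodeResult
	if hit, _ := c.GetJSON(key, &cached); hit {
		return cached
	}
	result := analyze()
	_ = c.PutJSON(key, result)
	return result
}

func main() {
	c := memCache{}
	calls := 0
	analyze := func() deadCodeResult { calls++; return deadCodeResult{Findings: 3} }
	getOrCompute(c, "dead-code:high:50", analyze)
	getOrCompute(c, "dead-code:high:50", analyze) // served from cache
	fmt.Println("analysis calls:", calls)         // prints: analysis calls: 1
}
```

The second call hits the cache, so the expensive analysis (in the real code, zipping and uploading the repo) runs only once per fingerprint/key.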

Test plan

  • go build ./... passes
  • go test ./internal/mcp/... passes
  • Manual: start MCP session, call analyze, verify it uses AnalyzeShards endpoint (not stale Analyze)
  • Manual: call dead_code twice — second call should return from cache without re-uploading

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Refactor
    • Enhanced caching of analysis results to improve performance and reduce redundant processing on repeated queries.

- `getOrAnalyze` now delegates to `analyze.GetGraph()` instead of
  creating a zip and calling the stale `client.Analyze` endpoint,
  consistent with all other commands
- `toolDeadCode` and `toolBlastRadius` now check/write the fingerprint
  cache before uploading, sharing results with the CLI commands

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

coderabbitai bot commented Apr 8, 2026

Walkthrough

Refactored internal/mcp/server.go to add fingerprint-based JSON caching with build version keys to toolDeadCode and toolBlastRadius, and consolidated graph-retrieval logic in getOrAnalyze to use a shared analyze.GetGraph() call. Replaced internal/ui import with internal/analyze and internal/build imports.

Changes

  • MCP Server Refactoring (internal/mcp/server.go): Introduced fingerprint-based JSON cache lookup (keyed by dead-code:<min_confidence>:<limit> and impact[:<target>] plus build.Version) in toolDeadCode and toolBlastRadius; moved parameter extraction before the cache checks; refactored getOrAnalyze to delegate graph retrieval to analyze.GetGraph() instead of local zip/hash logic; replaced the internal/ui import with internal/analyze and internal/build.
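The key formats above can be sketched as plain Go. The analysis-type strings come from this summary, but the sha256 hashing inside `analysisKey` and the `buildVersion` value are assumptions for illustration, not the real cache.AnalysisKey implementation:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// buildVersion stands in for build.Version (hypothetical value).
const buildVersion = "v0.0.0-dev"

// analysisKey hashes the repo fingerprint, analysis type, and build
// version into one hex key. The hashing scheme is an assumption; only
// the three key inputs come from the PR description.
func analysisKey(fingerprint, analysisType, version string) string {
	h := sha256.Sum256([]byte(fingerprint + "|" + analysisType + "|" + version))
	return hex.EncodeToString(h[:])
}

// deadCodeKey builds the dead-code:<min_confidence>:<limit> key.
func deadCodeKey(fp, minConfidence string, limit int) string {
	return analysisKey(fp, fmt.Sprintf("dead-code:%s:%d", minConfidence, limit), buildVersion)
}

// impactKey builds the impact[:<target>] key; the target suffix is
// appended only when a target file was given.
func impactKey(fp, target string) string {
	analysisType := "impact"
	if target != "" {
		analysisType += ":" + target
	}
	return analysisKey(fp, analysisType, buildVersion)
}

func main() {
	fp := "deadbeef" // placeholder repo fingerprint
	fmt.Println(deadCodeKey(fp, "high", 50))
	fmt.Println(impactKey(fp, "internal/mcp/server.go"))
	fmt.Println(impactKey(fp, "") == impactKey(fp, "")) // deterministic: prints true
}
```

Because build.Version is part of the key, results cached by one binary are never served to a newer build whose analysis output may differ.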

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


Poem

🗂️ Cache keys whisper of fingerprints and builds,
Old zips retire as analyze fulfills,
Each handler now cleaner, each lookup more swift,
Dead code and blast radius get a fast lift! ⚡

🚥 Pre-merge checks: ✅ 3 passed

  • Title check: Passed. The title clearly and concisely summarizes the main changes: switching to analyze.GetGraph and adding fingerprint caching to the MCP server.
  • Description check: Passed. The description covers the key changes (the getOrAnalyze refactor and fingerprint caching) and includes a test plan, though one manual test is incomplete.
  • Docstring Coverage: Passed. No functions found in the changed files to evaluate; docstring coverage check skipped.



@coderabbitai bot left a comment


🧹 Nitpick comments (2)
internal/mcp/server.go (2)

254-278: Small code duplication in cache key computation.

The cache key is computed identically on lines 256 and 276. If someone changes one and forgets the other, you'd get cache write/read mismatches (writes would go to a different key than reads).

Consider computing the key once and reusing it:

♻️ Suggested refactor to compute key once
 func (s *server) toolDeadCode(ctx context.Context, args map[string]any) (string, error) {
 	minConfidence, _ := args["min_confidence"].(string)
 	limit := intArg(args, "limit")
 
-	// Check fingerprint cache (shared with `supermodel dead-code` CLI command).
-	if fp, err := cache.RepoFingerprint(s.dir); err == nil {
-		key := cache.AnalysisKey(fp, fmt.Sprintf("dead-code:%s:%d", minConfidence, limit), build.Version)
+	// Compute fingerprint and cache key once for both read and write.
+	fp, fpErr := cache.RepoFingerprint(s.dir)
+	var cacheKey string
+	if fpErr == nil {
+		cacheKey = cache.AnalysisKey(fp, fmt.Sprintf("dead-code:%s:%d", minConfidence, limit), build.Version)
 		var cached api.DeadCodeResult
-		if hit, _ := cache.GetJSON(key, &cached); hit {
+		if hit, _ := cache.GetJSON(cacheKey, &cached); hit {
 			return formatDeadCode(&cached), nil
 		}
 	}
 
 	zipPath, hash, err := s.ensureZip()
 	// ... rest of function ...
 
-	if fp, err := cache.RepoFingerprint(s.dir); err == nil {
-		key := cache.AnalysisKey(fp, fmt.Sprintf("dead-code:%s:%d", minConfidence, limit), build.Version)
-		_ = cache.PutJSON(key, result)
+	if cacheKey != "" {
+		_ = cache.PutJSON(cacheKey, result)
 	}
 
 	return formatDeadCode(result), nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/mcp/server.go` around lines 254 - 278, The cache key for dead-code
analysis is computed twice (using cache.RepoFingerprint(...) and
cache.AnalysisKey(...)) which risks read/write mismatches; compute the
fingerprint and the key once before calling ensureZip()/client.DeadCode and
reuse that same key for cache.GetJSON and cache.PutJSON (referencing functions
RepoFingerprint, AnalysisKey, cache.GetJSON, cache.PutJSON and the surrounding
dead-code flow that calls ensureZip and client.DeadCode) so the read and write
use the identical key.

287-324: Same duplication pattern here - consider extracting the cache key logic.

Lines 289-293 and 318-322 build the analysisType string identically. Same suggestion applies as for toolDeadCode - compute once and reuse.

♻️ Suggested refactor
 func (s *server) toolBlastRadius(ctx context.Context, args map[string]any) (string, error) {
 	target, _ := args["file"].(string)
 
-	// Check fingerprint cache (shared with `supermodel blast-radius` CLI command).
-	if fp, err := cache.RepoFingerprint(s.dir); err == nil {
-		analysisType := "impact"
-		if target != "" {
-			analysisType += ":" + target
-		}
-		key := cache.AnalysisKey(fp, analysisType, build.Version)
+	// Compute fingerprint and cache key once for both read and write.
+	fp, fpErr := cache.RepoFingerprint(s.dir)
+	var cacheKey string
+	if fpErr == nil {
+		analysisType := "impact"
+		if target != "" {
+			analysisType += ":" + target
+		}
+		cacheKey = cache.AnalysisKey(fp, analysisType, build.Version)
 		var cached api.ImpactResult
-		if hit, _ := cache.GetJSON(key, &cached); hit {
+		if hit, _ := cache.GetJSON(cacheKey, &cached); hit {
 			return formatImpact(&cached), nil
 		}
 	}
 
 	// ... zip and API call ...
 
-	if fp, err := cache.RepoFingerprint(s.dir); err == nil {
-		analysisType := "impact"
-		if target != "" {
-			analysisType += ":" + target
-		}
-		key := cache.AnalysisKey(fp, analysisType, build.Version)
-		_ = cache.PutJSON(key, result)
+	if cacheKey != "" {
+		_ = cache.PutJSON(cacheKey, result)
 	}
 
 	return formatImpact(result), nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/mcp/server.go` around lines 287 - 324, Extract the duplicated
analysisType/key construction into a single variable before the first
fingerprint cache check and reuse it for both cache.GetJSON and cache.PutJSON;
specifically compute analysisType (with the target suffix) once, then derive key
via cache.AnalysisKey(fp, analysisType, build.Version) and use that key in the
initial cache hit branch and again after client.Impact returns; update
references around cache.RepoFingerprint, cache.AnalysisKey, and the
idempotencyKey logic so the analysisType computation is not duplicated.


📥 Commits

Reviewing files that changed from the base of the PR and between 525fe68 and 1bf55b8.

📒 Files selected for processing (1)
  • internal/mcp/server.go

@greynewell greynewell merged commit e6e4961 into main Apr 8, 2026
7 checks passed
@greynewell greynewell deleted the fix/mcp-server-caching branch April 8, 2026 19:52
