Fix MCP server: use analyze.GetGraph and add fingerprint caching #69
greynewell merged 1 commit into main
Conversation
- `getOrAnalyze` now delegates to `analyze.GetGraph()` instead of creating a zip and calling the stale `client.Analyze` endpoint, consistent with all other commands
- `toolDeadCode` and `toolBlastRadius` now check/write the fingerprint cache before uploading, sharing results with the CLI commands

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
🧹 Nitpick comments (2)
internal/mcp/server.go (2)
254-278: Small code duplication in cache key computation. The cache key is computed identically on lines 256 and 276. If someone changes one and forgets the other, you'd get cache write/read mismatches (writes would go to a different key than reads).
Consider computing the key once and reusing it:
♻️ Suggested refactor to compute key once
```diff
 func (s *server) toolDeadCode(ctx context.Context, args map[string]any) (string, error) {
 	minConfidence, _ := args["min_confidence"].(string)
 	limit := intArg(args, "limit")
-	// Check fingerprint cache (shared with `supermodel dead-code` CLI command).
-	if fp, err := cache.RepoFingerprint(s.dir); err == nil {
-		key := cache.AnalysisKey(fp, fmt.Sprintf("dead-code:%s:%d", minConfidence, limit), build.Version)
+	// Compute fingerprint and cache key once for both read and write.
+	fp, fpErr := cache.RepoFingerprint(s.dir)
+	var cacheKey string
+	if fpErr == nil {
+		cacheKey = cache.AnalysisKey(fp, fmt.Sprintf("dead-code:%s:%d", minConfidence, limit), build.Version)
 		var cached api.DeadCodeResult
-		if hit, _ := cache.GetJSON(key, &cached); hit {
+		if hit, _ := cache.GetJSON(cacheKey, &cached); hit {
 			return formatDeadCode(&cached), nil
 		}
 	}
 	zipPath, hash, err := s.ensureZip()
 	// ... rest of function ...
-	if fp, err := cache.RepoFingerprint(s.dir); err == nil {
-		key := cache.AnalysisKey(fp, fmt.Sprintf("dead-code:%s:%d", minConfidence, limit), build.Version)
-		_ = cache.PutJSON(key, result)
+	if cacheKey != "" {
+		_ = cache.PutJSON(cacheKey, result)
 	}
 	return formatDeadCode(result), nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/mcp/server.go` around lines 254 - 278, The cache key for dead-code analysis is computed twice (using cache.RepoFingerprint(...) and cache.AnalysisKey(...)) which risks read/write mismatches; compute the fingerprint and the key once before calling ensureZip()/client.DeadCode and reuse that same key for cache.GetJSON and cache.PutJSON (referencing functions RepoFingerprint, AnalysisKey, cache.GetJSON, cache.PutJSON and the surrounding dead-code flow that calls ensureZip and client.DeadCode) so the read and write use the identical key.
287-324: Same duplication pattern here; consider extracting the cache key logic. Lines 289-293 and 318-322 build the `analysisType` string identically. The same suggestion applies as for `toolDeadCode`: compute once and reuse.

♻️ Suggested refactor
```diff
 func (s *server) toolBlastRadius(ctx context.Context, args map[string]any) (string, error) {
 	target, _ := args["file"].(string)
-	// Check fingerprint cache (shared with `supermodel blast-radius` CLI command).
-	if fp, err := cache.RepoFingerprint(s.dir); err == nil {
-		analysisType := "impact"
-		if target != "" {
-			analysisType += ":" + target
-		}
-		key := cache.AnalysisKey(fp, analysisType, build.Version)
+	// Compute fingerprint and cache key once for both read and write.
+	fp, fpErr := cache.RepoFingerprint(s.dir)
+	var cacheKey string
+	if fpErr == nil {
+		analysisType := "impact"
+		if target != "" {
+			analysisType += ":" + target
+		}
+		cacheKey = cache.AnalysisKey(fp, analysisType, build.Version)
 		var cached api.ImpactResult
-		if hit, _ := cache.GetJSON(key, &cached); hit {
+		if hit, _ := cache.GetJSON(cacheKey, &cached); hit {
 			return formatImpact(&cached), nil
 		}
 	}
 	// ... zip and API call ...
-	if fp, err := cache.RepoFingerprint(s.dir); err == nil {
-		analysisType := "impact"
-		if target != "" {
-			analysisType += ":" + target
-		}
-		key := cache.AnalysisKey(fp, analysisType, build.Version)
-		_ = cache.PutJSON(key, result)
+	if cacheKey != "" {
+		_ = cache.PutJSON(cacheKey, result)
 	}
 	return formatImpact(result), nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/mcp/server.go` around lines 287 - 324, Extract the duplicated analysisType/key construction into a single variable before the first fingerprint cache check and reuse it for both cache.GetJSON and cache.PutJSON; specifically compute analysisType (with the target suffix) once, then derive key via cache.AnalysisKey(fp, analysisType, build.Version) and use that key in the initial cache hit branch and again after client.Impact returns; update references around cache.RepoFingerprint, cache.AnalysisKey, and the idempotencyKey logic so the analysisType computation is not duplicated.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c787f380-b847-4d8c-8f5f-8fc55c0f91c2
📒 Files selected for processing (1)
internal/mcp/server.go
Summary
- `getOrAnalyze` now delegates to `analyze.GetGraph()` instead of creating a zip and calling the stale `client.Analyze` endpoint — consistent with `focus`, `find`, and other commands that were fixed in fix(focus,find): fix relationship type case, empty CalledBy/Imports, stale API #68
- `toolDeadCode` and `toolBlastRadius` now check/write the fingerprint cache before uploading, sharing cached results with the equivalent CLI commands (`supermodel dead-code`, `supermodel blast-radius`)

Test plan
- `go build ./...` passes
- `go test ./internal/mcp/...` passes
- Run `analyze`, verify it uses the `AnalyzeShards` endpoint (not the stale `Analyze`)
- Call `dead_code` twice — second call should return from cache without re-uploading

🤖 Generated with Claude Code