Problem
The `watch` daemon cannot load the shard cache written by `analyze`. Two symptoms were observed during E2E testing against the production API (v0.12.0):
Symptom 1: Daemon loads 1 node from a 2008-node cache
```bash
$ supermodel analyze --force
✓ Wrote 126 shards for 134 source files (2008 nodes, 4490 relationships)
$ python3 -c "import json; r=json.load(open('.supermodel/shards.json')); print(len(r['graph']['nodes']))"
2008 # cache file has 2008 nodes
$ supermodel watch
[supermodel] Loaded existing cache (1 nodes, 0 relationships)
[supermodel] Rendered 0 shards for 0 source files
```
Go JSON unmarshalling of the same file works correctly (tested with a standalone Go program — reads 2008 nodes). The daemon's `loadOrGenerate` reads the file and reports 1 node.
Symptom 2: Daemon's own full generate returns minimal graph
When started with no cache, the daemon's `fullGenerate` calls `AnalyzeShards` against production and receives a graph with only 1 node and 0 relationships, while `analyze` hitting the same endpoint returns 2008 nodes.
What works
- `supermodel analyze` against production v0.12.0 — returns 2008 nodes, 4490 relationships, 5 domains, 126 shards rendered correctly.
- Shards have correct deps, calls, and impact sections.
- The `//go:build ignore` tag is present on all `.graph.go` files.
- `go build` compiles cleanly with shards present.
What's blocked
The `watch` daemon can't establish a valid cache, which means:
- Incremental updates can't be tested against production
- The full E2E flow (analyze → watch → hook → incremental → merge → render) can't be validated
Environment
- CLI: built from latest `main` (commit `161c66e`)
- API: production v0.12.0
- Repo: `supermodeltools/cli` (~130 Go files)
Investigation so far
- The `.supermodel/shards.json` file written by `analyze` contains valid JSON with 2008 nodes (verified with Python and standalone Go)
- The daemon's `loadOrGenerate` uses `json.Unmarshal` into `api.ShardIR` — same type `analyze` uses
- The `ShardIR.Graph.Nodes` field is `[]Node` and the JSON uses `labels` arrays (not `type` field) — consistent with the type definition
- The path resolution (`filepath.Abs(".")`) appears correct — both `analyze` and `watch` should resolve to the same directory
Possible causes
- The daemon's `fullGenerate` creates a different zip than `analyze` (different file filtering, different archive method)
- The daemon overwrites the cache with its own 1-node result before emitting the log message
- There's a deserialization difference between how `Generate` and `loadOrGenerate` parse `ShardIR`
- The cache file path resolves differently at runtime (a `./watch/` subdirectory was observed in one test run)