Bug
filterCompactedEffect reads every message from the DB (via paginated SQL) at the start of each runLoop iteration, then discards pre-compaction messages. For compacted sessions with hundreds of messages, this means re-reading and discarding the same pre-compaction messages on every tool call round-trip.
Details
In prompt.ts:1314, filterCompactedEffect(sessionID) is called every loop step. This calls stream(sessionID) which does ceil(N/50) paginated SQL queries for all N messages in the session. Then filterCompacted() scans forward to find the last compaction boundary and discards everything before it.
For a session with 500 messages (300 pre-compaction, 200 post-compaction), each iteration does ~10 SQL queries but only uses ~200 messages. The pre-compaction messages are read and discarded every time.
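To make the cost concrete, here is an illustrative sketch of the current pattern (not the actual source; the Msg shape and field names are invented): every iteration fetches all N rows, then the forward scan drops the pre-compaction prefix.

```typescript
// Hypothetical message shape; the real schema will differ.
type Msg = { id: number; compaction?: boolean };

// Sketch of the filterCompacted scan: find the last compaction marker,
// keep only what follows it.
function filterCompacted(msgs: Msg[]): Msg[] {
  let start = 0;
  for (let i = 0; i < msgs.length; i++) {
    if (msgs[i].compaction) start = i + 1;
  }
  // Everything before `start` was paged out of the DB only to be discarded.
  return msgs.slice(start);
}

// 500 messages at 50 per page → ceil(500/50) = 10 SQL queries per loop
// step, even though only the ~200 post-compaction rows are used.
```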
Suggested Fix
Cache the compaction boundary (the ID/timestamp of the first post-compaction message) per session and pass it as a since parameter to stream(), which would add a WHERE clause to skip older rows. Invalidate the cache whenever a new compaction is created.
This would reduce DB reads by ~60% per iteration for compacted sessions.