fix(anthropic): preserve thinking blocks across tool-call turns #336

Closed

imsherrill wants to merge 1 commit into TanStack:main from imsherrill:anthropic-thinking-tool-loop-context

Conversation

@imsherrill
Contributor

@imsherrill imsherrill commented Mar 5, 2026

🎯 Changes

Preserves Anthropic thinking blocks (including signatures) across assistant tool-call turns in the text adapter so multi-turn loops keep required thinking context.

Captures completed thinking blocks during streaming, stores them for the tool-call batch in that turn, and rehydrates them when formatting the corresponding assistant tool-call message.

Example (before vs after)

Before
Turn 1: [thinking#1] -> tool_use
Turn 2: [no preserved thinking] -> tool_use
Turn 3: [no preserved thinking] -> final answer
After
Turn 1: [thinking#1] -> tool_use
Turn 2: [thinking#2] -> tool_use
Turn 3: [thinking#3] -> final answer

In short: before, loops could effectively keep ~1 thinking block; after, thinking continuity carries across all tool-call iterations.
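As a rough sketch of the capture-and-rehydrate flow described above (the type and function names here are illustrative placeholders, not the actual internals of `text.ts`):

```typescript
// Hypothetical, simplified model of the mechanism: thinking blocks completed
// during streaming are indexed by their tool-call batch, then replayed when
// the matching assistant tool-call message is formatted on a later turn.
interface AnthropicThinkingBlock {
  type: 'thinking'
  thinking: string
  signature: string
}

type ThinkingByToolCallBatchKey = Map<string, Array<AnthropicThinkingBlock>>

const store: ThinkingByToolCallBatchKey = new Map()

// Capture: when a streamed turn finishes with tool calls, store its
// completed thinking blocks under the batch of tool-call IDs.
function captureThinking(
  toolCallIds: Array<string>,
  blocks: Array<AnthropicThinkingBlock>,
): void {
  store.set(toolCallIds.join('|'), blocks)
}

// Rehydrate: when formatting the assistant tool-call message on the next
// turn, replay the preserved thinking blocks before the tool_use content.
function rehydrateThinking(
  toolCallIds: Array<string>,
): Array<AnthropicThinkingBlock> {
  return store.get(toolCallIds.join('|')) ?? []
}

captureThinking(['toolu_1'], [
  { type: 'thinking', thinking: 'plan the next call', signature: 'sig-abc' },
])
console.log(rehydrateThinking(['toolu_1']).length) // 1
```

With this in place, each tool-call iteration can re-attach its own thinking block rather than only the first one surviving.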

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Carry thinking blocks (with signatures) forward for assistant tool-call batches so subsequent Anthropic loop iterations retain required context.
@coderabbitai
Contributor

coderabbitai bot commented Mar 5, 2026

📝 Walkthrough

Walkthrough

This PR introduces a thinking-by-tool-call tracking mechanism to the Anthropic text adapter. It captures thinking blocks generated during streaming, indexes them by tool-call batches, and replays them when formatting assistant messages to preserve reasoning context alongside tool invocations.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Thinking-by-Tool-Call Mechanism**<br/>`packages/typescript/ai-anthropic/src/adapters/text.ts` | Adds `AnthropicThinkingBlock` and `ThinkingByToolCallBatchKey` types. Propagates thinking tracking through streaming and message formatting flows. Accumulates thinking deltas during streaming, indexes completed thinking blocks by tool-call batch ID, and replays preserved thinking blocks into Anthropic content before `tool_use` blocks when formatting assistant messages. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Stream as Streaming Input
    participant Adapter as Text Adapter
    participant ThinkingStore as Thinking Store<br/>(Map by Batch)
    participant Formatter as Message Formatter

    Stream->>Adapter: Emit thinking deltas
    Adapter->>Adapter: Accumulate into currentThinkingBlock
    Stream->>Adapter: Emit tool_use content (tool-call batch)
    Adapter->>Adapter: Mark batch complete
    Adapter->>ThinkingStore: Store completed thinking blocks<br/>(indexed by batch key)
    Note over ThinkingStore: Map[batchKey] = [thinking blocks]

    Formatter->>Formatter: Prepare assistant message<br/>with tool calls
    Formatter->>ThinkingStore: Lookup thinking blocks by batch
    ThinkingStore-->>Formatter: Return preserved thinking blocks
    Formatter->>Formatter: Inject thinking blocks<br/>before tool_use in content
    Formatter-->>Formatter: Return formatted message
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 Whiskers twitching with delight...

A rabbit's thoughts now safely keep,
Before each tool they call and leap,
Indexed by batch, preserved with care—
Thinking threads woven through the air! 🧵✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
| Title check | ✅ Passed | The title clearly and specifically describes the main change: preserving Anthropic thinking blocks across tool-call turns, which directly aligns with the primary objective of the changeset. |
| Description check | ✅ Passed | The description covers all required template sections with substantive content: the Changes section explains the feature with clear before/after examples, and the Checklist and Release Impact sections are completed, though one item (local testing) remains unchecked. |


@imsherrill imsherrill changed the title Preserve Anthropic thinking blocks across tool-call turns fix(anthropic): preserve thinking blocks across tool-call turns Mar 5, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
packages/typescript/ai-anthropic/src/adapters/text.ts (1)

561-846: Consider centralizing stream state management with StreamProcessor.

The current implementation maintains multiple mutable flags (hasEmittedRunStarted, hasEmittedTextMessageStart, hasEmittedRunFinished) and manual state tracking (toolCallsMap, currentToolIndex, thinking blocks). StreamProcessor is designed to handle this pattern—it centralizes chunk parsing, state transitions, and provides built-in support for partial JSON parsing and tool call lifecycle management. Refactoring to use StreamProcessor.process() would reduce manual flag management and consolidate the state machine logic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-anthropic/src/adapters/text.ts` around lines 561 -
846, The processAnthropicStream function duplicates manual streaming state and
flags; refactor it to use StreamProcessor.process to centralize state
transitions and partial JSON/tool-call handling: replace the for-await loop and
manual flags (hasEmittedRunStarted, hasEmittedTextMessageStart,
hasEmittedRunFinished), toolCallsMap/currentToolIndex, and thinking block logic
with StreamProcessor.process handlers that emit the same chunk types
(RUN_STARTED, TEXT_MESSAGE_START/CONTENT/END, TOOL_CALL_START/ARGS/END,
STEP_STARTED/FINISHED, RUN_FINISHED, RUN_ERROR) and preserve behavior around
thinkingByToolCallBatchKey by calling this.getToolCallBatchKey(toolCallIds) and
setting the batch when a tool_use finish occurs; ensure error handling maps to
the same RUN_ERROR shape and reuse genId(), model, messageId/runId/stepId
generation as before.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2ad9173b-13c2-44c3-885b-bbfe6167437b

📥 Commits

Reviewing files that changed from the base of the PR and between 228e0e3 and 86b37cb.

📒 Files selected for processing (1)
  • packages/typescript/ai-anthropic/src/adapters/text.ts

Comment on lines +124 to +128

```typescript
const thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()
const requestParams = this.mapCommonOptionsToAnthropic(
  options,
  thinkingByToolCallBatchKey,
)
```

@coderabbitai coderabbitai bot Mar 5, 2026


⚠️ Potential issue | 🔴 Critical

Persist thinking cache across turns, not per method call.

Line [124] and Line [168] create a fresh thinkingByToolCallBatchKey each invocation, so data written at Line [777] is dropped before the next tool-loop turn. Then Line [413] can’t rehydrate prior thinking for assistant tool-call messages.

💡 Proposed fix

```diff
 export class AnthropicTextAdapter<
@@
 > extends BaseTextAdapter<
@@
 > {
   readonly kind = 'text' as const
   readonly name = 'anthropic' as const

   private client: Anthropic_SDK
+  private thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()

@@
   async *chatStream(
     options: TextOptions<AnthropicTextProviderOptions>,
   ): AsyncIterable<StreamChunk> {
     try {
-      const thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()
       const requestParams = this.mapCommonOptionsToAnthropic(
         options,
-        thinkingByToolCallBatchKey,
+        this.thinkingByToolCallBatchKey,
       )
@@
       yield* this.processAnthropicStream(
         stream,
         options.model,
         () => generateId(this.name),
-        thinkingByToolCallBatchKey,
+        this.thinkingByToolCallBatchKey,
       )
@@
   async structuredOutput(
     options: StructuredOutputOptions<AnthropicTextProviderOptions>,
   ): Promise<StructuredOutputResult<unknown>> {
     const { chatOptions, outputSchema } = options
-    const thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()

     const requestParams = this.mapCommonOptionsToAnthropic(
       chatOptions,
-      thinkingByToolCallBatchKey,
+      this.thinkingByToolCallBatchKey,
     )
```

Also applies to: 168-173, 413-417, 777-781

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-anthropic/src/adapters/text.ts` around lines 124 -
128, The thinkingByToolCallBatchKey Map is being created per-method-call so
cached thinking is lost between turns; instead make it a persistent field on the
adapter (e.g., this.thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey
initialized once in the class constructor) and remove local new Map() creations
in methods that call mapCommonOptionsToAnthropic, rehydrateThinking, and
wherever it’s currently redeclared; update mapCommonOptionsToAnthropic (and any
callers) to use the instance field (or accept an optional Map) so writes at the
former write site (the code that updates thinkingByToolCallBatchKey) persist
across tool-loop turns and can be read later by rehydrate logic.

Contributor Author

Hm, you're right. I looked into it; I'm not positive what the right way to do this is.

Contributor

@imsherrill, happy to walk through the tradeoffs. Here's the core problem and the cleanest solution:


Why the instance field is the right approach

The multi-turn tool loop calls chatStream multiple times on the same adapter instance, each time passing the full accumulated message history. The flow looks like:

chatStream(turn 1)  ─→  emits tool_use  ─→  stores thinking in local Map  ─→  Map goes out of scope ❌
chatStream(turn 2)  ─→  formatMessages sees assistant+tool_calls msg  ─→  new empty Map  ─→  lookup fails ❌

Since ModelMessage has no field for thinking-block metadata, the blocks can't travel inside the message objects themselves. The map has to survive between calls, and the adapter instance is the natural home for it.


Revised fix

```diff
 export class AnthropicTextAdapter<...> extends BaseTextAdapter<...> {
   readonly kind = 'text' as const
   readonly name = 'anthropic' as const

   private client: Anthropic_SDK
+  private thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()

   // chatStream — remove local map creation
   async *chatStream(options): AsyncIterable<StreamChunk> {
     try {
-      const thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()
       const requestParams = this.mapCommonOptionsToAnthropic(
         options,
-        thinkingByToolCallBatchKey,
+        this.thinkingByToolCallBatchKey,
       )
       // ...
       yield* this.processAnthropicStream(
         stream,
         options.model,
         () => generateId(this.name),
-        thinkingByToolCallBatchKey,
+        this.thinkingByToolCallBatchKey,
       )
     }
   }

   // structuredOutput — remove local map creation
   async structuredOutput(options): Promise<StructuredOutputResult<unknown>> {
     const { chatOptions, outputSchema } = options
-    const thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()
     const requestParams = this.mapCommonOptionsToAnthropic(
       chatOptions,
-      thinkingByToolCallBatchKey,
+      this.thinkingByToolCallBatchKey,
     )
     // ...
   }
 }
```

Why the alternatives don't work as well

| Approach | Issue |
| --- | --- |
| Local `Map` (current) | Lost between calls — the bug |
| Pass map from caller | Requires API change; caller shouldn't need to know about Anthropic internals |
| Encode into `ModelMessage` | Would require changes to the shared `ModelMessage` type across the whole `@tanstack/ai` package |
| Instance field (proposed) | ✅ Zero API change, natural lifetime for a single conversation |

One thing worth noting: if the same adapter instance is reused across completely independent conversations, stale entries could accumulate. You could add a clearThinkingCache() method or clear the map at the start of a new conversation if that's a concern — but for the typical single-conversation lifecycle this is fine.

  • Move thinkingByToolCallBatchKey to a private instance field initialized to new Map()
  • Remove the local new Map() declarations in chatStream and structuredOutput
  • Update all references to use this.thinkingByToolCallBatchKey
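A minimal sketch of that instance-field lifetime, including the optional cache reset mentioned above. The class and method names here are hypothetical, simplified from the suggestion rather than copied from the real adapter:

```typescript
// Hypothetical sketch: a map held as an instance field survives across
// calls on the same instance, so turn 2 can look up what turn 1 stored.
type ThinkingBlock = { type: 'thinking'; thinking: string; signature: string }

class AdapterSketch {
  private thinkingByToolCallBatchKey = new Map<string, Array<ThinkingBlock>>()

  store(batchKey: string, blocks: Array<ThinkingBlock>): void {
    this.thinkingByToolCallBatchKey.set(batchKey, blocks)
  }

  lookup(batchKey: string): Array<ThinkingBlock> | undefined {
    return this.thinkingByToolCallBatchKey.get(batchKey)
  }

  // Clear stale entries when reusing the instance for an unrelated conversation.
  clearThinkingCache(): void {
    this.thinkingByToolCallBatchKey.clear()
  }
}

const adapter = new AdapterSketch()
// "Turn 1" writes, "turn 2" reads the same instance field:
adapter.store('toolu_1', [{ type: 'thinking', thinking: 'step', signature: 's' }])
console.log(adapter.lookup('toolu_1')?.length) // 1
adapter.clearThinkingCache()
console.log(adapter.lookup('toolu_1')) // undefined
```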

🧠 Learnings used
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Learnt from: tombeckenham
Repo: TanStack/ai PR: 237
File: packages/typescript/ai-fal/src/adapters/image.ts:39-44
Timestamp: 2026-02-04T01:39:19.784Z
Learning: In the TanStack/ai-fal package, when creating adapter type mappings for 600+ fal.ai models, use mapped types over the known EndpointTypeMap keys with a Record fallback for unknown models: `{ [K in keyof EndpointTypeMap]: ProviderOptions<K> } & Record<string, any>` instead of `Record<FalModel, ProviderOptions<TModel>>` to preserve per-model type safety.

Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters


Comment on lines +314 to +316

```typescript
private getToolCallBatchKey(toolCallIds: Array<string>): string {
  return toolCallIds.join('|')
}
```

⚠️ Potential issue | 🟡 Minor

Use a deterministic and collision-safe tool-call batch key.

Line [315] uses join('|'), which is delimiter/order sensitive. If tool-call order differs between capture (Line [770]) and replay (Line [414]), thinking lookup will miss.

💡 Proposed fix

```diff
   private getToolCallBatchKey(toolCallIds: Array<string>): string {
-    return toolCallIds.join('|')
+    return JSON.stringify([...toolCallIds].sort())
   }
```

Also applies to: 413-417, 770-779
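To see why the sorted key fixes the order sensitivity, here is a tiny standalone check. Only the `getToolCallBatchKey` body is taken from the suggestion above; the comparison code is illustrative:

```typescript
// Sorting before serializing makes the key independent of tool-call order,
// so capture and replay agree even if the IDs arrive in different orders.
function getToolCallBatchKey(toolCallIds: Array<string>): string {
  return JSON.stringify([...toolCallIds].sort())
}

const captureKey = getToolCallBatchKey(['toolu_b', 'toolu_a'])
const replayKey = getToolCallBatchKey(['toolu_a', 'toolu_b'])
console.log(captureKey === replayKey) // true

// The naive join('|') key differs between the two orderings:
console.log(
  ['toolu_b', 'toolu_a'].join('|') === ['toolu_a', 'toolu_b'].join('|'),
) // false
```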

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-anthropic/src/adapters/text.ts` around lines 314 -
316, The current getToolCallBatchKey uses a simple join which is order- and
delimiter-sensitive; change getToolCallBatchKey to produce a deterministic,
collision-resistant key by first sorting the toolCallIds array, then computing a
stable representation (e.g., JSON.stringify(sortedIds)) and returning a
cryptographic digest (e.g., SHA-256 hex) of that representation; update all call
sites that build or compare batch keys (the capture and replay usages referenced
alongside getToolCallBatchKey) to use this function so lookups are
order-independent and collision-safe.
