78 changes: 73 additions & 5 deletions packages/typescript/ai-anthropic/src/adapters/text.ts
@@ -61,6 +61,8 @@ type AnthropicContentBlocks =
: never
type AnthropicContentBlock =
AnthropicContentBlocks extends Array<infer Block> ? Block : never
type AnthropicThinkingBlock = { thinking: string; signature: string }
type ThinkingByToolCallBatchKey = Map<string, Array<AnthropicThinkingBlock>>

// ===========================
// Type Resolution Helpers
@@ -119,7 +121,11 @@ export class AnthropicTextAdapter<
options: TextOptions<AnthropicTextProviderOptions>,
): AsyncIterable<StreamChunk> {
try {
const requestParams = this.mapCommonOptionsToAnthropic(options)
const thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()
const requestParams = this.mapCommonOptionsToAnthropic(
options,
thinkingByToolCallBatchKey,
)
Comment on lines +124 to +128
Contributor

@coderabbitai coderabbitai bot Mar 5, 2026


⚠️ Potential issue | 🔴 Critical

Persist thinking cache across turns, not per method call.

Line [124] and Line [168] create a fresh thinkingByToolCallBatchKey each invocation, so data written at Line [777] is dropped before the next tool-loop turn. Then Line [413] can’t rehydrate prior thinking for assistant tool-call messages.

💡 Proposed fix
 export class AnthropicTextAdapter<
@@
 > extends BaseTextAdapter<
@@
 > {
   readonly kind = 'text' as const
   readonly name = 'anthropic' as const

   private client: Anthropic_SDK
+  private thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()

@@
   async *chatStream(
     options: TextOptions<AnthropicTextProviderOptions>,
   ): AsyncIterable<StreamChunk> {
     try {
-      const thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()
       const requestParams = this.mapCommonOptionsToAnthropic(
         options,
-        thinkingByToolCallBatchKey,
+        this.thinkingByToolCallBatchKey,
       )
@@
       yield* this.processAnthropicStream(
         stream,
         options.model,
         () => generateId(this.name),
-        thinkingByToolCallBatchKey,
+        this.thinkingByToolCallBatchKey,
       )
@@
   async structuredOutput(
     options: StructuredOutputOptions<AnthropicTextProviderOptions>,
   ): Promise<StructuredOutputResult<unknown>> {
     const { chatOptions, outputSchema } = options
-    const thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()

     const requestParams = this.mapCommonOptionsToAnthropic(
       chatOptions,
-      thinkingByToolCallBatchKey,
+      this.thinkingByToolCallBatchKey,
     )

Also applies to: 168-173, 413-417, 777-781

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-anthropic/src/adapters/text.ts` around lines 124 -
128, The thinkingByToolCallBatchKey Map is being created per-method-call so
cached thinking is lost between turns; instead make it a persistent field on the
adapter (e.g., this.thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey
initialized once in the class constructor) and remove local new Map() creations
in methods that call mapCommonOptionsToAnthropic, rehydrateThinking, and
wherever it’s currently redeclared; update mapCommonOptionsToAnthropic (and any
callers) to use the instance field (or accept an optional Map) so writes at the
former write site (the code that updates thinkingByToolCallBatchKey) persist
across tool-loop turns and can be read later by rehydrate logic.

Contributor Author


Hm, you're right. I looked into it, and I'm not positive what the right way to do this is.

Contributor


@imsherrill, happy to walk through the tradeoffs. Here's the core problem and the cleanest solution:


Why the instance field is the right approach

The multi-turn tool loop calls chatStream multiple times on the same adapter instance, each time passing the full accumulated message history. The flow looks like:

chatStream(turn 1)  ─→  emits tool_use  ─→  stores thinking in local Map  ─→  Map goes out of scope ❌
chatStream(turn 2)  ─→  formatMessages sees assistant+tool_calls msg  ─→  new empty Map  ─→  lookup fails ❌

Since ModelMessage has no field for thinking-block metadata, the blocks can't travel inside the message objects themselves. The map has to survive between calls, and the adapter instance is the natural home for it.


Revised fix

 export class AnthropicTextAdapter<...> extends BaseTextAdapter<...> {
   readonly kind = 'text' as const
   readonly name = 'anthropic' as const

   private client: Anthropic_SDK
+  private thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()

   // chatStream — remove local map creation
   async *chatStream(options): AsyncIterable<StreamChunk> {
     try {
-      const thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()
       const requestParams = this.mapCommonOptionsToAnthropic(
         options,
-        thinkingByToolCallBatchKey,
+        this.thinkingByToolCallBatchKey,
       )
       // ...
       yield* this.processAnthropicStream(
         stream,
         options.model,
         () => generateId(this.name),
-        thinkingByToolCallBatchKey,
+        this.thinkingByToolCallBatchKey,
       )
     }
   }

   // structuredOutput — remove local map creation
   async structuredOutput(options): Promise<StructuredOutputResult<unknown>> {
     const { chatOptions, outputSchema } = options
-    const thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()
     const requestParams = this.mapCommonOptionsToAnthropic(
       chatOptions,
-      thinkingByToolCallBatchKey,
+      this.thinkingByToolCallBatchKey,
     )
     // ...
   }
 }

Why the alternatives don't work as well

| Approach | Issue |
| --- | --- |
| Local `Map` (current) | Lost between calls — the bug |
| Pass map from caller | Requires API change; caller shouldn't need to know about Anthropic internals |
| Encode into `ModelMessage` | Would require changes to the shared `ModelMessage` type across the whole `@tanstack/ai` package |
| Instance field (proposed) | ✅ Zero API change, natural lifetime for a single conversation |

One thing worth noting: if the same adapter instance is reused across completely independent conversations, stale entries could accumulate. You could add a clearThinkingCache() method or clear the map at the start of a new conversation if that's a concern — but for the typical single-conversation lifecycle this is fine.

  • Move thinkingByToolCallBatchKey to a private instance field initialized to new Map()
  • Remove the local new Map() declarations in chatStream and structuredOutput
  • Update all references to use this.thinkingByToolCallBatchKey
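For illustration, here is a minimal standalone sketch (hypothetical names, not the actual adapter code) of the lifetime difference the bullets describe, including the optional cache-clearing guard mentioned above:

```typescript
// Standalone sketch contrasting a per-call Map with a persistent instance field.
type ThinkingBlock = { thinking: string; signature: string }

class AdapterSketch {
  // Persistent: survives across turns on the same adapter instance.
  private cache = new Map<string, Array<ThinkingBlock>>()

  // Buggy pattern: the map is rebuilt on every call, so nothing carries over.
  turnWithLocalMap(key: string): void {
    const local = new Map<string, Array<ThinkingBlock>>()
    local.set(key, [{ thinking: 'plan the tool call', signature: 'sig' }])
    // `local` goes out of scope when this method returns.
  }

  captureTurn(key: string): void {
    this.cache.set(key, [{ thinking: 'plan the tool call', signature: 'sig' }])
  }

  replayTurn(key: string): Array<ThinkingBlock> | undefined {
    return this.cache.get(key)
  }

  // Optional guard against stale entries when one instance serves many conversations.
  clearThinkingCache(): void {
    this.cache.clear()
  }
}

const adapter = new AdapterSketch()
adapter.turnWithLocalMap('batch-1')
console.log(adapter.replayTurn('batch-1')) // undefined: the local map is gone
adapter.captureTurn('batch-1')
console.log(adapter.replayTurn('batch-1')?.length) // 1: the instance field persists
```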

🧠 Learnings used
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Learnt from: tombeckenham
Repo: TanStack/ai PR: 237
File: packages/typescript/ai-fal/src/adapters/image.ts:39-44
Timestamp: 2026-02-04T01:39:19.784Z
Learning: In the TanStack/ai-fal package, when creating adapter type mappings for 600+ fal.ai models, use mapped types over the known EndpointTypeMap keys with a Record fallback for unknown models: `{ [K in keyof EndpointTypeMap]: ProviderOptions<K> } & Record<string, any>` instead of `Record<FalModel, ProviderOptions<TModel>>` to preserve per-model type safety.

Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters



const stream = await this.client.beta.messages.create(
{ ...requestParams, stream: true },
@@ -129,8 +135,11 @@
},
)

yield* this.processAnthropicStream(stream, options.model, () =>
generateId(this.name),
yield* this.processAnthropicStream(
stream,
options.model,
() => generateId(this.name),
thinkingByToolCallBatchKey,
)
} catch (error: unknown) {
const err = error as Error & { status?: number; code?: string }
@@ -156,8 +165,12 @@
options: StructuredOutputOptions<AnthropicTextProviderOptions>,
): Promise<StructuredOutputResult<unknown>> {
const { chatOptions, outputSchema } = options
const thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey = new Map()

const requestParams = this.mapCommonOptionsToAnthropic(chatOptions)
const requestParams = this.mapCommonOptionsToAnthropic(
chatOptions,
thinkingByToolCallBatchKey,
)

// Create a tool that will capture the structured output
// Anthropic's SDK requires input_schema with type: 'object' literal
@@ -232,12 +245,16 @@

private mapCommonOptionsToAnthropic(
options: TextOptions<AnthropicTextProviderOptions>,
thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey,
) {
const modelOptions = options.modelOptions as
| InternalTextProviderOptions
| undefined

const formattedMessages = this.formatMessages(options.messages)
const formattedMessages = this.formatMessages(
options.messages,
thinkingByToolCallBatchKey,
)
const tools = options.tools
? convertToolsToProviderFormat(options.tools)
: undefined
Expand Down Expand Up @@ -294,6 +311,10 @@ export class AnthropicTextAdapter<
return requestParams
}

private getToolCallBatchKey(toolCallIds: Array<string>): string {
return toolCallIds.join('|')
}
Comment on lines +314 to +316
Contributor


⚠️ Potential issue | 🟡 Minor

Use a deterministic and collision-safe tool-call batch key.

Line [315] uses join('|'), which is delimiter/order sensitive. If tool-call order differs between capture (Line [770]) and replay (Line [414]), thinking lookup will miss.

💡 Proposed fix
   private getToolCallBatchKey(toolCallIds: Array<string>): string {
-    return toolCallIds.join('|')
+    return JSON.stringify([...toolCallIds].sort())
   }

Also applies to: 413-417, 770-779

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-anthropic/src/adapters/text.ts` around lines 314 -
316, The current getToolCallBatchKey uses a simple join which is order- and
delimiter-sensitive; change getToolCallBatchKey to produce a deterministic,
collision-resistant key by first sorting the toolCallIds array, then computing a
stable representation (e.g., JSON.stringify(sortedIds)) and returning a
cryptographic digest (e.g., SHA-256 hex) of that representation; update all call
sites that build or compare batch keys (the capture and replay usages referenced
alongside getToolCallBatchKey) to use this function so lookups are
order-independent and collision-safe.
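To make the failure modes concrete, here is a small standalone comparison of the two key schemes (illustrative IDs; the SHA-256 digest step suggested in the agent prompt is omitted for brevity):

```typescript
// Naive key from the current code: order- and delimiter-sensitive.
const naiveKey = (ids: Array<string>) => ids.join('|')

// Proposed key: sorted then JSON-encoded, so ordering and embedded '|' don't matter.
const batchKey = (ids: Array<string>) => JSON.stringify([...ids].sort())

// Order sensitivity: capture and replay order can differ between turns.
console.log(naiveKey(['toolu_a', 'toolu_b']) === naiveKey(['toolu_b', 'toolu_a'])) // false
console.log(batchKey(['toolu_a', 'toolu_b']) === batchKey(['toolu_b', 'toolu_a'])) // true

// Delimiter collision: two different batches produce the same naive key.
console.log(naiveKey(['a|b']) === naiveKey(['a', 'b'])) // true (collision)
console.log(batchKey(['a|b']) === batchKey(['a', 'b'])) // false
```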


private convertContentPartToAnthropic(
part: ContentPart,
): TextBlockParam | ImageBlockParam | DocumentBlockParam {
@@ -365,6 +386,7 @@

private formatMessages(
messages: Array<ModelMessage>,
thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey,
): InternalTextProviderOptions['messages'] {
const formattedMessages: InternalTextProviderOptions['messages'] = []

@@ -388,6 +410,21 @@

if (role === 'assistant' && message.toolCalls?.length) {
const contentBlocks: AnthropicContentBlocks = []
const preservedThinkingBlocks = thinkingByToolCallBatchKey.get(
this.getToolCallBatchKey(
message.toolCalls.map((toolCall) => toolCall.id),
),
)

if (preservedThinkingBlocks) {
for (const thinkingBlock of preservedThinkingBlocks) {
contentBlocks.push({
type: 'thinking',
thinking: thinkingBlock.thinking,
signature: thinkingBlock.signature,
} as unknown as AnthropicContentBlock)
}
}

if (message.content) {
const content =
@@ -525,6 +562,7 @@
stream: AsyncIterable<Anthropic_SDK.Beta.BetaRawMessageStreamEvent>,
model: string,
genId: () => string,
thinkingByToolCallBatchKey: ThinkingByToolCallBatchKey,
): AsyncIterable<StreamChunk> {
let accumulatedContent = ''
let accumulatedThinking = ''
@@ -533,6 +571,7 @@
number,
{ id: string; name: string; input: string; started: boolean }
>()
const completedThinkingBlocks: Array<AnthropicThinkingBlock> = []
let currentToolIndex = -1

// AG-UI lifecycle tracking
@@ -544,6 +583,7 @@
let hasEmittedRunFinished = false
// Track current content block type for proper content_block_stop handling
let currentBlockType: string | null = null
let currentThinkingBlock: AnthropicThinkingBlock | null = null

try {
for await (const event of stream) {
@@ -570,6 +610,10 @@
})
} else if (event.content_block.type === 'thinking') {
accumulatedThinking = ''
currentThinkingBlock = {
thinking: '',
signature: '',
}
// Emit STEP_STARTED for thinking
stepId = genId()
yield {
@@ -607,6 +651,9 @@
} else if (event.delta.type === 'thinking_delta') {
const delta = event.delta.thinking
accumulatedThinking += delta
if (currentThinkingBlock) {
currentThinkingBlock.thinking += delta
}
yield {
type: 'STEP_FINISHED',
stepId: stepId || genId(),
@@ -615,6 +662,10 @@
delta,
content: accumulatedThinking,
}
} else if (event.delta.type === 'signature_delta') {
if (currentThinkingBlock) {
currentThinkingBlock.signature = event.delta.signature
}
} else if (event.delta.type === 'input_json_delta') {
const existing = toolCallsMap.get(currentToolIndex)
if (existing) {
@@ -681,6 +732,11 @@
// Reset so a new TEXT_MESSAGE_START is emitted if text follows tool calls
hasEmittedTextMessageStart = false
}
} else if (currentBlockType === 'thinking') {
if (currentThinkingBlock) {
completedThinkingBlocks.push(currentThinkingBlock)
currentThinkingBlock = null
}
} else {
// Emit TEXT_MESSAGE_END only for text blocks (not tool_use blocks)
if (hasEmittedTextMessageStart && accumulatedContent) {
@@ -711,6 +767,18 @@
hasEmittedRunFinished = true
switch (event.delta.stop_reason) {
case 'tool_use': {
const toolCallIds = Array.from(toolCallsMap.values()).map(
(toolCall) => toolCall.id,
)
if (
completedThinkingBlocks.length > 0 &&
toolCallIds.length > 0
) {
thinkingByToolCallBatchKey.set(
this.getToolCallBatchKey(toolCallIds),
completedThinkingBlocks,
)
}
yield {
type: 'RUN_FINISHED',
runId,