
Bug: OpenAI responses API does not send previous_response_id in multi-turn sessions #20847

@wildoranges

Description

This issue originally reported a gap in OpenCode's Responses caching chain: previous_response_id was not persisted and reused across turns.

That part is now fixed in PR #20848, but we also found a second cache-hit gap:

For custom providers using npm: "@ai-sdk/openai" (with a non-openai provider ID), promptCacheKey was not always set, because the old condition only matched providerID === "openai" (or a manual setCacheKey).

As a result, requests could carry previous_response_id but still miss stable cache routing via prompt_cache_key.

This is now addressed by extending cache option wiring for SDK-based OpenAI providers.
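A minimal sketch of the extended condition described above. All names here (ProviderInfo, resolvePromptCacheKey, the session-based key format) are illustrative assumptions, not OpenCode's actual internals; the point is that the check now matches either the built-in openai provider ID or any custom provider backed by the @ai-sdk/openai package:

```typescript
interface ProviderInfo {
  providerID: string;
  npm?: string;      // e.g. "@ai-sdk/openai" for SDK-based custom providers
  cacheKey?: string; // set manually via setCacheKey
}

function resolvePromptCacheKey(
  provider: ProviderInfo,
  sessionID: string,
): string | undefined {
  // An explicit key from setCacheKey always wins.
  if (provider.cacheKey) return provider.cacheKey;
  // Extended condition: not just providerID === "openai", but also any
  // custom provider that uses the @ai-sdk/openai package.
  const usesOpenAISDK =
    provider.providerID === "openai" || provider.npm === "@ai-sdk/openai";
  // Derive a stable per-session key so repeated turns route to the same cache.
  return usesOpenAISDK ? `session-${sessionID}` : undefined;
}
```

Under the old condition, a custom provider like `{ providerID: "my-proxy", npm: "@ai-sdk/openai" }` would fall through and get no prompt_cache_key; with the extended check it gets the same stable key as the built-in openai provider.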

Plugins

oh-my-opencode

OpenCode version

  • Reproduced on: v1.3.13 and local dev build
  • Verified fixed on: 0.0.0-feat/gpt-incremental-caching-202604031035

Steps to reproduce

  1. Configure a custom provider using npm: "@ai-sdk/openai" with a provider ID not equal to openai.
  2. Start a multi-turn session with a stable large prompt prefix.
  3. Inspect outgoing requests / logs.
  4. Before fix: previous_response_id may be present, but prompt_cache_key may be missing.
  5. After fix: both are present and cache behavior improves.
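For step 3, the inspection can be sketched as a small helper that reports which of the two caching fields a captured request body carries. The field names previous_response_id and prompt_cache_key are the actual OpenAI Responses API parameters; the helper itself is hypothetical:

```typescript
// Report which caching fields are present on a captured request body.
function auditCacheFields(body: Record<string, unknown>): {
  previousResponseId: boolean;
  promptCacheKey: boolean;
} {
  return {
    previousResponseId: typeof body["previous_response_id"] === "string",
    promptCacheKey: typeof body["prompt_cache_key"] === "string",
  };
}
```

Before the fix a second-turn request from a custom provider typically audits as `{ previousResponseId: true, promptCacheKey: false }`; after the fix both are true.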

Screenshot and/or share link

N/A

Operating System

Linux

Terminal

bash

Metadata

Labels

core: Anything pertaining to core functionality of the application (opencode server stuff)
