
fix (core): Enable prompt caching for openai-compatible by default un…#22569

Closed

shantur wants to merge 1 commit into anomalyco:dev from shantur:fix-openai-compatible-caching


Conversation

shantur (Contributor) commented Apr 15, 2026

Enable prompt caching for openai-compatible by default unless disabled.


If this causes issues for you, please disable it by adding the following to your opencode.json:

```json
{
  "provider": {
    "my-provider": {
      "options": {
        "setCacheKey": false
      }
    }
  }
}
```
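The "on by default unless explicitly disabled" behavior described above can be sketched as follows. This is a hypothetical illustration, not the PR's actual code: the option name `setCacheKey` comes from the config example, but `ProviderOptions` and `shouldSetCacheKey` are assumed names for the sketch.

```typescript
// Hypothetical sketch of default-on prompt caching with an explicit opt-out.
// Only `setCacheKey` is taken from the PR; the rest is illustrative.
type ProviderOptions = {
  setCacheKey?: boolean;
};

// Caching is enabled unless the user explicitly sets `setCacheKey: false`.
// An absent option (undefined) therefore means "enabled".
function shouldSetCacheKey(options: ProviderOptions = {}): boolean {
  return options.setCacheKey !== false;
}

console.log(shouldSetCacheKey({}));                     // true  (default on)
console.log(shouldSetCacheKey({ setCacheKey: false })); // false (opted out)
console.log(shouldSetCacheKey({ setCacheKey: true }));  // true
```

The key design point is comparing against `false` rather than checking truthiness, so that an unset option defaults to enabled while an explicit `false` in opencode.json turns caching off.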

github-actions bot commented Apr 15, 2026

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add `Fixes #<number>` or `Closes #<number>` to this PR description

See CONTRIBUTING.md for details.

github-actions bot added the needs:compliance label (auto-close after 2 hours) Apr 15, 2026
github-actions bot commented Apr 15, 2026

This PR doesn't fully meet our contributing guidelines and PR template.

What needs to be fixed:

  • PR description is missing required template sections. Please use the PR template.

Please edit this PR description to address the above within 2 hours, or it will be automatically closed.

If you believe this was flagged incorrectly, please let a maintainer know.

github-actions bot commented Apr 15, 2026

The following comment was made by an LLM; it may be inaccurate:

Based on my search, I found potential related PRs, but no direct duplicates:

Related PRs (not duplicates):

  1. PR #5422 - "feat(provider): add provider-specific cache configuration system (significant token usage reduction)" - Related, as it introduces the cache configuration system that your PR likely builds upon.

  2. PR #14743 - "fix(cache): improve Anthropic prompt cache hit rate with system split and tool stability" - Related to cache optimization, but for Anthropic specifically.

  3. PR #20848 - "fix(opencode): wire OpenAI previous_response_id session caching" - Related to caching for OpenAI providers.

  4. PR #22149 - "fix: reuse OpenAI Responses previous_response_id across turns" - Another OpenAI caching-related PR.

These PRs address caching for various providers but don't appear to be duplicates of your current PR #22569, which specifically enables prompt caching for openai-compatible providers by default.

shantur force-pushed the fix-openai-compatible-caching branch from a8e31bf to 42bd981 on April 15, 2026 05:34
rekram1-node removed the needs:issue and needs:compliance labels Apr 15, 2026
rekram1-node (Collaborator) commented Apr 15, 2026

vouch

opencode-agent bot pushed a commit that referenced this pull request Apr 15, 2026
github-actions bot added the needs:issue and needs:compliance labels Apr 15, 2026
github-actions bot commented Apr 15, 2026

This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window.

Feel free to open a new pull request that follows our guidelines.

github-actions bot removed the needs:compliance label Apr 15, 2026
@github-actions github-actions bot closed this Apr 15, 2026
jwcrystal pushed a commit to jwcrystal/opencode that referenced this pull request Apr 15, 2026
shantur (Contributor, Author) commented Apr 15, 2026

@rekram1-node - Seems like I wasn't quick enough :)
