Bug: gpt-5 defaults + @ai-sdk/openai-compatible path sends incompatible reasoning params #16154

@andyWang1688

Description

What happened

When using a custom provider with @ai-sdk/openai-compatible and a gpt-5* model (e.g. gpt-5-codex), OpenCode sends reasoning params that are incompatible with some OpenAI-compatible backends.

I consistently get:

  • Unsupported parameter: reasoningSummary
  • (or) Unsupported parameter: reasoningEffort

Why this looks like an OpenCode bug

From the source, the behavior is:

  1. OpenCode injects GPT-5 defaults in ProviderTransform.options:
    • reasoningEffort = "medium"
    • reasoningSummary = "auto"
  2. For @ai-sdk/openai-compatible, OpenCode uses sdk.languageModel(...).
  3. In the AI SDK's openai-compatible provider, languageModel maps to the chat model (/chat/completions) rather than to OpenAI Responses semantics.
  4. reasoningSummary is not a standard chat-completions field, so some backends receive it as an unsupported top-level param.

By contrast, the OpenAI provider works because OpenCode’s openai loader uses sdk.responses(...), and the OpenAI responses model maps these options into reasoning: { effort, summary }.
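To illustrate the difference between the two wire shapes (a sketch only — the helper names are hypothetical, not OpenCode’s actual code; the field names follow the payloads shown in “Additional validation” below):

```typescript
type ReasoningOptions = { reasoningEffort?: string; reasoningSummary?: string };

// Responses path: options are nested under a `reasoning` object,
// which is the shape /v1/responses accepts.
function toResponsesBody(model: string, input: string, opts: ReasoningOptions) {
  return {
    model,
    input,
    reasoning: { effort: opts.reasoningEffort, summary: opts.reasoningSummary },
  };
}

// Chat-completions path: the options end up as top-level camelCase params,
// which some OpenAI-compatible backends reject as unsupported.
function toChatBody(model: string, prompt: string, opts: ReasoningOptions) {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
    ...opts, // forwards reasoningEffort / reasoningSummary verbatim
  };
}
```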

Minimal repro

OpenCode version

1.2.17

Config (reduced)

{
  "provider": {
    "packycode-k1": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://codex-api.packycode.com/v1"
      },
      "models": {
        "k1-gpt-5-codex": {
          "id": "gpt-5-codex",
          "reasoning": false
        }
      }
    }
  }
}

Run

opencode --model packycode-k1/k1-gpt-5-codex --prompt "ping"

Actual

Request fails with unsupported parameter error (reasoningSummary / reasoningEffort).

Expected

OpenCode should not emit incompatible reasoning params for openai-compatible chat backends, or should map them correctly for responses-compatible backends.

Additional validation

Against the same backend:

  • POST /v1/responses with:

    { "model": "gpt-5-codex", "input": "ping", "reasoning": { "effort": "high", "summary": "auto" } }

    succeeds.

  • POST /v1/responses with top-level:

    { "reasoningEffort": "high", "reasoningSummary": "auto" }

    fails (Unsupported parameter).

So the backend supports reasoning controls, but not in OpenCode’s emitted shape for this path.

Suggested fix

At least one of:

  1. Do not auto-inject reasoningSummary/reasoningEffort for @ai-sdk/openai-compatible providers.
  2. Add provider capability/compatibility gating before injecting GPT-5 defaults.
  3. Support a config switch like wire_api = "responses" for openai-compatible providers and map reasoning to:
    "reasoning": { "effort": "...", "summary": "..." }
  4. If staying on chat path, map only to compatible chat fields (e.g. reasoning_effort) and drop unsupported ones.
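Option 4 could look roughly like this (a sketch under the assumptions above; mapChatReasoningParams is a hypothetical helper, not existing OpenCode code):

```typescript
// Hypothetical mapping for the chat-completions path: emit only fields that
// chat backends commonly accept (snake_case), and drop the rest.
function mapChatReasoningParams(opts: {
  reasoningEffort?: string;
  reasoningSummary?: string;
}): Record<string, string> {
  const out: Record<string, string> = {};
  if (opts.reasoningEffort !== undefined) {
    // reasoning_effort is the chat-compatible spelling of this option.
    out.reasoning_effort = opts.reasoningEffort;
  }
  // reasoningSummary has no chat-completions equivalent, so it is dropped
  // instead of being forwarded as an unsupported top-level param.
  return out;
}
```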

Metadata

Labels

core — Anything pertaining to core functionality of the application (opencode server stuff)
