What happened
When using a custom provider with @ai-sdk/openai-compatible and a gpt-5* model (e.g. gpt-5-codex), OpenCode sends reasoning params that are incompatible with some OpenAI-compatible backends.
I consistently get:
Unsupported parameter: reasoningSummary
or
Unsupported parameter: reasoningEffort
Why this looks like an OpenCode bug
From source behavior:
- OpenCode injects GPT-5 defaults in ProviderTransform.options: reasoningEffort = "medium" and reasoningSummary = "auto".
- For @ai-sdk/openai-compatible, OpenCode uses sdk.languageModel(...).
- In the AI SDK's openai-compatible provider, languageModel maps to the chat model (/chat/completions) rather than OpenAI Responses semantics. reasoningSummary is not a standard chat-completions field, so it can be forwarded as an unsupported top-level param to some backends.

By contrast, the OpenAI provider works because OpenCode's openai loader uses sdk.responses(...), and the OpenAI Responses model maps these options to reasoning: { effort, summary }.
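The mismatch can be sketched as follows. This is illustrative, not OpenCode's actual code: the helper names and shapes are assumptions that show how the flat reasoningEffort/reasoningSummary options translate cleanly into a Responses API body but have no standard home on the chat-completions wire.

```typescript
// Hypothetical sketch of the two wire shapes; function names are
// illustrative and not OpenCode internals.
type ReasoningOptions = { reasoningEffort?: string; reasoningSummary?: string };

// Responses API path: flat options map into a nested `reasoning` object.
function toResponsesBody(model: string, input: string, opts: ReasoningOptions) {
  const body: Record<string, unknown> = { model, input };
  if (opts.reasoningEffort || opts.reasoningSummary) {
    body.reasoning = {
      ...(opts.reasoningEffort ? { effort: opts.reasoningEffort } : {}),
      ...(opts.reasoningSummary ? { summary: opts.reasoningSummary } : {}),
    };
  }
  return body;
}

// Chat-completions path: there is no standard field for a reasoning
// summary, so forwarding the flat keys as-is is exactly what some
// backends reject as unsupported parameters.
function toChatBody(model: string, content: string, opts: ReasoningOptions) {
  return {
    model,
    messages: [{ role: "user", content }],
    ...opts, // reasoningEffort / reasoningSummary land at top level
  };
}
```

With the GPT-5 defaults, toResponsesBody produces the nested reasoning: { effort, summary } shape the backend accepts, while toChatBody leaves both keys at top level.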
Minimal repro
OpenCode version
1.2.17
Config (reduced)
{
  "provider": {
    "packycode-k1": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://codex-api.packycode.com/v1"
      },
      "models": {
        "k1-gpt-5-codex": {
          "id": "gpt-5-codex",
          "reasoning": false
        }
      }
    }
  }
}
Run
opencode --model packycode-k1/k1-gpt-5-codex --prompt "ping"
Actual
Request fails with unsupported parameter error (reasoningSummary / reasoningEffort).
Expected
OpenCode should not emit incompatible reasoning params for openai-compatible chat backends, or should map them correctly for responses-compatible backends.
Additional validation
Against the same backend:
- POST /v1/responses with { "model": "gpt-5-codex", "input": "ping", "reasoning": { "effort": "high", "summary": "auto" } } succeeds.
- POST /v1/responses with top-level { "reasoningEffort": "high", "reasoningSummary": "auto" } fails (Unsupported parameter).
So the backend supports reasoning controls, just not in the shape OpenCode emits on this path.
Suggested fix
At least one of:
- Do not auto-inject reasoningSummary/reasoningEffort for @ai-sdk/openai-compatible by default.
- Add provider capability/compatibility gating before injecting GPT-5 defaults.
- Support a config switch like wire_api = "responses" for openai-compatible providers and map reasoning to "reasoning": { "effort": "...", "summary": "..." }.
- If staying on the chat path, map only to compatible chat fields (e.g. reasoning_effort) and drop unsupported ones.
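A minimal sketch of the gating idea, under stated assumptions: the function name and option keys below are hypothetical (they mirror this report, not OpenCode's actual ProviderTransform code), and the chat-path mapping to reasoning_effort is one of the suggested options, not confirmed behavior.

```typescript
// Illustrative capability gating before injecting GPT-5 reasoning
// defaults. `providerNpm` and the key names are assumptions for the
// sketch, not OpenCode internals.
type Options = Record<string, unknown>;

function applyReasoningDefaults(providerNpm: string, options: Options): Options {
  const out: Options = { ...options };
  if (providerNpm === "@ai-sdk/openai-compatible") {
    // Chat-completions path: map effort to the chat-compatible
    // `reasoning_effort` key and drop the summary, which has no
    // standard chat field.
    if (out.reasoningEffort) {
      out.reasoning_effort = out.reasoningEffort;
    }
    delete out.reasoningEffort;
    delete out.reasoningSummary;
  } else {
    // Responses-capable providers keep the GPT-5 defaults.
    out.reasoningEffort ??= "medium";
    out.reasoningSummary ??= "auto";
  }
  return out;
}
```

The key point is that the defaults are only injected (or translated) once the provider's wire format is known, rather than unconditionally.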