
feat(cli): add openai provider support for OpenAI-compatible endpoints #11941

Open · Thump604 wants to merge 1 commit into RooCodeInc:main from Thump604:feat/cli-openai-provider

Conversation

Thump604 commented Mar 16, 2026

Summary

Adds the openai provider to the CLI's supported providers list, enabling the CLI to connect to any OpenAI-compatible inference server, including locally deployed LLMs (vLLM, llama.cpp, Ollama, LM Studio, etc.).

  • Add "openai" to supportedProviders in types.ts
  • Add openai: "OPENAI_API_KEY" to envVarMap in provider.ts
  • Add "openai" case in getProviderSettings() with OPENAI_BASE_URL env var support
  • Update README environment variable table
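The bullet points above can be sketched as follows. This is an illustrative sketch, not the actual CLI source: the real `supportedProviders`, `envVarMap`, and `getProviderSettings` live in the CLI's types.ts and provider.ts, and the other provider names and the return shape here are assumptions for demonstration.

```typescript
// Hypothetical sketch of the change: expose "openai" in the provider
// whitelist, map it to its API-key env var, and honor OPENAI_BASE_URL.
const supportedProviders = ["anthropic", "openrouter", "openai"] as const;
type SupportedProvider = (typeof supportedProviders)[number];

// Which environment variable supplies the API key for each provider.
const envVarMap: Record<SupportedProvider, string> = {
  anthropic: "ANTHROPIC_API_KEY",
  openrouter: "OPENROUTER_API_KEY",
  openai: "OPENAI_API_KEY",
};

function getProviderSettings(provider: SupportedProvider, apiKey: string) {
  switch (provider) {
    case "openai":
      return {
        apiProvider: "openai",
        openAiApiKey: apiKey,
        // Custom endpoints (vLLM, llama.cpp, Ollama, ...) come in via
        // OPENAI_BASE_URL; fall back to the hosted OpenAI endpoint.
        openAiBaseUrl:
          process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1",
      };
    default:
      return { apiProvider: provider, apiKey };
  }
}
```

With `OPENAI_BASE_URL` set, the returned settings point at the local server; unset, they fall back to the default base URL.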

The openai provider already exists in @roo-code/types as a customProvider with full schema support (openAiBaseUrl, openAiApiKey, openAiModelId), and the core extension already handles it via OpenAiHandler. The CLI simply needed to expose it in its provider whitelist.

Usage

export OPENAI_API_KEY=sk-local
export OPENAI_BASE_URL=http://localhost:8080/v1
roo --provider openai --model my-model "Hello"

Or via settings file (~/.roo/cli-settings.json):

{
  "provider": "openai",
  "model": "my-model"
}

Difference from openai-native

|  | openai | openai-native |
| --- | --- | --- |
| API | Chat Completions (`/v1/chat/completions`) | Responses API |
| Custom base URL | Yes, via `OPENAI_BASE_URL` | Limited |
| Local LLMs | Yes (vLLM, llama.cpp, Ollama, etc.) | No |
| Settings fields | `openAiBaseUrl`, `openAiApiKey`, `openAiModelId` | `openAiNativeApiKey` |
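Because the `openai` provider speaks the standard Chat Completions wire format, any server exposing `/v1/chat/completions` works. As a minimal sketch (not RooCode source — the helper name and request shape here are illustrative), building such a request against a configurable base URL looks like:

```typescript
// Build a Chat Completions request for an OpenAI-compatible endpoint.
// The base URL is whatever OPENAI_BASE_URL points at (e.g. a local vLLM
// or llama.cpp server); the path is always chat/completions under it.
function chatCompletionsRequest(
  baseUrl: string,
  model: string,
  prompt: string,
) {
  // Ensure a trailing slash so relative URL resolution appends the path.
  const base = baseUrl.endsWith("/") ? baseUrl : baseUrl + "/";
  return {
    url: new URL("chat/completions", base).toString(),
    body: {
      model,
      messages: [{ role: "user" as const, content: prompt }],
    },
  };
}
```

For example, `chatCompletionsRequest("http://localhost:8080/v1", "my-model", "Hello")` targets `http://localhost:8080/v1/chat/completions`, which is exactly what the usage example above exercises.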

Test plan

  • Verified locally with vLLM-MLX serving Qwen 3.5 122B on http://127.0.0.1:8080/v1
  • Confirmed --provider openai --model <model> -k <key> resolves correctly
  • Confirmed OPENAI_API_KEY and OPENAI_BASE_URL env vars are read
  • Type-checked: "openai" satisfies ProviderName (already in customProviders)

Closes #11917


Add the `openai` provider to the CLI's supported providers list,
enabling use of OpenAI-compatible API endpoints (e.g., locally deployed
LLMs via vLLM, llama.cpp, Ollama, LM Studio, etc.).

Changes:
- Add "openai" to supportedProviders in types.ts
- Add "openai" env var mapping (OPENAI_API_KEY) in provider.ts
- Add "openai" case in getProviderSettings() with support for
  OPENAI_BASE_URL environment variable for custom endpoints
- Update README environment variable table

The openai provider uses the existing @roo-code/types openAi schema
(openAiBaseUrl, openAiApiKey, openAiModelId) which is already fully
supported in the core extension via OpenAiHandler.

Usage:
  export OPENAI_API_KEY=sk-local
  export OPENAI_BASE_URL=http://localhost:8080/v1
  roo --provider openai --model my-model "Hello"

Closes RooCodeInc#11917
@dosubot dosubot bot added size:S This PR changes 10-29 lines, ignoring generated files. Enhancement New feature or request labels Mar 16, 2026