10 changes: 10 additions & 0 deletions packages/opencode/src/provider/provider.ts
@@ -410,6 +410,16 @@ function custom(dep: CustomDep): Record<string, CustomLoader> {
},
},
}),
nvidia: () =>
Effect.succeed({
autoload: false,
options: {
headers: {
"HTTP-Referer": "https://opencode.ai/",
"X-Title": "opencode",
},
},
}),
vercel: () =>
Effect.succeed({
autoload: false,
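The new `nvidia` entry resolves to a static loader result. A minimal sketch of the shape it produces, in plain TypeScript without the `Effect` wrapper used in `provider.ts` (the type name here is assumed for illustration):

```typescript
// Hypothetical sketch: the plain object the nvidia loader resolves to,
// without the Effect wrapper used in provider.ts.
type CustomLoaderResult = {
  autoload: boolean
  options: { headers: Record<string, string> }
}

const nvidia = (): CustomLoaderResult => ({
  autoload: false, // not auto-enabled; the user connects explicitly
  options: {
    headers: {
      "HTTP-Referer": "https://opencode.ai/",
      "X-Title": "opencode",
    },
  },
})

console.log(nvidia().options.headers["X-Title"]) // prints "opencode"
```

`autoload: false` matches the neighboring `vercel` entry, so the provider only activates once a key is supplied.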
61 changes: 44 additions & 17 deletions packages/web/src/content/docs/providers.mdx
@@ -1316,30 +1316,57 @@ To use Kimi K2 from Moonshot AI:

---

### NVIDIA

NVIDIA provides free access to Nemotron and many other open models through [build.nvidia.com](https://build.nvidia.com).

1. Head over to [build.nvidia.com](https://build.nvidia.com), create an account, and generate an API key.

2. Run the `/connect` command and search for **NVIDIA**.

```txt
/connect
```

3. Enter your NVIDIA API key.

```txt
┌ API key
└ enter
```

4. Run the `/models` command to select a model like _nemotron-3-super-120b-a12b_.

```txt
/models
```
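Once connected, requests go to an OpenAI-compatible chat completions endpoint. A hedged sketch that only constructs the request, where the base URL and model id are assumptions for illustration, not taken from this PR:

```typescript
// Sketch: build an OpenAI-style chat request for NVIDIA's hosted API.
// The base URL and model id below are assumptions for illustration.
const baseURL = "https://integrate.api.nvidia.com/v1"

function buildChatRequest(apiKey: string, model: string, prompt: string) {
  return {
    url: `${baseURL}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  }
}

const req = buildChatRequest("nvapi-example", "nemotron-3-super-120b-a12b", "hello")
// send with: await fetch(req.url, req.init)
```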

#### On-Prem / NIM

You can also use NVIDIA models locally via [NVIDIA NIM](https://docs.nvidia.com/nim/) by setting a custom base URL.

```json title="opencode.json" {6}
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"nvidia": {
"options": {
"baseURL": "http://localhost:8000/v1"
}
}
}
}
```
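With the override in place, requests target the local NIM server instead of the hosted endpoint. A minimal sketch of the resolution logic, where the hosted default URL is an assumption for illustration:

```typescript
// Sketch: pick the request base URL, preferring a configured override.
// The hosted default below is an assumption, not taken from this PR.
const DEFAULT_BASE_URL = "https://integrate.api.nvidia.com/v1"

function resolveBaseURL(configured?: string): string {
  return configured ?? DEFAULT_BASE_URL
}

resolveBaseURL("http://localhost:8000/v1") // → "http://localhost:8000/v1"
resolveBaseURL() // → the hosted default
```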

#### Environment Variable

Alternatively, set your API key as an environment variable.

```bash frame="none"
export NVIDIA_API_KEY=nvapi-your-key-here
```
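Providers typically fall back to an environment variable when no key is configured through `/connect`. A sketch of that lookup, hedged because the exact resolution order in opencode may differ:

```typescript
// Sketch: resolve the NVIDIA API key from the environment.
// Illustrative only; opencode's actual lookup logic may differ.
function resolveApiKey(env: Record<string, string | undefined>): string | undefined {
  return env["NVIDIA_API_KEY"]
}

const key = resolveApiKey({ NVIDIA_API_KEY: "nvapi-your-key-here" })
// key → "nvapi-your-key-here"; undefined when the variable is unset
```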

---
