Description
I am trying to use Nanocoder with a local Ollama instance on a MacBook Air M4. The Ollama server is running correctly and responds perfectly to manual curl requests, but Nanocoder consistently fails to connect, or triggers a 404 error on the Ollama server side.
It looks like Nanocoder is either not sending the request payload (model name) correctly or is constructing a malformed endpoint URL, despite correct configuration.
Environment
- Hardware: MacBook Air M4 (16 GB unified memory)
- OS: macOS 26.3
- Ollama model: `ministral-3:8b`
- Nanocoder configuration:
  - Provider: Ollama
  - Base URL: `http://127.0.0.1:11434/v1`
  - Model ID: `ministral-3:8b`
The Issue
When triggering a generation in Nanocoder, I get a generic "Connection failed: Unable to reach the model server" error after a timeout, or an immediate failure.
However, inspecting the Ollama server logs, I can see the request reaching the server but returning a 404, implying the model was not found, even though the model name matches exactly.
Logs & Evidence
1. Ollama Server Log (When Nanocoder makes a request)
The request reaches the server but results in a 404 error code.
```
[GIN] 2026/02/18 - 02:09:30 | 404 | 1.704125ms | 127.0.0.1 | POST "/v1/chat/completions"
```
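For comparison, this is the shape of the manual request that succeeds against the same endpoint (model name taken from the configuration above; the prompt text is just a placeholder):

```shell
# Manual request against Ollama's OpenAI-compatible endpoint.
# This works when run by hand; the same route returns 404 when
# the request comes from Nanocoder.
curl -s http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ministral-3:8b",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```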
Troubleshooting Attempted
I have tried the following steps to rule out cold-boot issues or configuration errors, but none resolved the issue in Nanocoder:
- Pre-loading the model: Ran `ollama run ministral-3:8b` in the terminal to ensure the model is loaded in RAM (preventing cold-start timeouts).
- Binding to 0.0.0.0: Ran `OLLAMA_HOST=0.0.0.0:11434 ollama serve` to rule out localhost binding issues.
- Creating model aliases: Created aliases using `ollama cp ministral-3:8b gpt-3.5-turbo` and `ollama cp ministral-3:8b gpt-4`, then updated the Nanocoder config to use those names.
- URL variations: Tried removing `/v1` from the Base URL (connection refused) and adding it back (404 error).
- API key: Added a dummy API key (`ollama`) to ensure the request isn't blocked by empty auth headers.
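One more check that may be worth adding to the list above: a 404 on `/v1/chat/completions` from Ollama typically means the `model` field in the body didn't match a registered model name, so comparing the exact names the server reports against the configured Model ID could expose a subtle mismatch (e.g. a tag suffix):

```shell
# List the model names Ollama actually has registered, via the CLI
# and both HTTP APIs, to compare character-for-character against
# the Model ID configured in Nanocoder.
ollama list
curl -s http://127.0.0.1:11434/api/tags
curl -s http://127.0.0.1:11434/v1/models
```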
Conclusion
Since curl works fine against the same endpoint with the same payload structure, the issue appears to be how Nanocoder is constructing the request body or the model parameter internally.
Could you please advise on how to debug exactly what payload Nanocoder is sending?
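A partial answer to my own question, in case it helps others: one way to see exactly what Nanocoder sends, without any proxy tooling, is to point it at a throwaway netcat listener and dump the raw HTTP request (port 11435 is an arbitrary spare port here):

```shell
# Terminal 1: listen on a spare port and print whatever arrives.
# (BSD netcat syntax, as shipped with macOS.)
nc -l 11435

# Then set Nanocoder's Base URL to http://127.0.0.1:11435/v1 and
# trigger a generation: the full request line, headers, and JSON
# body are printed verbatim, showing the exact path and model name
# Nanocoder is sending. The request will time out, since nc never
# replies, but the captured payload is what matters.
```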