
[Bug] Connection failed / 404 (Mac M4) #366

@elajep

Description

I am trying to use Nanocoder with a local Ollama instance on a MacBook Air M4. While the Ollama server is running correctly and responds perfectly to manual curl requests, Nanocoder consistently fails to connect or triggers a 404 error on the Ollama server side.

It appears that Nanocoder is either not sending the request payload (model name) correctly or is malforming the endpoint URL, despite a correct configuration.
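For reference, the manual request that succeeds is equivalent to the following sketch (endpoint and model ID taken from the configuration below; the prompt content is illustrative):

```python
import json
from urllib import request

# Base URL and model ID copied from the Nanocoder configuration;
# the message content is illustrative.
BASE_URL = "http://127.0.0.1:11434/v1"
payload = {
    "model": "ministral-3:8b",
    "messages": [{"role": "user", "content": "Say hello."}],
}

req = request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req) returns a chat.completion object when the model
# tag exists; a wrong tag produces the 404 shown in the logs below.
print(json.dumps(payload))
```

Whatever Nanocoder sends should be byte-for-byte comparable to this body and URL.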

Environment

  • Hardware: MacBook Air M4 (16GB Unified Memory)
  • OS: macOS 26.3
  • Ollama Model: ministral-3:8b
  • Nanocoder Configuration:
    • Provider: Ollama
    • Base URL: http://127.0.0.1:11434/v1
    • Model ID: ministral-3:8b

The Issue

When triggering a generation in Nanocoder, I get a generic "Connection failed: Unable to reach the model server" error, either after a timeout or immediately.

However, the Ollama server logs show the request reaching the server and returning a 404, implying the model was not found, even though the model name matches exactly.

Logs & Evidence

1. Ollama Server Log (When Nanocoder makes a request)

The request reaches the server but results in a 404 error code.

[GIN] 2026/02/18 - 02:09:30 | 404 | 1.704125ms | 127.0.0.1 | POST "/v1/chat/completions"
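A 404 from Ollama's `/v1/chat/completions` usually means the `model` value in the request did not exactly match an installed tag. One way to rule this out is to compare the configured ID against what the server reports at `GET /v1/models` (or `ollama list`). A sketch, with illustrative sample tags showing a typical near-miss:

```python
def find_model(configured_id: str, model_ids: list[str]) -> str:
    """Diagnose the configured model ID against the IDs the server
    actually reports."""
    if configured_id in model_ids:
        return "exact match"
    # Flag near-misses such as a missing quantization suffix or tag.
    near = [m for m in model_ids if configured_id in m or m in configured_id]
    if near:
        return f"no exact match; closest tags: {near}"
    return "not found"

# Against a live server:
#   import json, urllib.request
#   ids = [m["id"] for m in json.load(
#       urllib.request.urlopen("http://127.0.0.1:11434/v1/models"))["data"]]
# Sample data (illustrative) showing a near-miss:
ids = ["ministral-3:8b-instruct-2410-q4_K_M", "llama3.2:latest"]
print(find_model("ministral-3:8b", ids))
```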

Troubleshooting Attempted

I have tried the following steps to rule out cold-boot issues or configuration errors, but none resolved the issue in Nanocoder:

  1. Pre-loading the model: Ran ollama run ministral-3:8b in the terminal to ensure the model is loaded in RAM (preventing timeouts).
  2. Binding to 0.0.0.0: Ran OLLAMA_HOST=0.0.0.0:11434 ollama serve to rule out localhost binding issues.
  3. Creating Model Aliases: Created aliases using ollama cp ministral-3:8b gpt-3.5-turbo and ollama cp ministral-3:8b gpt-4, then updated the Nanocoder config to use these names.
  4. URL Variations: Tried removing /v1 from the Base URL (connection refused) and adding it back (404 error).
  5. API Key: Added a dummy API key ("ollama") to ensure the request isn't blocked by empty auth headers.

Conclusion

Since curl works fine against the same endpoint with the same payload structure, the issue appears to lie in how Nanocoder constructs the request body or the model parameter internally.
Could you please advise on how to debug exactly what payload Nanocoder is sending?
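One way to capture the payload without touching Nanocoder's internals is a tiny dump server: point Nanocoder's Base URL at it instead of Ollama and it prints every request verbatim. This is a sketch, not part of Nanocoder; the port and the canned reply shape are assumptions.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class DumpHandler(BaseHTTPRequestHandler):
    """Prints every request Nanocoder sends, then returns a minimal
    chat-completions-shaped reply so the client does not error out."""
    captured = []  # (path, body) pairs kept for later inspection

    def do_POST(self):
        length = int(self.headers.get("Content-Length") or 0)
        body = self.rfile.read(length).decode("utf-8", "replace")
        DumpHandler.captured.append((self.path, body))
        print(f"POST {self.path}\n{body}")
        reply = json.dumps({
            "object": "chat.completion",
            "choices": [{"index": 0,
                         "message": {"role": "assistant", "content": "ok"},
                         "finish_reason": "stop"}],
        }).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # silence the default access log
        pass

def main(port: int = 11435) -> None:
    # Set Nanocoder's Base URL to http://127.0.0.1:11435/v1, trigger a
    # generation, and read the exact path and payload off stdout.
    # Port 11435 is arbitrary.
    HTTPServer(("127.0.0.1", port), DumpHandler).serve_forever()
```

Comparing the dumped path and body against the working curl request should show immediately whether the model name or the URL is being mangled.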
