fix(chat): show actionable error messages when LLM is unreachable #222

@cpcloud

Description

Overview

When Ollama isn't running or is misconfigured, pressing @ and trying to chat surfaces raw network errors like "connection refused" or "context deadline exceeded". These errors tell the user nothing about what to DO.

Principle

Every error message should include an actionable next step. Not just "X failed" but "X failed — try Y".

Common cases to handle

  • Ollama not running: "Can't reach Ollama at http://localhost:11434 — start it with ollama serve"
  • Model not found: "Model 'qwen3' not available — run ollama pull qwen3 or try /model to pick another"
  • Timeout: "Ollama is taking too long to respond — it may be loading the model, try again in a moment"
  • Wrong URL: "Can't connect to http://wrong:1234 — check base_url in your config file at [path]"

Where to fix

  • internal/app/chat.go — error display in chat overlay
  • internal/llm/client.go — error wrapping at the client level

Metadata

Labels

bug (Something isn't working), llm (LLM and chat features)
