fix(chat): show actionable error messages when LLM is unreachable #222
Closed
Description
Overview
When Ollama isn't running or misconfigured, pressing @ and trying to chat shows raw network errors like "connection refused" or "context deadline exceeded". These tell the user nothing about what to DO.
Principle
Every error message should include an actionable next step. Not just "X failed" but "X failed — try Y".
Common cases to handle
- Ollama not running: "Can't reach Ollama at http://localhost:11434 — start it with `ollama serve`"
- Model not found: "Model 'qwen3' not available — run `ollama pull qwen3` or try `/model` to pick another"
- Timeout: "Ollama is taking too long to respond — it may be loading the model, try again in a moment"
- Wrong URL: "Can't connect to http://wrong:1234 — check `base_url` in your config file at [path]"
Where to fix
- `internal/app/chat.go` — error display in the chat overlay
- `internal/llm/client.go` — error wrapping at the client level