fix(llm): use generic model fields in frontend instead of OllamaModels #1093
Merged
Alanxtl merged 1 commit on May 10, 2026
Conversation
Contributor
Author
The CI lint failure is …
Contributor
Merge to main to fix the CI failure.
The frontend index page indexed cfg.OllamaModels[0] directly, which panics with "index out of range [0] with length 0" when LLM_PROVIDER is not ollama (config.go only populates OllamaModels for the ollama provider). The CLI already uses cfg.LLMModelsList / cfg.ModelName, so align the frontend with the same generic fields. Also mark the configured MODEL_NAME as the selected option in the dropdown so the initial display matches the user's .env choice.

Closes apache#1072
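A minimal sketch of the change, assuming an html/template-based handler (the handler, the template-data struct, and the file names are illustrative; cfg.OllamaModels, cfg.LLMModelsList, and cfg.ModelName are the config fields named above):

```go
package main

import (
	"html/template"
	"log"
	"net/http"
)

// Config mirrors only the fields discussed in this PR; the real
// struct in config.go has more.
type Config struct {
	OllamaModels  []string // populated only when LLM_PROVIDER=ollama
	LLMModelsList []string // populated for every provider
	ModelName     string   // MODEL_NAME from .env
}

// pageData is an illustrative template-data shape for index.html.
type pageData struct {
	Models       []string
	DefaultModel string
}

var indexTmpl = template.Must(template.ParseFiles("index.html"))

func indexHandler(cfg *Config) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Before the fix: cfg.OllamaModels[0], which panics with
		// "index out of range [0] with length 0" for non-ollama providers.
		// After the fix: the generic fields the CLI already uses.
		data := pageData{
			Models:       cfg.LLMModelsList,
			DefaultModel: cfg.ModelName,
		}
		if err := indexTmpl.Execute(w, data); err != nil {
			log.Printf("render index: %v", err)
		}
	}
}
```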
981c60a to 1bfe9b3
Fix: non-Ollama providers cause a panic on the frontend index page

Problem

When LLM_PROVIDER is not ollama (for example, when using openai or gemini through the OpenAI-compatible API), opening the frontend index page triggers a panic; the stack trace points to llm/go-client/frontend/main.go.

Root cause
config.go populates OllamaModels only when the LLM provider is ollama, so the slice stays empty for every other provider and indexing [0] panics (sketched below). The CLI already uses the generic cfg.LLMModelsList and cfg.ModelName; the frontend was never updated to match.
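For context, a simplified sketch of that population behavior (the function shape is an assumption; only the field names and the provider condition come from config.go):

```go
// llmConfig is a pared-down stand-in for the real config struct.
type llmConfig struct {
	OllamaModels  []string // provider-specific: set only for ollama
	LLMModelsList []string // generic: set for every provider
	ModelName     string   // generic: set for every provider
}

// loadLLMConfig is hypothetical; it only illustrates why OllamaModels
// has length 0 whenever LLM_PROVIDER is not "ollama".
func loadLLMConfig(provider, modelName string, models []string) llmConfig {
	cfg := llmConfig{
		LLMModelsList: models,
		ModelName:     modelName,
	}
	if provider == "ollama" {
		cfg.OllamaModels = models
	}
	return cfg
}
```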
Fix

- main.go: pass cfg.LLMModelsList to the template as Models and cfg.ModelName as DefaultModel
- index.html: iterate over .Models and mark the configured MODEL_NAME as selected, so the initial dropdown matches the user's .env choice (see the template sketch below)
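In index.html, the dropdown change might look like this (Go html/template; the surrounding markup follows the illustrative pageData sketch above):

```html
<!-- Illustrative excerpt: render every model and pre-select the one
     configured via MODEL_NAME (passed in as .DefaultModel). -->
<select name="model">
  {{range .Models}}
  <option value="{{.}}"{{if eq . $.DefaultModel}} selected{{end}}>{{.}}</option>
  {{end}}
</select>
```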
Verification

Tested locally with the reproduction config from #1072:
- GET / returns HTTP 200 with no panic (a sketch of an automated check follows below)
- The rendered page contains <option value="gemini-2.5-flash" selected>gemini-2.5-flash</option>
- The Ollama path is unaffected: LLMModelsList and ModelName are populated in config.go for every provider, so when the provider is ollama the rendered output is identical to what OllamaModels produced before. No regression.

Closes #1072
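A hedged sketch of how that manual check could become a regression test, reusing the illustrative Config and indexHandler from the sketch above (none of this is the repository's actual test code):

```go
package main

import (
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

func TestIndexPageNonOllamaProvider(t *testing.T) {
	// OllamaModels is deliberately left empty, as config.go leaves it
	// for non-ollama providers; only the generic fields are set.
	cfg := &Config{
		LLMModelsList: []string{"gemini-2.5-flash"},
		ModelName:     "gemini-2.5-flash",
	}

	rec := httptest.NewRecorder()
	req := httptest.NewRequest(http.MethodGet, "/", nil)
	indexHandler(cfg)(rec, req)

	if rec.Code != http.StatusOK {
		t.Fatalf("GET / returned %d, want 200", rec.Code)
	}
	// The configured model should be pre-selected in the dropdown.
	want := `<option value="gemini-2.5-flash" selected>`
	if !strings.Contains(rec.Body.String(), want) {
		t.Errorf("rendered page missing %q", want)
	}
}
```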