
Conversation

@ysimonson
Contributor

llama.cpp recently added the qwen3vlmoe model architecture, so this update prevents the following error:

2025-11-20T14:07:26.318944Z ERROR load_from_file: llama-cpp-2: error loading model: error loading model architecture: unknown model architecture: 'qwen3vlmoe' module="llama.cpp::llama_model_load"

@MarcusDunn
Contributor

thanks for the PR. will merge and cut release if tests give the all-clear

@MarcusDunn
Contributor

looks like you'll need to update the Cargo.toml to include some new files:

https://github.com/utilityai/llama-cpp-rs/actions/runs/19540618299/job/55962123635?pr=866

@ysimonson
Contributor Author

Hopefully I've added all the missing files, though is there a way to check without CI? Just running `cargo test` doesn't seem to catch it.

@MarcusDunn
Contributor

cargo publish --dry-run should do it.

@ysimonson
Contributor Author

Nice, thanks for the pointer. It works on my end, so hopefully it will pass CI.

@MarcusDunn MarcusDunn merged commit 927bec2 into utilityai:main Nov 20, 2025
3 of 5 checks passed
@MarcusDunn
Contributor

thanks for the PR! there's a new release with the new llama.cpp
