Added Ministral model support #1065
base: main
Conversation
Codecov Report: ✅ All modified and coverable lines are covered by tests.
deep1401
left a comment
Maybe let's also look into what we would need to do to add support for this in the LLM LoRA Trainer?
Sure
…ab-app into add/ministral_support
deep1401
left a comment
Tested this and it worked, probably because vLLM uses its own model implementations instead of going through the transformers backend directly. This same architecture is already supported on our fastchat_vision_server. If we end up merging this, we should also do a bit of work to support that? Currently it doesn't work because the transformers backend requires version 5.0.0.dev0, or it won't recognize the ministral3 config.
To install that, you need to do the following:

```
git clone https://github.com/huggingface/transformers.git
cd transformers
uv pip install '.[torch]'
```
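For anyone reproducing this, a minimal sanity check might look like the sketch below. It is an illustration under assumptions: the repo id is a hypothetical placeholder (not taken from this PR), and the version gate simply mirrors the 5.0.0.dev0 requirement mentioned above.

```python
# Hedged sketch: confirm the installed transformers build is new enough to
# resolve the new Ministral config. Substitute a real repo id for the
# placeholder below; it is not taken from this PR.
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("5.0.0.dev0"), (
    f"transformers {transformers.__version__} predates the ministral3 config"
)

from transformers import AutoConfig

model_id = "mistralai/<ministral-repo-id>"  # placeholder, not from this PR
# AutoConfig raises if the architecture is not registered in this build.
config = AutoConfig.from_pretrained(model_id)
print(type(config).__name__)
```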
Adding these three lines to setup.sh made the model run, but then it crashed for me, possibly because of the older compute capability on the RTX 3090:
```
ValueError: FP8 quantized models is only supported on GPUs with compute capability >= 8.9 (e.g 4090/H100), actual = `8.6`
```
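For what it's worth, a preflight check along these lines could fail fast before loading an FP8-quantized model; this is just a sketch based on the error above, not code from this PR.

```python
# Hedged sketch: check the GPU's compute capability before attempting an
# FP8-quantized model. Per the error above, FP8 needs >= 8.9 (e.g. 4090/H100);
# an RTX 3090 reports 8.6.
import torch

major, minor = torch.cuda.get_device_capability(0)
if (major, minor) < (8, 9):
    print(f"Compute capability {major}.{minor}: FP8 quantization not supported")
else:
    print(f"Compute capability {major}.{minor}: FP8 quantization supported")
```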
I will let @dadmobile decide whether we want to be on transformers 5.0.0 just for the fastchat vision server to support this?
WAT. 3090 is old and busted now?!? I don't want to live in this world.
Sorry... serious answer: since we aren't hearing an overwhelming wave of interest in this model, I think we just make a follow-up to test out Transformers 5 when we can, and make a plan for migrating each dependent plugin once we know the tradeoff.
Added support for running Ministral models on Transformer Lab. Tried running all the models, and all of them work on the vLLM server with this change.
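As a rough illustration of the kind of smoke test described above, a minimal vLLM check might look like this; the repo id is a placeholder and not taken from this PR.

```python
# Hedged sketch: generate a short completion with vLLM to smoke-test a
# Ministral model. Substitute a real repo id for the placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/<ministral-repo-id>")  # placeholder id
outputs = llm.generate(
    ["Say hello in one sentence."],
    SamplingParams(max_tokens=32, temperature=0.7),
)
print(outputs[0].outputs[0].text)
```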