Conversation

@ParamThakkar123
Contributor

Added support for running ministral models on transformerlabs. Tried running all the models, and all of them work on the vLLM server with this change.
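For context, a minimal sketch of the kind of vLLM smoke test described above, using vLLM's offline Python API rather than the server; the model id is an illustrative assumption, not taken from this PR:

```python
# Hypothetical smoke test: load a Ministral checkpoint with vLLM and
# generate a short completion. The model id below is an assumption.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Ministral-8B-Instruct-2410")
params = SamplingParams(max_tokens=64)
outputs = llm.generate(["Say hello in one sentence."], params)
print(outputs[0].outputs[0].text)
```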

@codecov-commenter

Codecov Report

✅ All modified and coverable lines are covered by tests.


Member

@deep1401 left a comment


Maybe let's also look into what we need to do to add support for the LLM LoRA Trainer?
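For reference, one plausible starting point using Hugging Face PEFT; everything here (the model id, target module names, and hyperparameters) is an assumption for illustration, not a tested Ministral recipe:

```python
# Sketch of wrapping the model with LoRA adapters via PEFT. Model id and
# target_modules are assumptions; Ministral-specific values may differ.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Ministral-8B-Instruct-2410")
lora_config = LoraConfig(
    r=8,                                   # low-rank dimension
    lora_alpha=16,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # typical attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only adapters train
```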

@ParamThakkar123
Contributor Author

Sure

Member

@deep1401 left a comment


Tested this, and it worked, probably because vLLM uses its own model implementations instead of going through the transformers backend directly. This same architecture is already supported on our fastchat_vision_server. If we end up merging this, we should also do a bit of work to support that. Currently it doesn't work because the transformers backend requires version 5.0.0.dev0, or it won't recognize the ministral3 config.
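To illustrate the failure mode, a hedged sketch of the config lookup that breaks on older transformers builds; the model id is an assumption:

```python
# Illustrative check: older transformers releases raise when the
# checkpoint's model_type is not in their config registry, which is
# the "won't recognize the ministral3 config" failure described above.
import transformers
from transformers import AutoConfig

print(transformers.__version__)  # per the comment above, 5.0.0.dev0 is required

try:
    AutoConfig.from_pretrained("mistralai/Ministral-8B-Instruct-2410")  # assumed id
except (KeyError, ValueError) as err:
    print(f"Config not recognized by this transformers build: {err}")
```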

To install that, you need to do the following:

git clone https://github.com/huggingface/transformers.git
cd transformers
uv pip install '.[torch]'

Adding these 3 lines to setup.sh made the model run, but then it just crashed for me, presumably because of the older compute capability on the RTX 3090:

ValueError: FP8 quantized models is only supported on GPUs with compute capability >= 8.9 (e.g 4090/H100), actual = `8.6`
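For anyone hitting the same wall, a quick diagnostic sketch (not part of the PR) that reads the CUDA compute capability vLLM is rejecting here:

```python
# Print the GPU's CUDA compute capability; vLLM's FP8 path requires
# >= 8.9 (e.g. 4090/H100), and the RTX 3090 (Ampere) reports 8.6.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: {major}.{minor}")
if (major, minor) < (8, 9):
    print("FP8 quantized checkpoints will be rejected on this GPU")
```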

I will let @dadmobile decide whether we want to be on the transformers 5.0.0 version just for the fastchat vision server to support this.

@dadmobile
Member

but then it just crashed for me, presumably because of the older compute capability on the RTX 3090:

ValueError: FP8 quantized models is only supported on GPUs with compute capability >= 8.9 (e.g 4090/H100), actual = `8.6`

WAT. 3090 is old and busted now?!? I don't want to live in this world.

@dadmobile
Member

I will let @dadmobile decide whether we want to be on the transformers 5.0.0 version just for the fastchat vision server to support this.

Sorry... serious answer: since we aren't hearing an overwhelming wave of interest in this model, I think we just file a follow-up to test out Transformers 5 when we can, and make a migration plan for each dependent plugin once we know the tradeoffs.
