Hi everyone!
First of all, thanks for open-sourcing this project.
I have a question about the resources needed to fine-tune a Gemma-3 model.
So far I have tried the following accelerators:
- Google Colab T4 GPU (not enough memory)
- Google Colab TPU v2-8 (slow; the Colab runtime gets disconnected)
- Kaggle P100 GPU (not enough memory)
- Kaggle T4 x 2 GPUs (currently running into issues with multi-GPU training; will investigate further)
I would like to know what training time you would estimate for a single epoch of full fine-tuning, and which base configuration you would suggest.
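For context, here is the rough arithmetic I used to sanity-check the out-of-memory failures above. This is only a minimal sketch: it assumes bf16 weights and gradients with fp32 Adam optimizer states, ignores activation memory entirely, and uses a 4B parameter count purely as an illustrative placeholder rather than the actual Gemma-3 size.

```python
# Back-of-the-envelope GPU memory estimate for a full fine-tune:
# weights + gradients + Adam optimizer states, ignoring activations.

def full_finetune_memory_gb(num_params: float,
                            weight_bytes: int = 2,          # bf16 weights
                            grad_bytes: int = 2,            # bf16 gradients
                            optim_bytes: int = 8) -> float:  # Adam m/v in fp32
    """Lower-bound estimate of device memory (GiB) needed for full fine-tuning."""
    total_bytes = num_params * (weight_bytes + grad_bytes + optim_bytes)
    return total_bytes / 1024**3

if __name__ == "__main__":
    # 4e9 is a placeholder parameter count, not the real model size.
    print(f"~{full_finetune_memory_gb(4e9):.0f} GiB before activations")
    # Well above the 16 GB on a single T4 or P100, which matches the OOM behaviour above.
```

If this estimate is roughly right, it would explain why the single-GPU runs fail regardless of batch size.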
Thanks for the project!