
Commit cd22d7c

danaaubakirova, pkooij, aliberts, and fracapuano authored and committed
fix(docs): SmolVLA fine-tuning getting started (#1201)
Co-authored-by: Pepijn <[email protected]>
Co-authored-by: danaaubakirova <[email protected]>
Co-authored-by: Simon Alibert <[email protected]>
Co-authored-by: Francesco Capuano <[email protected]>
Co-authored-by: Steven Palma <[email protected]>
1 parent 1afb913 commit cd22d7c

File tree

2 files changed: +97 -0 lines changed


docs/source/_toctree.yml

Lines changed: 4 additions & 0 deletions

@@ -14,6 +14,10 @@
   - local: hilserl_sim
     title: Train RL in Simulation
   title: "Tutorials"
+- sections:
+  - local: smolvla
+    title: Finetune SmolVLA
+  title: "Policies"
 - sections:
   - local: so101
     title: SO-101

docs/source/smolvla.mdx

Lines changed: 93 additions & 0 deletions (new file)

# Finetune SmolVLA

SmolVLA is Hugging Face’s lightweight foundation model for robotics. Designed for easy fine-tuning on LeRobot datasets, it helps accelerate your development!
<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/640e21ef3c82bd463ee5a76d/aooU0a3DMtYmy_1IWMaIM.png" alt="SmolVLA architecture." width="500"/>
  <br/>
  <em>Figure 1. SmolVLA takes as input (i) multiple camera views, (ii) the robot’s current sensorimotor state, and (iii) a natural language instruction. These inputs are encoded into contextual features that condition the action expert when it generates an action chunk.</em>
</p>
## Set Up Your Environment

1. Install LeRobot by following our [Installation Guide](./installation).
2. Install SmolVLA dependencies by running:

```bash
pip install -e ".[smolvla]"
```
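
Several commands in this guide read your Hub username from an `HF_USER` shell variable and push datasets or checkpoints to the Hugging Face Hub, so it is convenient to log in now. A minimal sketch, assuming `HUGGINGFACE_TOKEN` holds a token with write access:

```bash
# Authenticate with the Hugging Face Hub (assumes a write-access token).
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
# Store your Hub username for the commands used later in this guide.
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
```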

## Collect a dataset

SmolVLA is a base model, so fine-tuning on your own data is required for optimal performance in your setup.
We recommend recording ~50 episodes of your task as a starting point. Follow our guide to get started: [Recording a Dataset](https://huggingface.co/docs/lerobot/getting_started_real_world_robot#record-a-dataset)
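
Data collection uses the same `lerobot.record` entry point as the evaluation command at the end of this guide. As a rough sketch for an SO-101 leader/follower setup (the ports, camera index, repo id, and task description are placeholders to replace with your own):

```bash
python -m lerobot.record \
  --robot.type=so101_follower \
  --robot.port=/dev/ttyACM0 \
  --teleop.type=so101_leader \
  --teleop.port=/dev/ttyACM1 \
  --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \
  --dataset.repo_id=${HF_USER}/mydataset \
  --dataset.single_task="Grasp a lego block and put it in the bin." \
  --dataset.num_episodes=50
```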

<Tip>

In your dataset, make sure you have enough demonstrations for each variation you introduce (e.g. the cube positions on the table for a cube pick-and-place task).

For reference, we recommend checking out the dataset below, which was used in the [SmolVLA paper](https://huggingface.co/papers/2506.01844):

🔗 [SVLA SO100 PickPlace](https://huggingface.co/spaces/lerobot/visualize_dataset?path=%2Flerobot%2Fsvla_so100_pickplace%2Fepisode_0)

In this dataset, we recorded 50 episodes across 5 distinct cube positions, collecting 10 pick-and-place episodes per position. Repeating each variation several times helped the model generalize better. We tried a similar dataset with only 25 episodes, and it was not enough, leading to poor performance: data quality and quantity are definitely key.

Once your dataset is available on the Hub, you can use our fine-tuning script to adapt SmolVLA to your application.
</Tip>
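
Before launching a long fine-tuning run, it can also be worth replaying a few of your own episodes to confirm the recordings look right. One possible check, assuming your LeRobot version ships the dataset visualization script with these flags:

```bash
# Replay the first episode of your dataset locally
# (script name and flags may differ across LeRobot versions).
python lerobot/scripts/visualize_dataset.py \
  --repo-id ${HF_USER}/mydataset \
  --episode-index 0
```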

## Finetune SmolVLA on your data

Use [`smolvla_base`](https://hf.co/lerobot/smolvla_base), our pretrained 450M-parameter model, and fine-tune it on your data.
Training the model for 20k steps takes roughly 4 hours on a single A100 GPU. You should tune the number of steps based on your use case and the performance you observe.

If you don't have a GPU, you can train using our notebook on [![Google Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/lerobot/training-smolvla.ipynb)

Pass your dataset to the training script using `--dataset.repo_id`, as in the command below. If you first want to test your installation, you can instead point `--dataset.repo_id` at one of the datasets we collected for the [SmolVLA Paper](https://huggingface.co/papers/2506.01844), such as the SVLA SO100 PickPlace dataset linked above.

```bash
cd lerobot && python lerobot/scripts/train.py \
  --policy.path=lerobot/smolvla_base \
  --dataset.repo_id=${HF_USER}/mydataset \
  --batch_size=64 \
  --steps=20000 \
  --output_dir=outputs/train/my_smolvla \
  --job_name=my_smolvla_training \
  --policy.device=cuda \
  --wandb.enable=true
```

<Tip>
You can start with a small batch size and, if the GPU allows it, increase it incrementally as long as loading times remain short.
</Tip>

Fine-tuning is an art. For a complete overview of the available options, run

```bash
python lerobot/scripts/train.py --help
```
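
Long runs can get interrupted. In that case, training can usually be resumed from the last saved checkpoint by pointing the script at the run's stored config. A sketch, assuming the default output layout of the training command above (check `--help` for the exact flags in your version):

```bash
# Resume the run started above from its last checkpoint
# (the path follows LeRobot's default output layout).
python lerobot/scripts/train.py \
  --config_path=outputs/train/my_smolvla/checkpoints/last/pretrained_model/train_config.json \
  --resume=true
```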

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/640e21ef3c82bd463ee5a76d/S-3vvVCulChREwHDkquoc.gif" alt="Comparison of SmolVLA across task variations." width="500"/>
  <br/>
  <em>Figure 2: Comparison of SmolVLA across task variations. From left to right: (1) pick-place cube counting, (2) pick-place cube counting, (3) pick-place cube counting under perturbations, and (4) generalization on pick-and-place of the Lego block with real-world SO101.</em>
</p>

## Evaluate the finetuned model and run it in real-time

As when recording an episode, it is recommended that you be logged in to the Hugging Face Hub. You can follow the corresponding steps: [Record a dataset](./getting_started_real_world_robot#record-a-dataset).
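
The command below loads the policy from the Hub via `--policy.path`, so your fine-tuned checkpoint has to be uploaded first. One way to do this with the Hub CLI, assuming the default checkpoint layout of the training run above:

```bash
# Create a model repo (if needed) and upload the final checkpoint.
huggingface-cli upload ${HF_USER}/my_smolvla \
  outputs/train/my_smolvla/checkpoints/last/pretrained_model
```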
You can then run inference in your setup with:

```bash
# Adjust the port, robot id, and camera settings to match your hardware.
# Use the same task description you used when recording your dataset.
# --dataset.repo_id sets the name of the evaluation dataset on the HF Hub.
# --policy.path points at your fine-tuned model.
python -m lerobot.record \
  --robot.type=so101_follower \
  --robot.port=/dev/ttyACM0 \
  --robot.id=my_blue_follower_arm \
  --robot.cameras="{ front: {type: opencv, index_or_path: 8, width: 640, height: 480, fps: 30}}" \
  --dataset.single_task="Grasp a lego block and put it in the bin." \
  --dataset.repo_id=${HF_USER}/eval_DATASET_NAME_test \
  --dataset.episode_time_s=50 \
  --dataset.num_episodes=10 \
  --policy.path=${HF_USER}/FINETUNE_MODEL_NAME
```

Depending on your evaluation setup, you can configure the duration and the number of episodes to record for your evaluation suite.
