Port HIL SERL #644
Conversation
Added a few more comments
lerobot/common/utils/process.py
def signal_handler(signum, frame):
    logging.info("Shutdown signal received. Cleaning up...")
    shutdown_event.set()
    global shutdown_event_counter
Can you put this code in a class? I'd really prefer to not have `global`s ^^
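A minimal sketch of the class-based alternative being suggested here; the class, method, and attribute names are illustrative, not the ones ultimately adopted in the PR:

```python
import logging
import signal
import threading


class ShutdownHandler:
    """Keeps shutdown state on an instance instead of module-level globals."""

    def __init__(self, force_exit_after: int = 3):
        self.shutdown_event = threading.Event()
        self.signal_count = 0
        self.force_exit_after = force_exit_after

    def register(self) -> None:
        # Route SIGINT/SIGTERM to this instance's handler.
        signal.signal(signal.SIGINT, self._handle)
        signal.signal(signal.SIGTERM, self._handle)

    def _handle(self, signum, frame) -> None:
        logging.info("Shutdown signal received. Cleaning up...")
        self.shutdown_event.set()
        self.signal_count += 1
        if self.signal_count >= self.force_exit_after:
            # Illustrative: give up on graceful shutdown after repeated signals.
            raise SystemExit(1)
```

Registering one instance at process start-up would replace the module-level `shutdown_event` / `shutdown_event_counter` globals with instance attributes.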
@@ -9,6 +9,10 @@
    title: Getting Started with Real-World Robots
  - local: cameras
    title: Cameras
  - local: hilserl
We can refactor/reorganize the docs in an upcoming dedicated PR if not now
…ing variables and simplifying
for more information, see https://pre-commit.ci
…set_seed function for improved clarity
… a default seed value in set_random_seed fixture for consistency
…nalHandler (#1263) Co-authored-by: Steven Palma <[email protected]>
Given the size of this PR and our tight deadline for merging into main: once this PR lands in main, we should open tickets/PRs to address the unresolved conversations and to review the code more in depth. This also applies to #1263, which introduces last-minute changes in critical resource-management design, for which not all conversations were fully resolved either. Namely: #1263 (comment)
cc @AdilZouitine cc @michel-aractingi cc @helper2424
@imstevenpmwork sounds good. I also have one more: #1266. We can merge it after the hackathon 👍
LGTM, massive thanks and congrats to everyone involved 🥳
@@ -97,7 +98,8 @@ stretch = [
     "pyrender @ git+https://github.com/mmatl/pyrender.git ; sys_platform == 'linux'",
     "pyrealsense2>=2.55.1.6486 ; sys_platform != 'darwin'"
 ]
-test = ["pytest>=8.1.0", "pytest-cov>=5.0.0", "mock-serial>=0.0.1 ; sys_platform != 'win32'"]
+test = ["pytest>=8.1.0", "pytest-timeout>=2.4.0", "pytest-cov>=5.0.0", "pyserial>=3.5", "mock-serial>=0.0.1 ; sys_platform != 'win32'"]
+hilserl = ["transformers>=4.48", "gym-hil>=0.1.8", "protobuf>=5.29.3", "grpcio==1.71.0"]
Gamepad support introduces a dependency on `pygame` and `hid`, which don't seem to be explicitly declared in the `.toml` but are only pulled in transitively through `gym-hil`. This means that trying to use the gamepad without installing `gym-hil` will raise an import error.
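A hedged sketch of a guard that would make this failure mode explicit (the helper name and message are illustrative; the PR may handle it differently):

```python
def require_gamepad_deps() -> None:
    """Fail early with an actionable message if the optional gamepad deps are missing."""
    try:
        import hid  # noqa: F401  # currently pulled in transitively via gym-hil
        import pygame  # noqa: F401
    except ImportError as err:
        raise ImportError(
            "Gamepad control needs `pygame` and `hid`, which are not declared in the .toml; "
            "install them explicitly or install `gym-hil`, which depends on them."
        ) from err
```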
@AdilZouitine @michel-aractingi During data collection, why does the cube spawn in the same position in all episodes? Is there a config to randomize the cube position during reset?
Implementing HIL-SERL
This PR implements the HIL-SERL approach as described in the paper. HIL-SERL combines human-in-the-loop intervention with reinforcement learning to enable efficient learning from human demonstrations. The implementation includes:
- Reward classifier training with pretrained architecture: Added a lightweight classification head built on top of a frozen, pretrained image encoder from HuggingFace. The classifier processes robot camera images to predict rewards and supports binary and multi-class classification. The implementation includes metrics tracking with WandB (a minimal sketch is given right after this list).
- Environment configurations for HILSerlRobotEnv: Added configuration classes for the HIL environment, including `VideoRecordConfig`, `WrapperConfig`, `EEActionSpaceConfig`, and `EnvWrapperConfig`. These handle parameters for video recording, action space constraints, end-effector control, and environment-specific settings.
- SAC-based reinforcement learning algorithm: Implemented the Soft Actor-Critic (SAC) algorithm with configurable network architectures and optimization settings. The implementation includes actor and critic networks, policy configurations, temperature auto-tuning, and target network updates via exponential moving averages.
- Actor-learner architecture with efficient communication protocols: Added an actor server script that establishes the connection with the learner, creating queues for parameters, transitions, and interactions. Implemented a `LearnerService` class with gRPC for efficient streaming of parameters and transitions between components.
- Replay buffer for storing transitions: Added a `ReplayBuffer` class for storing and sampling transitions in reinforcement learning. It includes functions for random cropping and shifting of images, memory optimization, and batch sampling.
- End-effector control utilities: Implemented input controllers (`KeyboardController` and `GamepadController`) that generate motion deltas for robot control. Added utilities for finding joint and end-effector bounds, and for selecting regions of interest in images.
- Human intervention support: Added a `RobotEnv` class that wraps robot interfaces to provide a consistent API for policy evaluation with integrated human intervention. Created PyTorch-compatible action space wrappers for seamless integration with PyTorch tensors.
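A minimal sketch of the reward classifier described in the first item above, assuming a frozen HuggingFace vision backbone with a small trainable head on top. The checkpoint name (`facebook/convnext-tiny-224`), head sizes, and class count are illustrative placeholders, not the values used in the PR:

```python
import torch
from torch import nn
from transformers import AutoModel


class RewardClassifier(nn.Module):
    """Lightweight classification head on top of a frozen pretrained image encoder."""

    def __init__(self, encoder_name: str = "facebook/convnext-tiny-224", num_classes: int = 2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        for p in self.encoder.parameters():
            p.requires_grad = False  # keep the backbone frozen; only the head is trained
        hidden_dim = self.encoder.config.hidden_sizes[-1]  # ConvNeXt exposes per-stage sizes
        self.head = nn.Sequential(nn.Linear(hidden_dim, 256), nn.ReLU(), nn.Linear(256, num_classes))

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (B, 3, H, W) preprocessed pixel values
        with torch.no_grad():  # the frozen encoder needs no gradients
            features = self.encoder(pixel_values=images).pooler_output
        return self.head(features)  # logits; binary (num_classes=2) or multi-class
```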
Engineering Design Choices for HIL-SERL Implementation

Environment Abstraction and Entry Points
Currently, environment building for both simulation and real-robot training is embedded within `gym_manipulator.py`. This creates a clean interface for robot interaction. While this approach works well for our immediate needs, future discussions may consider consolidating all environment creation through a single entry point in `lerobot.common.envs.factory::make_env` for consistency across the codebase and better maintainability.
Gym Manipulator

The `gym_manipulator.py` script contains the main `RobotEnv` class, which defines a gym-based interface for the `Manipulator` robot class. It also contains a set of wrappers that can be stacked on top of `RobotEnv` to provide additional functionality needed for training: for example, `ImageCropResizeWrapper` crops the image to a region of interest and resizes it to a fixed size, `EEActionWrapper` converts the end-effector action space to joint position commands, and so on (a generic wrapper-stacking sketch is given at the end of this section).

The script contains three additional functions:
- `make_robot_env`: builds a gymnasium environment with the `RobotEnv` base and the requested wrappers.
- `record_dataset`: records an offline dataset of demonstrations by logging the robot's actions in the environment. This dataset can be used to train the reward classifier or as the offline dataset for RL.
- `replay_dataset`: replays a dataset, which is useful for debugging the action space on the robot.

You can record/replay a dataset by setting the `mode` and `dataset` arguments of `HILSerlRobotEnvConfig` in `lerobot/common/envs/configs.py` (more details in the guide).

Q: Why not use `control_robot.py` for collecting and replaying data?

A: Since we mostly use end-effector control and different teleoperation devices (gamepad, keyboard, or leader), it is more convenient to collect and replay data using the gym env interface in `gym_manipulator.py`. After PR #777 we might be able to seamlessly change the teleoperation device and action space; then we can revert to using `control_robot.py` for collecting and replaying data.
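To make the wrapper-stacking idea above concrete, here is a generic gymnasium sketch. It does not reproduce the PR's actual `RobotEnv`, `ImageCropResizeWrapper`, or `make_robot_env` signatures; the crop box, output size, and episode limit are placeholder values:

```python
import gymnasium as gym
import numpy as np


class CropResizeObservation(gym.ObservationWrapper):
    """Crop an image observation to a region of interest, then resize it (illustrative)."""

    def __init__(self, env, roi=(0, 0, 128, 128), out_shape=(84, 84)):
        super().__init__(env)
        self.roi, self.out_shape = roi, out_shape
        self.observation_space = gym.spaces.Box(0, 255, (*out_shape, 3), dtype=np.uint8)

    def observation(self, obs):
        x0, y0, x1, y1 = self.roi
        crop = obs[y0:y1, x0:x1]
        # Nearest-neighbour resize via indexing, to avoid extra dependencies.
        ys = np.linspace(0, crop.shape[0] - 1, self.out_shape[0]).astype(int)
        xs = np.linspace(0, crop.shape[1] - 1, self.out_shape[1]).astype(int)
        return crop[ys][:, xs]


def make_env(base_env: gym.Env) -> gym.Env:
    # Wrappers compose: each layer adds exactly one concern on top of the base env.
    env = CropResizeObservation(base_env)
    env = gym.wrappers.TimeLimit(env, max_episode_steps=200)
    return env
```

Each wrapper adds one concern (cropping/resizing, episode limits, action-space conversion, ...), which is the pattern `RobotEnv` relies on for training.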
Optional Dataset in TrainPipelineConfig

The `TrainPipelineConfig` class has been modified to make the dataset parameter optional. This reflects the reality that while imitation learning requires demonstration data, pure reinforcement learning algorithms can function without an offline dataset. It makes the training pipeline more versatile and better aligned with the various learning paradigms supported by HIL-SERL.
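Conceptually, the change amounts to giving the dataset field a `None` default; the definitions below are an illustrative sketch, not the exact ones in the PR:

```python
from dataclasses import dataclass


@dataclass
class DatasetConfig:
    repo_id: str  # illustrative placeholder field


@dataclass
class TrainPipelineConfig:
    # None means pure online RL: no offline/demonstration dataset is loaded.
    dataset: DatasetConfig | None = None
```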
Consolidation of Implementation Files

For `actor_server.py`, `learner_server.py`, and `gym_manipulator.py`, we deliberately chose to create larger, more comprehensive files rather than splitting functionality across multiple smaller files. While this approach goes against some code-organization principles, it significantly reduces the cognitive load required to understand these critical components. Each file represents a complete, coherent system with clear boundaries of responsibility.
Organization of Server-Side Components

We've placed multiple related files in the `lerobot/script/server` folder as a first step toward better organization. This groups related functionality for the actor-learner architecture. We're waiting for reviewer feedback before proceeding with further reorganization, to ensure our approach aligns with the project's overall structure.
MultiAdamConfig for Optimizer Management

We introduced the `MultiAdamConfig` class to simplify handling multiple optimizers. Reinforcement learning methods like SAC typically rely on several networks (actor, critic, temperature) that are optimized at different frequencies and with different hyperparameters.
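The idea can be sketched with plain `torch.optim` (this is not the `MultiAdamConfig` API, just the pattern it wraps): one Adam optimizer per network group, each with its own hyperparameters and update schedule.

```python
import torch
from torch import nn

# Illustrative stand-ins; in SAC these would be the policy, Q-networks, and log-temperature.
actor = nn.Linear(8, 2)
critic = nn.Linear(10, 1)
log_alpha = nn.Parameter(torch.zeros(()))

# One optimizer per group, each with its own learning rate.
optimizers = {
    "actor": torch.optim.Adam(actor.parameters(), lr=3e-4),
    "critic": torch.optim.Adam(critic.parameters(), lr=1e-3),
    "temperature": torch.optim.Adam([log_alpha], lr=3e-4),
}


def update(name: str, loss: torch.Tensor) -> None:
    # Each loss steps only its own optimizer, possibly at a different frequency.
    opt = optimizers[name]
    opt.zero_grad()
    loss.backward()
    opt.step()
```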
Gradient Flow Through Normalization

We removed the `torch.no_grad()` decorator from the normalization functions to allow gradients to flow through these operations. This is essential for end-to-end training, where normalized inputs need to contribute to the gradient computation. Without this change, backpropagation would be blocked at the normalization boundaries, preventing the model from learning to account for input normalization during training.
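A standalone illustration of why this matters (not the PR's normalization module): with `torch.no_grad()` the normalized tensor is detached from the autograd graph, so parameters upstream of the normalization receive no gradient.

```python
import torch

mean, std = torch.tensor(2.0), torch.tensor(4.0)
w = torch.tensor(1.5, requires_grad=True)
x = w * torch.tensor(10.0)  # some upstream computation that depends on a parameter

# Blocked: ops inside no_grad are not recorded, so backprop cannot reach `w` from here.
with torch.no_grad():
    x_blocked = (x - mean) / std
print(x_blocked.requires_grad)  # False

# Differentiable: the same arithmetic outside no_grad keeps the graph intact.
x_norm = (x - mean) / std
x_norm.backward()
print(w.grad)  # tensor(2.5) == 10 / std
```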
How it was tested

Reward with ManiSkill, training without offline data and human intervention.

Plots of the intervention rate and reward vs. time during one training run. We are able to train a policy to 100% success within 10-30 minutes.
Other videos using this implementation:
Training timelapse for a pick and lift task: https://www.youtube.com/watch?v=99sVWGECBas
Learning a policy with this implementation on a push cube task with the Piper X arm - https://www.youtube.com/watch?v=2pD1yhEvSgc
Learning a cube insertion task with the SO-100
IMG_5714.mov
How to check out & try it (for the reviewer) 😃
Documentation