A Reinforcement Learning (RL) Training Framework for Legged Manipulation Robots
Go2Arm_Lab enables RL training for the Go2Arm robot:
- Base platform: Unitree Go2 quadruped
- Manipulator: Interbotix WidowX 250s robotic arm
Version compatibility
This repository currently depends on IsaacLab v2.2.0.
For IsaacLab v2.1.0 or v1.4.1, use the matching v2.1.0 or v1.4.1 release of this repository.
Gazebo deployment
If you want to deploy your policy in Gazebo, please use: Go2Arm_sim2sim
Demo videos: IsaacLab Simulation (v2.2) and Gazebo Simulation.
For more videos, please visit my Bilibili homepage.
Installation
- Follow the official guide to install IsaacLab v2.2.0.
- Clone this repository into the same directory as IsaacLab:
git clone https://github.com/zzzJie-Robot/Go2Arm_Lab.git
- Install the package using the Python interpreter that IsaacLab uses:
python -m pip install -e source/Go2Arm_Lab
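To confirm the editable install is visible to IsaacLab's interpreter, a quick check may help. Note that the module name `Go2Arm_Lab` below is an assumption based on the source layout; adjust it to the actual top-level package name under `source/Go2Arm_Lab` if it differs.

```python
import importlib.util

# NOTE: "Go2Arm_Lab" is assumed from the repository layout; replace it with
# the real top-level package name if it differs.
spec = importlib.util.find_spec("Go2Arm_Lab")
print("installed" if spec is not None else "not installed")
```

Run this with the same interpreter you used for `pip install -e`; if it reports "not installed", the package was installed into a different environment.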
Run reinforcement-learning training in headless mode for higher efficiency:
# Activate IsaacLab environment
conda activate your_isaaclab_env
# Go to Go2Arm_Lab
cd /path/to/Go2Arm_Lab
# Launch training (headless)
python scripts/rsl_rl/train.py --task Isaac-Go2Arm-Flat --headless
Deploy a trained policy in a single environment:
# Activate IsaacLab environment
conda activate your_isaaclab_env
# Go to Go2Arm_Lab
cd /path/to/Go2Arm_Lab
# Run inference
python scripts/rsl_rl/play.py --task Isaac-Go2Arm-Flat-Play --num_envs 1
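For deployment outside the simulator, the trained actor is ultimately just a function from observations to joint actions. The sketch below illustrates that inference step with a stand-in NumPy network; the observation/action dimensions and the tanh activation are assumptions for illustration only, and in practice you would load the policy exported by the play script (e.g. via `torch.jit.load`) rather than build weights by hand.

```python
import numpy as np

# Hypothetical dimensions: 12 quadruped leg joints + 6 arm joints; the
# observation size is an assumption for illustration only.
NUM_OBS = 48
NUM_ACTIONS = 18

rng = np.random.default_rng(0)

# Stand-in for the trained actor network (in a real deployment you would
# load the exported policy instead of random weights).
W1 = rng.standard_normal((NUM_OBS, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, NUM_ACTIONS)) * 0.1
b2 = np.zeros(NUM_ACTIONS)

def policy(obs: np.ndarray) -> np.ndarray:
    """One forward pass: observation -> joint-position action targets."""
    h = np.tanh(obs @ W1 + b1)   # hidden layer (actual activation varies by config)
    return h @ W2 + b2           # raw actions (e.g. joint position targets)

obs = np.zeros(NUM_OBS)          # e.g. a zeroed proprioceptive state
action = policy(obs)
print(action.shape)              # prints (18,)
```

A deployment loop would repeat this at the control frequency: read sensors into `obs`, call `policy`, and send the resulting targets to the joint controllers.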
The RL algorithm implementation in this project draws on the Deep-Whole-Body-Control project, to whose authors we extend our sincere gratitude.