A comprehensive ROS 2 platform for controlling the TidyBot++ mobile manipulator. This codebase integrates a policy learning pipeline into both simulated and real environments, and encapsulates all communication using ROS 2 nodes. Developed at the ROAHM Lab at the University of Michigan, Ann Arbor.
This project is an adaptation of the original TidyBot++ project (Wu et al.; full citation below) and other open-source projects.
- Zhuoyang Chen (janchen@umich.edu)
- Yuandi Huang (yuandi@umich.edu)
The TidyBot++ is a mobile manipulator consisting of:
- Mobile Base: Omnidirectional platform with holonomic drive
- Manipulator: Kinova Gen3 7-DOF robotic arm
- End Effector: Robotiq 2F-85 parallel gripper
- Sensors: RGBD wrist/base-mounted cameras, optional external cameras
A complete bill of materials and hardware setup guide is available in the original project documentation.
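Because the base is holonomic, a planar twist (vx, vy, ω) commanded in the world frame maps directly into the base frame with a 2D rotation. A minimal illustrative sketch of that transform (not the platform's actual controller code):

```python
import math

def world_to_base_twist(vx_w, vy_w, wz, theta):
    """Rotate a world-frame planar twist into the base frame.

    theta is the base heading in radians; the angular velocity wz is
    frame-invariant in 2D, so it passes through unchanged.
    """
    c, s = math.cos(theta), math.sin(theta)
    vx_b = c * vx_w + s * vy_w
    vy_b = -s * vx_w + c * vy_w
    return vx_b, vy_b, wz

# With the base heading at 90 deg, a world +x command becomes a base -y command.
print(world_to_base_twist(1.0, 0.0, 0.2, math.pi / 2))
```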
Using this codebase, we have successfully trained diffusion policies and finetuned vision-language-action models. Below are demonstrations of policies deployed in our simulated environment and on hardware:
| Diffusion policy in Gazebo Sim | Diffusion policy on hardware | VLA on hardware |
Robot model definition and simulation setup.
- URDF/XACRO files for complete robot description
- Configurations for Gazebo simulation
- Setup for RViz visualization
Low-level controllers for hardware.
- Kinova Kortex API integration
- Mobile base control interface
- Publishers for camera feeds
- Joint state publisher
Teleoperation and policy deployment.
- WebXR-based smartphone teleoperation
- Gamepad-based teleoperation
- Remote inference integration
- Real-time control command processing
- Episode recording integration
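Gamepad teleoperation typically maps stick axes to velocity commands after applying a deadband and a scale factor. A hedged sketch of that idea (the deadband and velocity limit values are illustrative, not the package's actual parameters):

```python
def axis_to_velocity(axis, deadband=0.1, max_vel=0.5):
    """Map a joystick axis value in [-1, 1] to a velocity command.

    Values inside the deadband return 0; the remaining range is rescaled
    so the command ramps smoothly from 0 to max_vel with no jump at the
    deadband edge.
    """
    if abs(axis) < deadband:
        return 0.0
    sign = 1.0 if axis > 0 else -1.0
    scaled = (abs(axis) - deadband) / (1.0 - deadband)
    return sign * scaled * max_vel

print(axis_to_velocity(0.05))  # inside deadband -> 0.0
print(axis_to_velocity(1.0))   # full deflection -> 0.5
```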
Motion planning and inverse kinematics. We integrate the original TidyBot++ WebXR teleoperation interface with MoveIt2.
- MoveIt2 integration for arm planning
- Real-time servo control capabilities
- Execution of trajectories on low-level controllers
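Real-time servo control has to respect per-joint velocity limits; a common approach scales the whole joint-velocity command uniformly so that the direction of motion is preserved. A minimal sketch of that idea (illustrative only, not MoveIt2's implementation):

```python
def scale_to_limits(q_dot, limits):
    """Uniformly scale a joint-velocity command so no joint exceeds its limit.

    Scaling the whole vector (rather than clamping each joint separately)
    preserves the commanded Cartesian direction of motion.
    """
    ratios = [abs(v) / lim for v, lim in zip(q_dot, limits)]
    worst = max(ratios)
    if worst <= 1.0:
        return list(q_dot)  # already within limits
    return [v / worst for v in q_dot]

# Joint 2 exceeds its limit by 2x, so the whole command is halved.
print(scale_to_limits([0.5, 2.0, -0.3], [1.0, 1.0, 1.0]))
```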
MoveIt2 configuration package (auto-generated from robot description).
- Motion planning configuration
- Kinematics solver setup
- Collision detection parameters
- Planning scene configuration
Data recording and replay system.
- ROS bag recording of teleoperation episodes
- Data conversion to HDF5 format or .parquet
- Dataset generation for policy training
Custom messages and services for teleoperation
- OS: Ubuntu 24.04 LTS
- Docker: Latest version (for containerized deployment)
Clone the repository:
git clone https://github.com/roahmlab/tidybot_platform.git
cd tidybot_platform
Build the workspace:
./docker/build.sh
Connect the CANivore USB module, the Kinova Gen3 arm, and the Orbbec camera to the dev machine.
Before launching the docker container:
# Setup the base camera access
sudo bash ./scripts/install_udev_rules.sh
sudo udevadm control --reload-rules && sudo udevadm trigger
Then run the container and set up the environment inside the container:
# Run the container
./docker/run.sh restart
# --------------------------Inside the container-------------------------- #
# Install the canivore-usb inside the container
sudo apt update && sudo apt install canivore-usb -y
# Create a python virtual environment
python3 -m venv --system-site-packages venv && source venv/bin/activate
# Install all required python packages
pip install -r requirements.txt
# Install pre-built OpenCV wheel
wget -O opencv_python-4.9.0.80-cp312-cp312-linux_x86_64.whl "https://www.dropbox.com/scl/fi/mzfz0i7qljmwzm5bm22td/opencv_python-4.9.0.80-cp312-cp312-linux_x86_64.whl?rlkey=rbero1ycahdk1d6jeixiu5p5l&st=k5dzfv6a&dl=0" && pip install --force-reinstall --no-deps opencv_python-4.9.0.80-cp312-cp312-linux_x86_64.whl && rm ./opencv_python-4.9.0.80-cp312-cp312-linux_x86_64.whl
# Install Kortex-API
wget https://artifactory.kinovaapps.com:443/artifactory/generic-public/kortex/API/2.6.0/kortex_api-2.6.0.post3-py3-none-any.whl && pip install ./kortex_api-2.6.0.post3-py3-none-any.whl && pip install protobuf==3.20.0 && rm ./kortex_api-2.6.0.post3-py3-none-any.whl
# Ignore venv when colcon build
touch venv/COLCON_IGNORE
# Source the environment
colcon build && source install/setup.bash && export PYTHONPATH=$PWD/venv/lib/python3.12/site-packages:$PYTHONPATH
To add the venv path to your PYTHONPATH automatically every time you start a new session, use:
echo 'export PYTHONPATH=$HOME/tidybot_platform/venv/lib/python3.12/site-packages:$PYTHONPATH' >> ~/.bashrc
Then, in another host terminal:
# Setup the CAN connection
sudo bash ./scripts/setup_docker_can.sh
ros2 launch tidybot_description launch_sim_robot.launch.py
- Launch the Gazebo simulation environment and publish corresponding topics for robot control and monitoring
- Launch RViz2 for robot state visualization and camera view
# Start hardware drivers
ros2 launch tidybot_driver launch_hardware_robot.launch.py mode:=full
- Launch RViz2 for robot state visualization and camera view
Serve the WebXR app for a phone or tablet to connect, and relay the web messages to the robot:
ros2 launch tidybot_policy launch_phone_policy.launch.py use_sim:=<true | false>
Connect an Xbox Series X gamepad and relay joystick messages to the robot:
ros2 launch tidybot_policy launch_gamepad_policy.launch.py use_sim:=<true | false>
Follow the previous section to launch a robot, then launch teleop with recording enabled:
ros2 launch tidybot_policy launch_phone_policy.launch.py use_sim:=<true | false> record:=true
This launch file starts a web server that prompts the user to save or discard the recorded episode after the episode ends. If the prompt does not appear, try refreshing the webpage.
The recorded episodes will be stored as ROS bags in the episode_bag folder in the current directory.
After data collection is done, run the converter node to convert all the rosbags into a single .hdf5 file:
ros2 run tidybot_episode rosbag_to_hdf5
The converted dataset will be saved as data.hdf5 in the current directory.
We recommend using the original TidyBot++ codebase to train a diffusion policy for the TidyBot platform; our platform generates datasets with a structure identical to the original project's.
The policy server in this part is adapted from the original TidyBot++ project. You can follow the instructions in the original documentation to set up the policy server on a GPU machine. Once the GPU server is running, set up an SSH tunnel from the dev machine to the GPU server:
ssh -L 5555:localhost:5555 <gpu-server-hostname>
Then launch the remote policy on the dev machine:
ros2 launch tidybot_policy launch_remote_policy_diffusion.launch.py use_sim:=<true | false>
This launch file also starts a web server for controlling policy inference: pressing the middle of the screen enables inference, and inference is paused whenever no user interaction is detected on the web page.
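The pause-on-inactivity behavior can be thought of as a simple watchdog: inference is enabled only while the most recent user interaction is recent enough. A hedged sketch of that logic (the class, method names, and timeout value are assumptions for illustration, not the actual implementation):

```python
import time

class InferenceWatchdog:
    """Enable policy inference only while user interaction is recent."""

    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_interaction = None  # no interaction seen yet

    def heartbeat(self):
        """Call whenever the web UI reports a user interaction."""
        self.last_interaction = time.monotonic()

    def inference_enabled(self):
        if self.last_interaction is None:
            return False
        return (time.monotonic() - self.last_interaction) < self.timeout_s

wd = InferenceWatchdog(timeout_s=0.2)
print(wd.inference_enabled())  # False: no interaction yet
wd.heartbeat()
print(wd.inference_enabled())  # True: interaction just happened
```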
- Modify src/tidybot_description/config/tidybot_controllers.yaml
- Adjust joint limits in src/tidybot_moveit_config/config/joint_limits.yaml
- Default WebXR port: 5000
- ROS domain ID can be set via the ROS_DOMAIN_ID environment variable
- CAN bus setup: ./setup_can.sh if running inside a Docker container
- Camera configuration in the driver package
- Build Failures
  # Clean and rebuild
  rm -rf build/ install/ log/
  ./build.sh
- Network Issues
- Ensure firewall allows port 5000
- Check ROS_DOMAIN_ID consistency
- Verify network connectivity between devices
- Hardware Connection Issues
- Check CAN bus connection: candump can0
- Docker Container Issues
- If the RViz / Gazebo simulation viewer does not launch, try to establish X11 permissions/Xauthority on the host before starting the container:
  xhost +SI:localuser:$(whoami)
  xhost +SI:localuser:root
  Then restart the container.
- If you are on Wayland, export an X display first:
  export DISPLAY=${DISPLAY:-:0}
- ROS logs: ~/.ros/log/
- Container logs: docker logs tidybot_platform
- Enable debug mode: set ROS_LOG_LEVEL=DEBUG
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
- The original Tidybot++ project
@inproceedings{wu2024tidybot,
  title     = {TidyBot++: An Open-Source Holonomic Mobile Manipulator for Robot Learning},
  author    = {Wu, Jimmy and Chong, William and Holmberg, Robert and Prasad, Aaditya and Gao, Yihuai and Khatib, Oussama and Song, Shuran and Rusinkiewicz, Szymon and Bohg, Jeannette},
  booktitle = {Conference on Robot Learning},
  year      = {2024}
}
- MoveIt2
- Kinova Gen3 ROS2
- OrbbecSDK ROS2
- Diffusion Policy