| Code Directory | Description |
|---|---|
| `robot_controllers` | Impedance controller for the UR5 robot arm |
| `box_picking_env` | Environment setup for the box-picking task |
| `vision` | Point-cloud-based encoders |
| `utils` | Point-cloud fusion and voxelization |
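As a rough illustration of what the point-cloud utilities do, here is a minimal fusion-and-voxelization sketch using Open3D; the function and its interface are hypothetical and not the repo's actual code.

```python
import open3d as o3d

def fuse_and_voxelize(clouds, extrinsics, voxel_size=0.005):
    """Fuse per-camera point clouds into one frame and voxel-downsample.

    clouds: list of (N, 3) float arrays, one per camera (placeholder interface).
    extrinsics: list of 4x4 camera-to-world transforms.
    voxel_size: voxel edge length in meters.
    """
    fused = o3d.geometry.PointCloud()
    for points, T in zip(clouds, extrinsics):
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
        pcd.transform(T)   # express this camera's points in the world frame
        fused += pcd       # concatenate into one cloud
    # All points falling into the same voxel are merged into their centroid.
    return fused.voxel_down_sample(voxel_size=voxel_size)
```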
- Follow the installation instructions in the official SERL repo.
- Check `envs` and either use the provided `box_picking_env` or set up a new environment using it as a template. (New environments have to be registered here; see the registration sketch after this list.)
- Use the config file to set all robot-arm-specific parameters, as well as gripper and camera info (config sketch below).
- Go to the box picking folder and modify the bash files `run_learner.py` and `run_actor.py`. If no images are used, set `camera_mode` to `none`. WandB logging can be deactivated by setting `debug` to `True` (flag sketch below).
- Record 20 demonstrations using `record_demo.py` in the same folder. Double-check that the `camera_mode` and all environment wrappers are identical to those in `drq_policy.py` (see the shared-constructor sketch below).
- Execute `run_learner.py` and `run_actor.py` simultaneously to start the RL training.
- To evaluate a policy, modify and execute `run_evaluation.py` with the desired checkpoint path and step (checkpoint-restore sketch below).
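Environment registration with gym might look like the following; the id and entry point are placeholders for your own module path, not names from this repo.

```python
from gym.envs.registration import register

# Placeholder id and entry point; point these at your own environment class.
register(
    id="UR5BoxPicking-v0",
    entry_point="box_picking_env.box_picking:BoxPickingEnv",
)
```

Once registered, the environment can be created anywhere with `gym.make("UR5BoxPicking-v0")`.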
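The config file bundles the robot-, gripper-, and camera-specific settings in one place. A hypothetical sketch of the kind of values it holds (all field names and values are placeholders, not the actual config schema):

```python
from dataclasses import dataclass, field

@dataclass
class EnvConfig:
    """Hypothetical config sketch; every field name here is a placeholder."""
    # Robot-arm specific parameters
    robot_ip: str = "192.168.1.10"     # UR5 controller address
    control_hz: float = 10.0           # control loop frequency
    # Gripper info
    gripper_port: str = "/dev/ttyUSB0"
    # Camera info: camera name -> serial number
    cameras: dict = field(default_factory=lambda: {"wrist": "000000000000"})
    camera_mode: str = "pointcloud"    # "none" disables image observations
```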
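The two flags mentioned above behave roughly like this absl-flags sketch; only the names `camera_mode` and `debug` come from the text, everything else is assumed:

```python
from absl import app, flags

FLAGS = flags.FLAGS
flags.DEFINE_string("camera_mode", "pointcloud",
                    "Observation mode; set to 'none' to train without images.")
flags.DEFINE_boolean("debug", False,
                     "If True, run without WandB logging.")

def main(_):
    if not FLAGS.debug:
        # wandb.init(...) would go here in the real scripts.
        print("WandB logging enabled")
    print("camera_mode =", FLAGS.camera_mode)

if __name__ == "__main__":
    app.run(main)
```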
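One way to keep `record_demo.py`, `drq_policy.py`, and the evaluation script consistent is a single shared environment constructor; this is only a suggestion, with placeholder wrapper names:

```python
import gym

def make_env(camera_mode="pointcloud"):
    """Build the env with one wrapper stack shared by all scripts.

    Reusing this constructor in record_demo.py and drq_policy.py guarantees
    that camera_mode and the wrapper order match between demo recording and
    training. The id and the commented-out wrappers are placeholders.
    """
    env = gym.make("UR5BoxPicking-v0")
    # env = ObservationWrapper(env, camera_mode=camera_mode)  # placeholder
    # env = ActionClipWrapper(env)                            # placeholder
    return env
```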
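Since SERL agents are Flax-based, restoring a checkpoint at a given step presumably looks something like the following; the path, step, and use of `flax.training.checkpoints` are assumptions, not the actual contents of `run_evaluation.py`:

```python
from flax.training import checkpoints

# Placeholder path and step; point these at your own training run.
params = checkpoints.restore_checkpoint(
    ckpt_dir="/path/to/checkpoints",
    target=None,    # with no template given, returns the raw parameter pytree
    step=20000,
)
```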
TODO:
- Improve README
- Add paper link
- Document how to use in a real setting