franka_rl

This code is based on dm_robotics_panda by JeanElsner and rl_spin_decoupler by uncore-team.

The goal is to adapt a reinforcement learning environment with HIL (Hardware In the Loop) to Gymnasium's API, so that algorithm libraries such as Stable-Baselines3 (SB3) can be used.

Install

Clone the repo:

git clone https://github.com/uncore-team/franka_rl.git
cd franka_rl

Create a Python virtual environment and install the dependencies:

python3 -m venv .venv
source .venv/bin/activate
pip install dm_robotics_panda
pip install gymnasium
pip install stable-baselines3[extra]

You also need to add rl_spin_decoupler to your workspace (and add it to .gitignore):

cd franka_rl
git clone https://github.com/uncore-team/rl_spin_decoupler.git

Examples

The code based on rl_spin_decoupler uses sockets to communicate between two scripts. Open two terminals and run:

cd franka_rl
source .venv/bin/activate
cd test/side_to_side

On terminal 1:

python baselines_side.py

On terminal 2:

python panda_side.py --gui
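The idea behind the two-terminal setup is that one process (the learning side) exchanges actions and observations over a socket with the other process (the robot side). A minimal sketch of that pattern using only the standard library — this is an illustration of the decoupling idea, not the actual rl_spin_decoupler protocol; the message format and helper names here are made up:

```python
# Illustration of the two-process socket pattern: one side sends actions,
# the other replies with observations. Not the real rl_spin_decoupler protocol.
import json
import socket
import threading

HOST = "127.0.0.1"

def robot_side(server_sock):
    """Pretend robot loop: receive one action, reply with an observation."""
    conn, _ = server_sock.accept()
    with conn:
        action = json.loads(conn.recv(1024).decode())
        # Fake "physics": observation is a scaled copy of the command.
        obs = {"joint_pos": [a * 0.1 for a in action["cmd"]]}
        conn.sendall(json.dumps(obs).encode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=robot_side, args=(server,))
t.start()

# Learning side: send an action, block until the observation arrives.
with socket.create_connection((HOST, port)) as client:
    client.sendall(json.dumps({"cmd": [1.0, 2.0]}).encode())
    obs = json.loads(client.recv(1024).decode())

t.join()
server.close()
```

In the real setup each side lives in its own script (`baselines_side.py`, `panda_side.py`), which is why both terminals must be running for either to make progress.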

Create your own training environment

If you want to define your own environment (observation and action spaces, reward...), you can do so in a file task.py by creating a child class of the Task template.

About

Code for the Franka Research 3 robot (simulated, real, interface)
