
RLtools


Paper on arXiv | Live demo (browser) | Documentation | Zoo | Studio

Documentation | Run tutorials on Binder | Run example on Colab
Join our Discord!

Trained on a 2020 MacBook Pro (M1) using RLtools SAC and TD3 (respectively)

Trained on a 2020 MacBook Pro (M1) using RLtools PPO/Multi-Agent PPO

Trained in 18s on a 2020 MacBook Pro (M1) using RLtools TD3

Benchmarks

Benchmarks of training the Pendulum swing-up using different RL libraries (PPO and SAC respectively)

Benchmarks of training the Pendulum swing-up on different devices (SAC, RLtools)

Benchmarks of the inference frequency for a two-layer [64, 64] fully-connected neural network across different microcontrollers (types and architectures).

Quick Start

Clone this repo, then build a Zoo example:

g++ -std=c++17 -O3 -ffast-math -I include src/rl/zoo/l2f/sac.cpp

Run it with ./a.out 1337 (the number is the random seed), then run ./tools/serve.sh to visualize the results. Open http://localhost:8000 and navigate to the ExTrack UI to watch the quadrotor fly.

  • macOS: Append -framework Accelerate -DRL_TOOLS_BACKEND_ENABLE_ACCELERATE for fast training (~4s on an M3)
  • Ubuntu: Install OpenBLAS with apt install libopenblas-dev and append -lopenblas -DRL_TOOLS_BACKEND_ENABLE_OPENBLAS (~6s on Zen 5); a combined example is shown below.
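For instance, on Ubuntu the two steps combine into the following command (a sketch that simply appends the flags above to the base command; run it from the repository root as before):

g++ -std=c++17 -O3 -ffast-math -I include src/rl/zoo/l2f/sac.cpp -lopenblas -DRL_TOOLS_BACKEND_ENABLE_OPENBLAS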

Algorithms

  • TD3: Pendulum, Racing Car, MuJoCo Ant-v4, Acrobot
  • PPO: Pendulum, Racing Car, MuJoCo Ant-v4 (CPU), MuJoCo Ant-v4 (CUDA)
  • Multi-Agent PPO: Bottleneck
  • SAC: Pendulum (CPU), Pendulum (CUDA), Acrobot

Projects Based on RLtools

Getting Started

⚠️ Note: Check out Getting Started in the documentation for a more thorough guide.

To get started implementing your own environment, please refer to rl-tools/example.

Documentation

The documentation is available at docs.rl.tools and consists of C++ notebooks. You can also run them locally to tinker around:

docker run -p 8888:8888 rltools/documentation

After running the Docker container, open the link that is displayed in the CLI (http://127.0.0.1:8888/...) in your browser and enjoy tinkering!

Chapters (with interactive notebooks where available):

  • Overview
  • Getting Started
  • Containers (Binder)
  • Multiple Dispatch (Binder)
  • Deep Learning (Binder)
  • CPU Acceleration (Binder)
  • MNIST Classification (Binder)
  • Deep Reinforcement Learning (Binder)
  • The Loop Interface (Binder)
  • Custom Environment (Binder)
  • Python Interface (Colab)

Python Interface

We provide Python bindings that are available as rltools through PyPI (the pip package index). Note that using Python Gym environments can slow down training significantly compared to native RLtools environments.

pip install rltools gymnasium

Usage:

from rltools import SAC
import gymnasium as gym
from gymnasium.wrappers import RescaleAction

seed = 0xf00d
def env_factory():
    # Create the environment and rescale its action space to [-1, 1]
    env = gym.make("Pendulum-v1")
    env = RescaleAction(env, -1, 1)
    env.reset(seed=seed)
    return env

# Configure SAC with the environment factory
sac = SAC(env_factory)
# Create the training state from the seed
state = sac.State(seed)

# Step the training loop until it signals completion
finished = False
while not finished:
    finished = state.step()
You can find more details in the Python Interface documentation and in the rl-tools/python-interface repository.

Embedded Platforms

Inference & Training

Inference

Naming Convention

We use snake_case for variables/instances, functions, and namespaces, and PascalCase for structs/classes. Compile-time constants use upper-case SNAKE_CASE.
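A minimal sketch to illustrate the convention (the names are hypothetical and not taken from the RLtools codebase):

namespace my_namespace{
    // compile-time constants: upper-case SNAKE_CASE
    constexpr int HIDDEN_DIM = 64;
    // structs/classes: PascalCase
    struct PolicyNetwork{
        // variables/instances: snake_case
        int input_dim;
    };
    // functions and namespaces: snake_case
    int output_size(const PolicyNetwork& policy_network){
        return HIDDEN_DIM + policy_network.input_dim;
    }
}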

Citing

When using RLtools in academic work, please cite our publication using the following BibTeX entry:

@article{eschmann_rltools_2024,
  author  = {Jonas Eschmann and Dario Albani and Giuseppe Loianno},
  title   = {RLtools: A Fast, Portable Deep Reinforcement Learning Library for Continuous Control},
  journal = {Journal of Machine Learning Research},
  year    = {2024},
  volume  = {25},
  number  = {301},
  pages   = {1--19},
  url     = {http://jmlr.org/papers/v25/24-0248.html}
}