JoyRL is an open-source reinforcement learning framework built mainly on Torch. It aims to let users train RL projects with nothing more than plug-and-play parameter tuning, keeping them away from tedious boilerplate code, and the source is thoroughly commented to help beginners get started.

This repository is the offline version of JoyRL, which makes it easier to study and customize the algorithm code; a companion online version of JoyRL with a higher level of integration is also available.

Python 3.8 and gymnasium==0.28.1 are currently supported.
Download the code:

```bash
git clone https://github.com/johnjim0816/joyrl-offline
```

Create a Conda environment (requires Anaconda to be installed first):

```bash
conda create -n joyrl python=3.8
conda activate joyrl
```

Install Torch:
```bash
# CPU
conda install pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cpuonly -c pytorch
# GPU
conda install pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cudatoolkit=11.3 -c pytorch -c conda-forge
# GPU, installed via pip from the cu113 wheel index (mirror-friendly)
pip install torch==1.10.0+cu113 torchvision==0.11.0+cu113 torchaudio==0.10.0 --extra-index-url https://download.pytorch.org/whl/cu113
```
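To verify that Torch installed correctly (and, for the GPU build, that CUDA is visible), a quick check is:

```bash
# prints the installed version and whether a CUDA device is available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```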
Install the other dependencies:

```bash
pip install -r requirements.txt
# or via the Tsinghua mirror
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
```

To train, modify the parameters in the `config.config.GeneralConfig()` class and in the config of the corresponding algorithm, e.g. `algos/DQN/config.py`, then run:
```bash
python main.py
```

After running, a `tasks` folder is created automatically in the project directory to store models and results.
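As a rough illustration, such a tweak might look like the sketch below; the attribute names (`env_name`, `algo_name`, etc.) are assumptions for this sketch, so check the real `GeneralConfig` class and the algorithm's own `config.py` for the exact fields:

```python
# Sketch of editing config/config.py (hypothetical attribute names;
# see the real GeneralConfig class for the exact fields).
class GeneralConfig:
    def __init__(self):
        self.env_name = "CartPole-v1"  # gymnasium environment to train on
        self.algo_name = "DQN"         # algorithm folder under algos/ to use
        self.device = "cpu"            # "cpu" or "cuda"
        self.seed = 1                  # random seed for reproducibility
```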
Alternatively, create a new YAML file to customize the parameters, e.g. `config/custom_config_Train.yaml`, then run:

```bash
python main.py -c config/custom_config_Train.yaml
```

Some preset YAML files are provided in the `presets` folder, and corresponding trained results are saved in the `benchmarks` folder.
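As a rough illustration, a custom file such as `config/custom_config_Train.yaml` might look like the sketch below; the key names are assumptions modeled on the config classes above, and the files under `presets` show the actual schema:

```yaml
# Sketch of a custom config (hypothetical keys; see presets/ for real examples)
general_cfg:
  env_name: CartPole-v1  # environment to train on
  algo_name: DQN         # algorithm to run
  device: cpu            # cpu or cuda
  seed: 1                # random seed
algo_cfg:
  lr: 0.0001             # learning rate
  gamma: 0.99            # discount factor
```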
See envs for environment descriptions.
Traditional reinforcement learning algorithms:

| Algorithm Type | Algorithm Name | Reference | Author | Notes |
|---|---|---|---|---|
|  | Monte Carlo | RL introduction | johnjim0816 |  |
|  | Value Iteration | RL introduction | guoshicheng |  |
| Off-policy | Q-learning | RL introduction | johnjim0816 |  |
| On-policy | Sarsa | RL introduction | johnjim0816 |  |
Deep reinforcement learning algorithms:

| Algorithm Type | Algorithm Name | Reference | Author | Notes |
|---|---|---|---|---|
| Value-based | DQN | DQN Paper | johnjim0816, guoshicheng (CNN) |  |
|  | DoubleDQN | DoubleDQN Paper | johnjim0816 |  |
|  | Dueling DQN |  | johnjim0816 |  |
|  | PER_DQN | PER_DQN Paper | wangzhongren, johnjim0816 |  |
|  | NoisyDQN | NoisyDQN Paper | wangzhongren |  |
|  | C51 | C51 Paper |  | also called Categorical DQN |
|  | Rainbow DQN | Rainbow Paper | wangzhongren |  |
| Policy-based | REINFORCE | REINFORCE Paper | johnjim0816 | the most basic policy gradient algorithm |
|  | A2C | A2C blog | johnjim0816 |  |
|  | A3C | A3C paper | johnjim0816, Ariel Chen |  |
|  | GAE |  |  |  |
|  | ACER |  |  |  |
|  | TRPO | TRPO Paper |  |  |
|  | PPO | PPO Paper | johnjim0816, Wen Qiu | PPO-clip, PPO-kl |
|  | DDPG | DDPG Paper | johnjim0816 |  |
|  | TD3 | TD3 Paper | johnjim0816 |  |
More algorithms:

| Algorithm Type | Algorithm Name | Reference | Author | Notes |
|---|---|---|---|---|
| MaxEntropy RL | SoftQ | SoftQ Paper | johnjim0816 |  |
|  | SAC |  |  |  |
| Distributional RL | C51 | C51 Paper |  | also called Categorical DQN |
|  | QRDQN | QRDQN Paper |  |  |
| Offline RL | CQL | CQL Paper | Ariel Chen |  |
|  | BCQ |  |  |  |
| Multi-Agent | IQL | IQL Paper |  |  |
|  | VDN | VDN Paper |  |  |
|  | QTRAN |  |  |  |
|  | QMIX | QMIX Paper |  |  |
|  | MAPPO |  |  |  |
|  | MADDPG |  |  |  |
| Sparse reward | Hierarchical DQN | H-DQN Paper |  |  |
|  | ICM | ICM Paper |  |  |
|  | HER | HER Paper |  |  |
| Imitation Learning | GAIL | GAIL Paper | Yi Zhang |  |
|  | TD3+BC | TD3+BC Paper |  |  |
| Model based | Dyna Q | Dyna Q Paper | guoshicheng |  |
| Multi-Objective RL | MO-Qlearning | MO-QLearning Paper | curryliu30 |  |
Supported environments:

| Environment Type | Environment Name | Author | Algorithms |
|---|---|---|---|
| Toy Text | Blackjack-v1 |  |  |
| Classic Control | Acrobot |  |  |
|  | CartPole-v1 | johnjim0816 | DQN, Double DQN, Dueling DQN, REINFORCE, A2C, A3C |
|  |  | wangzhongren | PER DQN |
|  | MountainCar-v0 | GeYuhong | DQN |
|  | MountainCarContinuous |  |  |
|  | Pendulum |  |  |
| Box2D | BipedalWalker-v3 | scchy | DDPG |
|  | LunarLander-v2 | FinnJob | PPO |
|  | LunarLanderContinuous-v2 | MekeyPan | SAC |
|  | Car Racing |  |  |
| MuJoCo | Ant-v4 |  |  |
|  | HalfCheetah-v4 |  |  |
|  | Hopper-v4 |  |  |
| Atari | Breakout |  |  |
|  | Pong |  |  |
|  | Tennis |  |  |
| Multi-Agent Env |  |  |  |
| External Env | Mario |  |  |
To demonstrate JoyRL's reliability, we compare it against some classic frameworks.
MuJoCo:

| Algorithm | Framework | Ant | HalfCheetah | Hopper |
|---|---|---|---|---|
| DQN | JoyRL |  |  |  |
|  | Dopamine |  |  |  |
|  | OpenAI Baselines |  |  |  |
Atari:

| Algorithm | Framework | Breakout | Pong | Enduro |
|---|---|---|---|---|
| DQN | JoyRL |  |  |  |
|  | Dopamine |  |  |  |
|  | OpenAI Baselines |  |  |  |
See the contribution guidelines.