Releases: zuoxingdong/lagom
Minor updates
- JIT-enabled `LayerNormLSTM`: much faster than the raw implementation!
- Sync `examples/mdn` and `examples/VAE` to the latest API design.
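As a rough sketch of the technique behind the JIT-enabled `LayerNormLSTM`: layer normalization is applied to the gate pre-activations, and the cell is compiled with `torch.jit.script` so the pointwise gate math gets fused. The class name and gate layout below are assumptions for illustration, not lagom's actual implementation.

```python
from typing import Tuple

import torch
import torch.nn as nn


class LayerNormLSTMCell(nn.Module):
    """LSTM cell with layer normalization on the gate pre-activations.

    Illustrative sketch only; lagom's own LayerNormLSTM may differ.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # One linear map per stream, producing all four gates at once.
        self.ih = nn.Linear(input_size, 4 * hidden_size, bias=False)
        self.hh = nn.Linear(hidden_size, 4 * hidden_size, bias=False)
        # Layer norm on the pre-activations and on the new cell state.
        self.ln_ih = nn.LayerNorm(4 * hidden_size)
        self.ln_hh = nn.LayerNorm(4 * hidden_size)
        self.ln_c = nn.LayerNorm(hidden_size)

    def forward(self, x: torch.Tensor,
                state: Tuple[torch.Tensor, torch.Tensor]
                ) -> Tuple[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]:
        h, c = state
        gates = self.ln_ih(self.ih(x)) + self.ln_hh(self.hh(h))
        i, f, g, o = gates.chunk(4, dim=1)
        c = self.ln_c(torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g))
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)


# torch.jit.script compiles the cell, fusing the pointwise ops -- this is
# where the speedup over an eager per-step Python loop comes from.
cell = torch.jit.script(LayerNormLSTMCell(8, 16))
x = torch.randn(4, 8)
state = (torch.zeros(4, 16), torch.zeros(4, 16))
out, state = cell(x, state)
```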
Minor updates
- Added instructions on how to train `dm_control` environments with minimal modifications.
- Removed `AtariPreprocessing` because it has been merged into gym officially.
- Improved scripts & CI: use conda as much as possible for MKL-optimized packages (e.g. numpy/scipy).
RL Baselines stable release
Research-friendly (easy to read & modify) RL baselines
It currently contains the following algorithms:
- ES: CEM/CMA-ES/OpenAI-ES
- RL: VPG/PPO/DDPG/TD3/SAC
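Of the algorithms listed above, the cross-entropy method (CEM) is compact enough to sketch end to end: sample a population from a Gaussian, keep the top-scoring elites, and refit the Gaussian to them. The function below is a minimal illustrative implementation on a toy objective; the names and signature are hypothetical and do not reflect lagom's API.

```python
import numpy as np


def cem(f, mu, sigma, pop_size=50, elite_frac=0.2, iters=30, seed=0):
    """Cross-entropy method: maximize f by iteratively refitting a
    diagonal Gaussian to the elite fraction of each sampled population."""
    rng = np.random.default_rng(seed)
    n_elite = int(pop_size * elite_frac)
    for _ in range(iters):
        # Sample a population around the current mean.
        pop = rng.normal(mu, sigma, size=(pop_size, mu.size))
        scores = np.array([f(p) for p in pop])
        # Keep the highest-scoring samples and refit the Gaussian to them.
        elite = pop[np.argsort(scores)[-n_elite:]]
        mu = elite.mean(axis=0)
        sigma = elite.std(axis=0) + 1e-6  # floor to avoid premature collapse
    return mu


# Toy objective: negative squared distance, optimum at (1, -2).
best = cem(lambda p: -np.sum((p - np.array([1.0, -2.0])) ** 2),
           mu=np.zeros(2), sigma=np.ones(2))
```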
Breaking refactoring
A much easier and cleaner API.
alpha release
The major high-level designs and APIs have converged to stability in this release. Most modules are well tested.
Preview release
This is a preview release of lagom. For this version, see Basics in the README for a quick start, or play around directly with the examples. Full documentation is available online at http://lagom.readthedocs.io/.