This is an updated fork of the learned_optimization library that works with the latest jax and Python 3.11. It also uses wandb in the summary writer for logging instead of tensorboard.
Install commands (tested on an RTX 8000 with CUDA 12.8):
# create new conda env
conda create -n glo python=3.11
# install pip packages
pip install "jax[cuda12]==0.6.2"
pip install dm-haiku==0.0.14
pip install optax==0.2.9
pip install tqdm==4.67.1
pip install absl-py==2.2.2
pip install --no-cache-dir tensorflow[and-cuda]
pip install flax==0.10.6
pip install tensorflow-datasets==4.9.8
pip install gin-config==0.5.0
pip install pandas==2.2.3
pip install seqio==0.0.19
pip install git+https://github.com/amoudgl/learned_optimization.git
pip install nvidia-cudnn-cu12==9.8.0.87
pip install tensorflow-probability==0.24.0
pip install tf-keras==2.19.0
pip install wandb==0.21.0
To enable logging of intermediate values in jit functions, install oryx from a recent git commit, since its pip release (v0.2.9) was last updated in December 2024 and is not compatible with the jax version used at the time of writing (v0.6.2, July 2025):
pip install git+https://github.com/jax-ml/oryx.git@2619298bbda423ffb0923d69acaeb1cccd7d7e44
The code runs even if you don't install oryx, except that intermediate values will not be logged by the summary writer.
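When oryx is installed, the mechanism behind this is oryx's harvest transformation: values are tagged inside jit-compiled code with sow and collected from the outside with reap. The snippet below is only a minimal sketch of that generic mechanism (the function, tag, and names are made up for illustration), not of this library's summary writer API:

import jax
import jax.numpy as jnp
from oryx.core import sow, reap

def net(x):
    h = jnp.tanh(x)
    # Tag an intermediate value so it can be collected from outside the jit.
    h = sow(h, tag="summary", name="hidden")
    return jnp.sum(h ** 2)

# reap transforms the function so it returns the tagged intermediates
# (a dict mapping "hidden" to its value) instead of its usual output.
intermediates = jax.jit(reap(net, tag="summary"))(jnp.ones(3))
print(intermediates)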
Original package README below:
learned_optimization is a research codebase for training, designing, evaluating, and applying learned optimizers, and for meta-training of dynamical systems more broadly. It implements hand-designed and learned optimizers, tasks to meta-train and meta-test them, and outer-training algorithms such as ES, PES, and truncated backprop through time.
To get started see our documentation.
Our documentation can also be run as colab notebooks! We recommend running these notebooks with a free accelerator (TPU or GPU) in colab (go to Runtime -> Change runtime type).
- Introduction
- Creating custom tasks
- Truncated steps
- Gradient estimators
- Meta training
- Custom learned optimizers
Simple, self-contained, learned optimizer example that does not depend on the learned_optimization library:
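For a rough sense of what such an example looks like, here is a minimal sketch in plain jax (not the notebook itself; the toy inner problem, the single learnable log-learning-rate, and all names are illustrative assumptions). A learned optimizer parameter is meta-trained by backpropagating through an unrolled inner optimization:

import jax
import jax.numpy as jnp

UNROLL_STEPS = 10

# Inner problem: a toy quadratic loss over a parameter vector w.
def inner_loss(w):
    return jnp.sum((w - 3.0) ** 2)

# "Learned optimizer": a single learnable log-learning-rate theta.
# Update rule: w <- w - exp(theta) * grad.
def apply_opt(theta, w, grad):
    return w - jnp.exp(theta) * grad

# Meta-loss: unroll the optimizer for UNROLL_STEPS steps on the inner
# problem and return the final inner loss.
def meta_loss(theta, w0):
    def step(w, _):
        g = jax.grad(inner_loss)(w)
        return apply_opt(theta, w, g), None
    w_final, _ = jax.lax.scan(step, w0, None, length=UNROLL_STEPS)
    return inner_loss(w_final)

# Meta-train theta with plain gradient descent by backpropagating
# through the unrolled inner optimization.
theta = jnp.array(-3.0)  # exp(-3) ~ 0.05, a small initial learning rate
w0 = jnp.zeros(5)
meta_grad = jax.jit(jax.grad(meta_loss))
for _ in range(200):
    theta = theta - 0.1 * meta_grad(theta, w0)
print("meta-learned learning rate:", float(jnp.exp(theta)))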
We strongly recommend using virtualenv to work with this package.
pip3 install virtualenv
git clone git@github.com:google/learned_optimization.git
cd learned_optimization
python3 -m venv env
source env/bin/activate
pip install -e .
To train a learned optimizer on a simple inner-problem, run the following:
python3 -m learned_optimization.examples.simple_lopt_train --train_log_dir=/tmp/logs_folder --alsologtostderr
This will first use tfds to download data, then start running. After a few minutes you should see numbers printed.
A tensorboard instance can be pointed at this directory to visualize results. Note that this will run very slowly without an accelerator.
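For example, assuming the log directory used in the command above:
tensorboard --logdir=/tmp/logs_folder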
File a GitHub issue! We will do our best to respond promptly.
Wrote a paper or blog post that uses learned_optimization? Add it to the list!
- Vicol, Paul, Luke Metz, and Jascha Sohl-Dickstein. "Unbiased gradient estimation in unrolled computation graphs with persistent evolution strategies." International Conference on Machine Learning (Best paper award). PMLR, 2021.
- Metz, Luke*, C. Daniel Freeman*, Samuel S. Schoenholz, and Tal Kachman. "Gradients are Not All You Need." arXiv preprint arXiv:2111.05803 (2021).
We locate test files next to the related source as opposed to in a separate tests/ folder.
Each test can be run directly, or with pytest (e.g. python3 -m pytest learned_optimization/outer_trainers/). Pytest can also be used to run all tests with python3 -m pytest, but this will take quite some time.
If something is broken please file an issue and we will take a look!
To cite this repository:
@inproceedings{metz2022practical,
  title = {Practical tradeoffs between memory, compute, and performance in learned optimizers},
  author = {Metz, Luke and Freeman, C Daniel and Harrison, James and Maheswaranathan, Niru and Sohl-Dickstein, Jascha},
  booktitle = {Conference on Lifelong Learning Agents (CoLLAs)},
  year = {2022},
  url = {http://github.com/google/learned_optimization},
}
learned_optimization is not an official Google product.