learned_optimization is a research codebase for training, designing, evaluating, and applying learned optimizers, and for meta-training of dynamical systems more broadly. It implements hand-designed and learned optimizers, tasks to meta-train and meta-test them, and outer-training algorithms such as ES, PES, and truncated backprop through time.
To get started, see our documentation.
Our documentation can also be run as Colab notebooks! We recommend running these notebooks with a free accelerator (TPU or GPU) in Colab (go to Runtime -> Change runtime type).
- Introduction:
- Creating custom tasks (a minimal sketch follows this list):
- Truncated Steps:
- Gradient estimators:
- Meta training:
- Custom learned optimizers:
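As a taste of the custom-tasks notebook above, here is a minimal sketch of a hand-written task. It assumes the `tasks_base.Task` interface with `init` and `loss` methods; consult the notebook for the exact, current API, as details here may differ.

```python
import jax
import jax.numpy as jnp
from learned_optimization.tasks import base as tasks_base


class QuadraticTask(tasks_base.Task):
  """Toy inner-problem: minimize a 10-dimensional quadratic."""

  datasets = None  # This task needs no data.

  def init(self, key):
    # Sample the initial inner-parameters for this task.
    return jax.random.normal(key, shape=(10,))

  def loss(self, params, key, data):
    # Inner loss evaluated at the current inner-parameters.
    return jnp.sum(jnp.square(params))
```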
A simple, self-contained learned optimizer example that does not depend on the learned_optimization library:
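Below is a minimal sketch in that spirit, written for this README rather than taken from the notebook: a tiny learned optimizer with two meta-parameters, meta-trained by backpropagating through an unrolled inner loop on a toy quadratic. All names here are illustrative, not part of any library API.

```python
import jax
import jax.numpy as jnp


def inner_loss(params):
  # Toy inner-problem: a fixed quadratic with its minimum at 1.
  return jnp.sum((params - 1.0) ** 2)


def lopt_update(theta, params, grad):
  # The "learned optimizer": a per-parameter update rule whose
  # meta-parameters theta scale the raw and the squashed gradient.
  return params - theta[0] * grad - theta[1] * jnp.tanh(grad)


def meta_loss(theta, init_params, num_steps=10):
  # Unroll num_steps of inner optimization; return the final inner loss.
  def step(params, _):
    grad = jax.grad(inner_loss)(params)
    return lopt_update(theta, params, grad), None

  final_params, _ = jax.lax.scan(step, init_params, None, length=num_steps)
  return inner_loss(final_params)


theta = jnp.array([0.01, 0.0])  # Meta-parameters of the learned optimizer.
init_params = jnp.zeros(3)      # Inner-parameter initialization.

# Meta-train theta with plain gradient descent through the unrolled graph
# (truncated backprop through time, with a single short truncation).
meta_grad = jax.jit(jax.grad(meta_loss))
for _ in range(200):
  theta = theta - 0.01 * meta_grad(theta, init_params)

print("meta-learned theta:", theta)
print("final inner loss:", meta_loss(theta, init_params))
```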
We strongly recommend using virtualenv to work with this package.
pip3 install virtualenv
git clone git@github.com:google/learned_optimization.git
cd learned_optimization
python3 -m venv env
source env/bin/activate
pip install -e .
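As a quick sanity check that the install worked, try importing the package:

python3 -c "import learned_optimization"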
To train a learned optimizer on a simple inner-problem, run the following:
python3 -m learned_optimization.examples.simple_lopt_train --train_log_dir=/tmp/logs_folder --alsologtostderr
This will first use tfds (TensorFlow Datasets) to download data, then start training. After a few minutes you should see loss values being printed.
TensorBoard can be pointed at this directory to visualize results. Note that training will run very slowly without an accelerator.
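For example, using the log directory from the command above:

tensorboard --logdir=/tmp/logs_folder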
Have a question or need help? File a GitHub issue! We will do our best to respond promptly.
Did you write a paper or blog post that uses learned_optimization? Add it to the list!
- Vicol, Paul, Luke Metz, and Jascha Sohl-Dickstein. "Unbiased gradient estimation in unrolled computation graphs with persistent evolution strategies." International Conference on Machine Learning (Best paper award). PMLR, 2021.
- Metz, Luke*, C. Daniel Freeman*, Samuel S. Schoenholz, and Tal Kachman. "Gradients are Not All You Need." arXiv preprint arXiv:2111.05803 (2021).
We locate test files next to the related source, as opposed to in a separate tests/ folder.

Each test can be run directly, or with pytest (e.g. `python3 -m pytest learned_optimization/outer_trainers/`). Pytest can also be used to run all tests with `python3 -m pytest`, but this will take quite some time.
If something is broken, please file an issue and we will take a look!
To cite this repository:
@inproceedings{metz2022practical,
  title = {Practical tradeoffs between memory, compute, and performance in learned optimizers},
  author = {Metz, Luke and Freeman, C Daniel and Harrison, James and Maheswaranathan, Niru and Sohl-Dickstein, Jascha},
  booktitle = {Conference on Lifelong Learning Agents (CoLLAs)},
  year = {2022},
  url = {http://github.com/google/learned_optimization},
}
learned_optimization is not an official Google product.