2022
Lyapunov Exponents for Diversity in Differentiable Games
Jonathan Lorraine, Paul Vicol, Jack Parker-Holder, Tal Kachman, Luke Metz, Jakob Foerster
AAMAS 2022
2021
Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies
Paul Vicol, Luke Metz, Jascha Sohl-Dickstein
ICML 2021, Best Paper Award
Learn2Hop: Learned Optimization on Rough Landscapes
Amil Merchant, Luke Metz, Samuel S Schoenholz, Ekin D Cubuk
ICML 2021
On Linear Identifiability of Learned Representations
Geoffrey Roeder, Luke Metz, Diederik P. Kingma
ICML 2021, DeepMath Workshop 2019
Training Learned Optimizers with Randomly Initialized Learned Optimizers
Luke Metz, C. Daniel Freeman, Niru Maheswaranathan, Jascha Sohl-Dickstein
Reverse engineering learned optimizers reveals known and novel mechanisms
Niru Maheswaranathan, David Sussillo, Luke Metz, Ruoxi Sun, Jascha Sohl-Dickstein
NeurIPS 2021, NeurIPS MetaLearning Workshop 2020
2020
Parallel training of deep networks with local updates
Michael Laskin*, Luke Metz*, Seth Nabarro, Mark Saroufim, Badreddine Noune, Carlo Luschi, Jascha Sohl-Dickstein, Pieter Abbeel
Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian
Jack Parker-Holder*, Luke Metz, Cinjon Resnick, Hengyuan Hu, Adam Lerer, Alistair Letcher, Alexander Peysakhovich, Aldo Pacchiano, Jakob Foerster*
NeurIPS 2020
Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves
Luke Metz, Niru Maheswaranathan, C Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein
NeurIPS MetaLearning Workshop 2020
Using a thousand optimization tasks to learn hyperparameter search strategies
Luke Metz, Niru Maheswaranathan, Ruoxi Sun, C Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein
NeurIPS MetaLearning Workshop 2020
2019
Learning to Predict Without Looking Ahead: World Models Without Forward Prediction
C. Daniel Freeman, Luke Metz, David Ha
NeurIPS 2019
Understanding and correcting pathologies in the training of learned optimizers
Luke Metz, Niru Maheswaranathan, Jeremy Nixon, C. Daniel Freeman, Jascha Sohl-Dickstein
ICML 2019 (20 min oral), NeurIPS MetaLearning Workshop 2018
Guided evolutionary strategies: Augmenting random search with surrogate gradients
Niru Maheswaranathan, Luke Metz, George Tucker, Jascha Sohl-Dickstein
ICML 2019
Meta-Learning Update Rules for Unsupervised Representation Learning
Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein
ICLR 2019 (oral), COSYNE 2018 (oral), NeurIPS MetaLearning Workshop 2017
Towards GAN Benchmarks Which Require Generalization
Ishaan Gulrajani, Colin Raffel, Luke Metz
ICLR 2019
Using learned optimizers to make models robust to input noise
Luke Metz, Niru Maheswaranathan, Jonathon Shlens, Jascha Sohl-Dickstein, Ekin D. Cubuk
ICML Uncertainty and Robustness in Deep Learning Workshop 2019
2018
Adversarial Spheres
Justin Gilmer, Luke Metz, Fartash Faghri, Samuel S Schoenholz, Maithra Raghu, Martin Wattenberg, Ian Goodfellow
2017
Compositional Pattern Producing GAN
Luke Metz, Ishaan Gulrajani
NeurIPS Creativity Workshop 2017
Discrete Sequential Prediction of Continuous Actions for Deep RL
Luke Metz, Julian Ibarz, Navdeep Jaitly, James Davidson
BEGAN: Boundary Equilibrium Generative Adversarial Networks
David Berthelot, Tom Schumm, Luke Metz
Unrolled Generative Adversarial Networks
Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
ICLR 2017
2016
Unsupervised Representation Learning With Deep Convolutional Generative Adversarial Networks
Alec Radford, Luke Metz, Soumith Chintala
ICLR 2016