2021-07-21-casgrain21a.md

---
title: "Optimizing Optimizers: Regret-optimal gradient descent algorithms"
abstract: "This paper treats the task of designing optimization algorithms as an optimal control problem. Using regret as a metric for an algorithm's performance, we study the existence, uniqueness and consistency of regret-optimal algorithms. By providing first-order optimality conditions for the control problem, we show that regret-optimal algorithms must satisfy a specific structure in their dynamics which we show is equivalent to performing \emph{dual-preconditioned gradient descent} on the value function generated by its regret. Using these optimal dynamics, we provide bounds on their rates of convergence to solutions of convex optimization problems. Though closed-form optimal dynamics cannot be obtained in general, we present fast numerical methods for approximating them, generating optimization algorithms which directly optimize their long-term regret. These are benchmarked against commonly used optimization algorithms to demonstrate their effectiveness."
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: casgrain21a
month: 0
tex_title: "Optimizing Optimizers: Regret-optimal gradient descent algorithms"
firstpage: 883
lastpage: 926
page: 883-926
order: 883
cycles: false
bibtex_author: Casgrain, Philippe and Kratsios, Anastasis
author:
- given: Philippe
  family: Casgrain
- given: Anastasis
  family: Kratsios
date: 2021-07-21
address:
container-title: Proceedings of Thirty Fourth Conference on Learning Theory
volume: 134
genre: inproceedings
issued:
  date-parts:
  - 2021
  - 7
  - 21
pdf:
extras:
---