---
abstract: 'We show how to take any two parameter-free online learning algorithms with different regret guarantees and obtain a single algorithm whose regret is the minimum of the two base algorithms. Our method is embarrassingly simple: just add the iterates. This trick can generate efficient algorithms that adapt to many norms simultaneously, as well as provide diagonal-style algorithms that still maintain dimension-free guarantees. We then show how a variant of this idea yields a black-box procedure for generating \emph{optimistic} online learning algorithms. This yields the first optimistic regret guarantees in the unconstrained setting and generically increases adaptivity. Further, our optimistic algorithms are guaranteed to do no worse than their non-optimistic counterparts, regardless of the quality of the optimistic estimates provided to the algorithm.'
section: contributed
title: Combining Online Learning Guarantees
layout: inproceedings
series: Proceedings of Machine Learning Research
id: cutkosky19b
month: 0
tex_title: Combining Online Learning Guarantees
firstpage: 895
lastpage: 913
page: 895-913
order: 895
cycles: false
bibtex_author: Cutkosky, Ashok
author:
- given: Ashok
  family: Cutkosky
date: 2019-06-25
address:
publisher: PMLR
container-title: Proceedings of the Thirty-Second Conference on Learning Theory
volume: '99'
genre: inproceedings
issued:
  date-parts:
  - 2019
  - 6
  - 25
pdf:
extras:
---
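
A minimal sketch of the "just add the iterates" trick described in the abstract, assuming a hypothetical learner interface with `predict()` and `update(grad)` methods (the class and method names are illustrative, not from the paper): the combined learner plays the sum of the two base iterates and feeds the same subgradient to both.

```python
# Hedged sketch of the combination trick: combine two online learners
# by playing the sum of their predictions. Both base learners receive
# the same (sub)gradient feedback each round.


class AddIterates:
    """Combined learner whose iterate is the sum of two base iterates."""

    def __init__(self, learner_a, learner_b):
        self.learner_a = learner_a
        self.learner_b = learner_b

    def predict(self):
        # Play w_t = x_t + y_t, the sum of the two base iterates.
        return self.learner_a.predict() + self.learner_b.predict()

    def update(self, grad):
        # Feed the same subgradient g_t to both base learners.
        self.learner_a.update(grad)
        self.learner_b.update(grad)


class ToyLearner:
    """Toy stand-in for a base learner (for demonstration only; the
    paper's guarantees concern parameter-free learners, not this one)."""

    def __init__(self, lr):
        self.w, self.lr = 0.0, lr

    def predict(self):
        return self.w

    def update(self, grad):
        self.w -= self.lr * grad  # simple gradient step


combined = AddIterates(ToyLearner(0.1), ToyLearner(0.01))
for g in [1.0, -0.5, 0.3]:   # stream of subgradients
    w = combined.predict()   # play the summed iterate
    combined.update(g)       # both learners see the same feedback
```

Under the paper's conditions on the base algorithms, the summed iterate inherits (up to the base learners' regret at the origin) the better of the two regret guarantees; the toy loop above only illustrates the mechanics, not that guarantee.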