2021-07-21-chen21f.md

File metadata and controls

37 lines (37 loc) · 1.91 KB
---
title: "Impossible Tuning Made Possible: A New Expert Algorithm and Its Applications"
abstract: 'We resolve the long-standing "impossible tuning" issue for the classic expert problem and show that it is in fact possible to achieve regret $O\left(\sqrt{(\ln d)\sum_t \ell_{t,i}^2}\right)$ simultaneously for all experts $i$ in a $T$-round $d$-expert problem, where $\ell_{t,i}$ is the loss for expert $i$ in round $t$. Our algorithm is based on the Mirror Descent framework with a correction term and a weighted entropy regularizer. While natural, the algorithm has not been studied before and requires a careful analysis. We also generalize the bound to $O\left(\sqrt{(\ln d)\sum_t (\ell_{t,i}-m_{t,i})^2}\right)$ for any prediction vector $m_t$ that the learner receives, and recover or improve many existing results by choosing different $m_t$. Furthermore, we use the same framework to create a master algorithm that combines a set of base algorithms and learns the best one with little overhead. The new guarantee of our master algorithm allows us to derive many new results for both the expert problem and, more generally, Online Linear Optimization.'
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: chen21f
month: 0
tex_title: "Impossible Tuning Made Possible: A New Expert Algorithm and Its Applications"
firstpage: 1216
lastpage: 1259
page: 1216-1259
order: 1216
cycles: false
bibtex_author: Chen, Liyu and Luo, Haipeng and Wei, Chen-Yu
author:
- given: Liyu
  family: Chen
- given: Haipeng
  family: Luo
- given: Chen-Yu
  family: Wei
date: 2021-07-21
address:
container-title: Proceedings of Thirty Fourth Conference on Learning Theory
volume: '134'
genre: inproceedings
issued:
  date-parts:
  - 2021
  - 7
  - 21
pdf:
extras:
---
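The abstract only names the ingredients of the method (optimistic Mirror Descent over the simplex, a second-order correction term, per-expert tuning via a weighted entropy regularizer). The following Python snippet is a minimal illustrative sketch of that style of update, not the paper's actual algorithm: the placeholder learning rates `etas`, the default tuning, and the exact form of the correction are assumptions made for illustration, and achieving the stated bound simultaneously for all experts requires the paper's full construction and analysis.

```python
import numpy as np

def optimistic_corrected_hedge(losses, predictions=None, etas=None):
    """Sketch of an optimistic, correction-adjusted expert update.

    losses:      (T, d) array of per-round expert losses l_{t,i}.
    predictions: (T, d) array of optimistic guesses m_{t,i}; defaults to zeros,
                 recovering the plain second-order bound from the abstract.
    etas:        per-expert learning rates (a placeholder assumption; the paper
                 instead encodes per-expert tuning in a weighted entropy regularizer).
    """
    T, d = losses.shape
    if predictions is None:
        predictions = np.zeros_like(losses)
    if etas is None:
        etas = np.full(d, np.sqrt(np.log(d) / T))  # crude placeholder tuning

    cum = np.zeros(d)         # accumulated losses plus correction terms
    plays = np.zeros((T, d))
    for t in range(T):
        # Optimistic step: incorporate the prediction m_t before l_t is revealed.
        logits = -etas * (cum + predictions[t])
        logits -= logits.max()  # shift for numerical stability
        w = np.exp(logits)
        plays[t] = w / w.sum()
        # Correction term: penalize expert i by eta_i * (l_{t,i} - m_{t,i})^2,
        # the quantity the paper's regret bound adapts to.
        corr = etas * (losses[t] - predictions[t]) ** 2
        cum += losses[t] + corr
    return plays
```

With `predictions = losses` shifted by one round (i.e., $m_t = \ell_{t-1}$), the same sketch illustrates the "recover or improve existing results by choosing different $m_t$" point from the abstract.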