---
abstract: We study adaptive regret bounds in terms of the variation of the losses
  (the so-called path-length bounds) for both multi-armed bandit and more generally
  linear bandit. We first show that the seemingly suboptimal path-length bound of
  (Wei and Luo, 2018) is in fact not improvable for adaptive adversary. Despite this
  negative result, we then develop two new algorithms, one that strictly improves
  over (Wei and Luo, 2018) with a smaller path-length measure, and the other which
  improves over (Wei and Luo, 2018) for oblivious adversary when the path-length is
  large. Our algorithms are based on the well-studied optimistic mirror descent
  framework, but importantly with several novel techniques, including new optimistic
  predictions, a slight bias towards recently selected arms, and the use of a hybrid
  regularizer similar to that of (Bubeck et al., 2018). Furthermore, we extend our
  results to linear bandit by showing a reduction to obtaining dynamic regret for a
  full-information problem, followed by a further reduction to convex body chasing.
  As a consequence we obtain new dynamic regret results as well as the first
  path-length regret bounds for general linear bandit.
section: contributed
title: Improved Path-length Regret Bounds for Bandits
layout: inproceedings
series: Proceedings of Machine Learning Research
id: bubeck19b
month: 0
tex_title: Improved Path-length Regret Bounds for Bandits
firstpage: 508
lastpage: 528
page: 508-528
order: 508
cycles: false
bibtex_author: Bubeck, S{\'e}bastien and Li, Yuanzhi and Luo, Haipeng and Wei, Chen-Yu
author:
- given: Sébastien
  family: Bubeck
- given: Yuanzhi
  family: Li
- given: Haipeng
  family: Luo
- given: Chen-Yu
  family: Wei
date: 2019-06-25
address:
publisher: PMLR
container-title: Proceedings of the Thirty-Second Conference on Learning Theory
volume: '99'
genre: inproceedings
issued:
  date-parts:
  - 2019
  - 6
  - 25
pdf:
extras:
---
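
The paper itself is not part of this record, but as a rough illustration of the optimistic mirror descent framework the abstract builds on, here is a minimal Python sketch of optimistic exponential weights for the K-armed adversarial bandit, with the optimistic prediction set to each arm's most recently observed loss (the choice behind the path-length bound of Wei and Luo, 2018). The function name and the toy loss sequence are hypothetical, and the paper's actual algorithms add further ingredients (new optimistic predictions, a slight bias towards recently selected arms, and a hybrid regularizer) that this sketch omits.

```python
import numpy as np

def optimistic_omd_bandit(get_loss, K, T, eta=0.1, rng=None):
    """Optimistic mirror descent with a negative-entropy regularizer
    (exponential weights) on a K-armed adversarial bandit for T rounds.

    The optimistic prediction m is each arm's most recently observed
    loss, which is what gives path-length-type guarantees in the
    simplest setting. Returns the total realized loss. Hypothetical
    simplified sketch, not the paper's algorithm.
    """
    rng = rng or np.random.default_rng()
    L_hat = np.zeros(K)  # cumulative importance-weighted loss estimates
    m = np.zeros(K)      # optimistic guess: last observed loss per arm
    total = 0.0
    for t in range(T):
        # Play distribution: exponential weights on estimates plus prediction.
        z = -eta * (L_hat + m)
        z -= z.max()                 # stabilize the exponentials
        p = np.exp(z)
        p /= p.sum()
        arm = rng.choice(K, p=p)
        loss = get_loss(t, arm)      # adversary's loss, assumed in [0, 1]
        total += loss
        L_hat[arm] += loss / p[arm]  # importance-weighted estimator
        m[arm] = loss                # refresh the optimistic guess
    return total

# Toy usage: a slowly drifting loss sequence, the regime where
# path-length bounds are most informative.
if __name__ == "__main__":
    losses = lambda t, a: 0.5 + 0.4 * np.sin(0.01 * t + a)
    print(optimistic_omd_bandit(losses, K=5, T=10_000, eta=0.05))
```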