| Field | Value |
| --- | --- |
| abstract | We study adaptive regret bounds in terms of the variation of the losses (the so-called path-length bounds) for both multi-armed bandits and, more generally, linear bandits. We first show that the seemingly suboptimal path-length bound of (Wei and Luo, 2018) is in fact not improvable for adaptive adversaries. Despite this negative result, we then develop two new algorithms: one that strictly improves over (Wei and Luo, 2018) with a smaller path-length measure, and another that improves over (Wei and Luo, 2018) for oblivious adversaries when the path-length is large. Our algorithms are based on the well-studied optimistic mirror descent framework, but importantly incorporate several novel techniques, including new optimistic predictions, a slight bias towards recently selected arms, and the use of a hybrid regularizer similar to that of (Bubeck et al., 2018). Furthermore, we extend our results to linear bandits by showing a reduction to obtaining dynamic regret for a full-information problem, followed by a further reduction to convex body chasing. As a consequence, we obtain new dynamic regret results as well as the first path-length regret bounds for general linear bandits. |
| section | contributed |
| title | Improved Path-length Regret Bounds for Bandits |
| layout | inproceedings |
| series | Proceedings of Machine Learning Research |
| id | bubeck19b |
| month | 0 |
| tex_title | Improved Path-length Regret Bounds for Bandits |
| firstpage | 508 |
| lastpage | 528 |
| page | 508-528 |
| order | 508 |
| cycles | false |
| bibtex_author | Bubeck, S{\'e}bastien and Li, Yuanzhi and Luo, Haipeng and Wei, Chen-Yu |
| author | |
| date | 2019-06-25 |
| address | |
| publisher | PMLR |
| container-title | Proceedings of the Thirty-Second Conference on Learning Theory |
| volume | 99 |
| genre | inproceedings |
| issued | |
| extras | |
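For context on the terminology in the abstract above, the "path-length" of a loss sequence is conventionally its total variation across rounds. The minimal LaTeX sketch below states this standard definition and the rough shape of a path-length regret bound; the symbols $\ell_t$ (round-$t$ loss vector), $K$ (number of arms), and the choice of the $\ell_\infty$-norm are illustrative assumptions, not details taken from this record.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Path-length (total variation) of a loss sequence \ell_1,\dots,\ell_T:
% the quantity that adaptive regret bounds of the kind discussed in the
% abstract are expressed in terms of, instead of the horizon T alone.
\[
  V_T \;=\; \sum_{t=2}^{T} \lVert \ell_t - \ell_{t-1} \rVert_{\infty}.
\]
% Roughly speaking, a path-length regret bound for a K-armed bandit
% then takes the form (up to logarithmic factors)
\[
  \mathbb{E}\bigl[\mathrm{Regret}_T\bigr]
  \;=\; \tilde{O}\!\bigl(\sqrt{K\,V_T}\bigr),
\]
% which can be much smaller than the worst-case \sqrt{KT} rate whenever
% the losses change slowly from round to round.
\end{document}
```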