---
title: Online Markov Decision Processes with Aggregate Bandit Feedback
abstract: We study a novel variant of online finite-horizon Markov Decision Processes
  with adversarially changing loss functions and initially unknown dynamics. In each
  episode, the learner suffers the loss accumulated along the trajectory realized
  by the policy chosen for the episode, and observes aggregate bandit feedback: the
  trajectory is revealed along with the cumulative loss suffered, rather than the
  individual losses encountered along the trajectory. Our main result is a computationally
  efficient algorithm with $O(\sqrt{K})$ regret for this setting, where $K$ is the
  number of episodes. We establish this result via an efficient reduction to a novel
  bandit learning setting we call Distorted Linear Bandits (DLB), which is a variant
  of bandit linear optimization where actions chosen by the learner are adversarially
  distorted before they are committed. We then develop a computationally efficient
  online algorithm for DLB for which we prove an $O(\sqrt{T})$ regret bound, where
  $T$ is the number of time steps. Our algorithm is based on online mirror descent
  with a self-concordant barrier regularization that employs a novel increasing
  learning rate schedule.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: cohen21a
month: 0
tex_title: Online Markov Decision Processes with Aggregate Bandit Feedback
firstpage: 1301
lastpage: 1329
page: 1301-1329
order: 1301
cycles: false
bibtex_author: Cohen, Alon and Kaplan, Haim and Koren, Tomer and Mansour, Yishay
author:
- given: Alon
  family: Cohen
- given: Haim
  family: Kaplan
- given: Tomer
  family: Koren
- given: Yishay
  family: Mansour
date: 2021-07-21
address:
container-title: Proceedings of Thirty Fourth Conference on Learning Theory
volume: '134'
genre: inproceedings
issued:
  date-parts:
  - 2021
  - 7
  - 21
pdf:
extras:
---