---
title: Offline Contextual Bandits with Overparameterized Models
abstract: 'Recent results in supervised learning suggest that while overparameterized models have the capacity to overfit, they in fact generalize quite well. We ask whether the same phenomenon occurs for offline contextual bandits. Our results are mixed. Value-based algorithms benefit from the same generalization behavior as overparameterized supervised learning, but policy-based algorithms do not. We show that this discrepancy is due to the \emph{action-stability} of their objectives. An objective is action-stable if there exists a prediction (action-value vector or action distribution) which is optimal no matter which action is observed. While value-based objectives are action-stable, policy-based objectives are unstable. We formally prove upper bounds on the regret of overparameterized value-based learning and lower bounds on the regret for policy-based algorithms. In our experiments with large neural networks, this gap between action-stable value-based objectives and unstable policy-based objectives leads to significant performance differences.'
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: brandfonbrener21a
month: 0
tex_title: Offline Contextual Bandits with Overparameterized Models
firstpage: 1049
lastpage: 1058
page: 1049-1058
order: 1049
cycles: false
bibtex_author: Brandfonbrener, David and Whitney, William and Ranganath, Rajesh and Bruna, Joan
author:
- given: David
  family: Brandfonbrener
- given: William
  family: Whitney
- given: Rajesh
  family: Ranganath
- given: Joan
  family: Bruna
date: 2021-07-01
address:
container-title: Proceedings of the 38th International Conference on Machine Learning
volume: '139'
genre: inproceedings
issued:
  date-parts:
  - 2021
  - 7
  - 1
pdf:
extras:
---