---
title: Weak learning convex sets under normal distributions
abstract: 'This paper addresses the following natural question: can efficient algorithms weakly learn convex sets under normal distributions? Strong learnability of convex sets under normal distributions is well understood, with near-matching upper and lower bounds given by Klivans et al. (2008), but prior to the current work nothing seems to have been known about weak learning. We essentially answer this question, giving near-matching algorithms and lower bounds. For our positive result, we give a poly(n)-time algorithm that can weakly learn the class of convex sets to advantage $\Omega(1/\sqrt{n})$ using only random examples drawn from the background Gaussian distribution. Our algorithm and analysis are based on a new “density increment” result for convex sets, which we prove using tools from isoperimetry. We also give an information-theoretic lower bound showing that $O(\log(n)/\sqrt{n})$ advantage is best possible even for algorithms that are allowed to make poly(n) many membership queries.'
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: de21a
month: 0
tex_title: Weak learning convex sets under normal distributions
firstpage: 1399
lastpage: 1428
page: 1399-1428
order: 1399
cycles: false
bibtex_author: De, Anindya and Servedio, Rocco
author:
- given: Anindya
  family: De
- given: Rocco
  family: Servedio
date: 2021-07-21
address:
container-title: Proceedings of Thirty Fourth Conference on Learning Theory
volume: '134'
genre: inproceedings
issued:
  date-parts:
  - 2021
  - 7
  - 21
pdf:
extras:
---