field | value
---|---
abstract | Gibbs-ERM learning is a natural idealized model of learning with stochastic optimization algorithms (such as SGLD and, to some extent, SGD), while it also arises in other contexts, including PAC-Bayesian theory and sampling mechanisms. In this work we study the excess risk suffered by a Gibbs-ERM learner that uses a non-convex, regularized empirical risk, with the goal of understanding the interplay between the data-generating distribution and learning in large hypothesis spaces. Our main results are \emph{distribution-dependent} upper bounds on several notions of excess risk. We show that, in all cases, the distribution-dependent excess risk is essentially controlled by the \emph{effective dimension}
section | contributed
title | Distribution-Dependent Analysis of Gibbs-ERM Principle
layout | inproceedings
series | Proceedings of Machine Learning Research
id | kuzborskij19a
month | 0
tex_title | Distribution-Dependent Analysis of Gibbs-ERM Principle
firstpage | 2028
lastpage | 2054
page | 2028-2054
order | 2028
cycles | false
bibtex_author | Kuzborskij, Ilja and Cesa-Bianchi, Nicol\`{o} and Szepesv\'ari, Csaba
author | 
date | 2019-06-25
address | 
publisher | PMLR
container-title | Proceedings of the Thirty-Second Conference on Learning Theory
volume | 99
genre | inproceedings
issued | 
extras | 
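Since the abstract states that the distribution-dependent excess risk is controlled by an effective dimension, the sketch below recalls the standard notion of effective dimension used in learning theory. The symbols $\mathbf{H}$, $\lambda$, $d$, and $d_{\mathrm{eff}}$ are illustrative placeholders; the paper's own definition and notation may differ.

```latex
% A standard notion of effective dimension for a positive semi-definite
% matrix H (e.g., a Hessian or covariance matrix) at regularization level
% lambda > 0; illustrative notation, not necessarily the paper's.
\[
  d_{\mathrm{eff}}(\lambda)
  \;=\;
  \operatorname{tr}\!\bigl(\mathbf{H}\,(\mathbf{H} + \lambda \mathbf{I})^{-1}\bigr)
  \;=\;
  \sum_{i=1}^{d} \frac{\mu_i}{\mu_i + \lambda},
\]
% where mu_1, ..., mu_d >= 0 are the eigenvalues of H. Each term is close
% to 1 when mu_i >> lambda and close to 0 when mu_i << lambda, so
% d_eff(lambda) <= d counts the directions whose curvature exceeds the
% regularization level.
```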