| Field | Value |
|---|---|
| title | Neural tangent kernel at initialization: linear width suffices |
| tex_title | Neural tangent kernel at initialization: linear width suffices |
| abstract | In this paper we study the problem of lower bounding the minimum eigenvalue of the neural tangent kernel (NTK) at initialization, an important quantity for the theoretical analysis of training in neural networks. We consider feedforward neural networks with smooth activation functions. Without any distributional assumptions on the input, we present a novel result: we show that for suitable initialization variance, … |
| openreview | 98SHia4Hg1 |
| layout | inproceedings |
| genre | inproceedings |
| series | Proceedings of Machine Learning Research |
| container-title | Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence |
| publisher | PMLR |
| issn | 2640-3498 |
| id | banerjee23a |
| volume | 216 |
| firstpage | 110 |
| lastpage | 118 |
| page | 110-118 |
| order | 110 |
| month | 0 |
| cycles | false |
| bibtex_author | Banerjee, Arindam and Cisneros-Velarde, Pedro and Zhu, Libin and Belkin, Mikhail |
| date | 2023-07-02 |