abstract: Detection of outliers is pivotal for any machine learning model deployed and operated in the real world. It is especially essential for deep neural networks, which have been shown to be overconfident on such inputs. Moreover, even deep generative models that allow estimation of the probability density of the input fail at this task. In this work, we concentrate on a specific type of these models: Variational Autoencoders (VAEs). First, we unveil a significant theoretical flaw in the assumption of the classical VAE model. Second, we enforce an accommodating topological property on the image of the deep neural mapping to the latent space, namely compactness, to alleviate the flaw and obtain the means to provably bound the image within determined limits by squeezing both inliers and outliers together. We enforce compactness using two approaches:
openreview: Knb5BZy-YrU
title: Vacant holes for unsupervised detection of the outliers in compact latent representation
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: glazunov23a
month: 0
tex_title: Vacant holes for unsupervised detection of the outliers in compact latent representation
firstpage: 701
lastpage: 711
page: 701-711
order: 701
cycles: false
bibtex_author: Glazunov, Misha and Zarras, Apostolis
author:
date: 2023-07-02
address:
container-title: Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence
volume: 216
genre: inproceedings
issued:
extras: