| Field | Value |
|---|---|
| title | Using Autodiff to Estimate Posterior Moments, Marginals and Samples |
| abstract | Importance sampling is a popular technique in Bayesian inference: by reweighting samples drawn from a proposal distribution, we are able to obtain samples and moment estimates from a Bayesian posterior over latent variables. Recent work, however, indicates that importance sampling scales poorly: to accurately approximate the true posterior, the required number of importance samples grows exponentially in the number of latent variables [Chatterjee and Diaconis, 2018]. Massively parallel importance sampling works around this issue by drawing … |
| openreview | QUMZJgrjN0 |
| software | |
| section | Papers |
| layout | inproceedings |
| series | Proceedings of Machine Learning Research |
| publisher | PMLR |
| issn | 2640-3498 |
| id | bowyer24a |
| month | 0 |
| tex_title | Using Autodiff to Estimate Posterior Moments, Marginals and Samples |
| firstpage | 394 |
| lastpage | 417 |
| page | 394-417 |
| order | 394 |
| cycles | false |
| bibtex_author | Bowyer, Sam and Heap, Thomas and Aitchison, Laurence |
| author | |
| date | 2024-09-12 |
| address | |
| container-title | Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence |
| volume | 244 |
| genre | inproceedings |
| issued | |
| extras | |
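The abstract describes reweighting proposal samples to estimate posterior moments. Below is a minimal sketch of plain self-normalized importance sampling, the baseline the abstract says scales poorly; it is not the paper's massively parallel method. The toy conjugate Gaussian model and all variable names are our own illustrative assumptions.

```python
# Minimal sketch of self-normalized importance sampling (illustrative only).
# Toy model: z ~ N(0, 1) prior, x | z ~ N(z, 1) likelihood; we observe x = 2.0.
# The exact posterior is N(x/2, 1/2), so the true posterior mean is 1.0.
import numpy as np

rng = np.random.default_rng(0)
x_obs = 2.0

def log_joint(z):
    # log p(z) + log p(x_obs | z) for the toy model above.
    log_prior = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
    log_lik = -0.5 * (x_obs - z)**2 - 0.5 * np.log(2 * np.pi)
    return log_prior + log_lik

# Proposal q(z) = N(0, 2^2), deliberately broader than the prior.
q_scale = 2.0
K = 10_000
z = rng.normal(0.0, q_scale, size=K)
log_q = -0.5 * (z / q_scale)**2 - np.log(q_scale) - 0.5 * np.log(2 * np.pi)

# Self-normalized importance weights, computed stably in log space.
log_w = log_joint(z) - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Posterior moment estimate: a weighted average of the proposal samples.
posterior_mean = np.sum(w * z)
print(posterior_mean)  # ~1.0 for this conjugate model
```

With a single latent variable this works well, but if the model had n independent latents the weights would multiply across dimensions and their variance would grow exponentially in n, which is the scaling failure the abstract attributes to Chatterjee and Diaconis [2018] and which the paper's massively parallel approach is designed to address.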