On a basic level, INLA is a scarier algorithm than MCMC because an INLA user has one fewer tool for checking correctness. With MCMC, you can jack the sample count up very high to check convergence; there is no analogous knob for INLA. Obviously some MCMC problems still fail even with millions of samples, but convergence tests like that catch many classes of errors and convergence failures.
So, to feel more comfortable using INLA, we'd like to have better tools for predicting error. In an ideal world, there would even be an iterative method for improving the INLA posterior... but that doesn't currently exist.
An ideal error predictor would be less expensive than INLA itself while simultaneously providing a useful upper bound on error.
Ideas:

- Compare the Hessian with the tensor of third derivatives at the mode, perhaps looking only at the diagonal of each to keep the comparison tractable. How expensive is it to compute higher-order derivatives with automatic differentiation? Could we even compute 4th or 5th derivatives? (See the first sketch after this list.)
- The quadratic approximation at the mode predicts the density at other points. Compare that prediction with reality. Is there some kind of very cheap sampling we could do with this idea that would provide a useful error estimate/integral? (See the second sketch after this list.)
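To get a feel for the cost question, here's a minimal sketch of the first idea. I'm assuming JAX as the autodiff framework, and `neg_log_posterior` and `mode` are toy stand-ins, not names from existing code here:

```python
import jax
import jax.numpy as jnp

# Toy stand-in for the model's negative log posterior. The cubic term makes
# the third derivatives nonzero so the comparison is non-trivial.
def neg_log_posterior(theta):
    return 0.5 * jnp.sum(theta**2) + 0.1 * jnp.sum(theta**3)

mode = jnp.zeros(3)  # pretend this is the already-located mode

# Each additional derivative order is just one more nested jacfwd, so 4th or
# 5th derivatives are mechanical to request. The catch is size: the k-th
# derivative tensor has d^k entries in d dimensions.
hess = jax.hessian(neg_log_posterior)(mode)               # shape (d, d)
third = jax.jacfwd(jax.hessian(neg_log_posterior))(mode)  # shape (d, d, d)

# Diagonal-only comparison keeps the output at O(d) entries.
d = mode.shape[0]
idx = jnp.arange(d)
hess_diag = hess[idx, idx]
third_diag = third[idx, idx, idx]

# Crude per-parameter skewness-vs-curvature ratio: |f'''| / (f'')^(3/2) is
# dimensionless, and large values flag directions where the quadratic
# (Gaussian) approximation is likely poor.
ratio = jnp.abs(third_diag) / hess_diag**1.5
print(ratio)
```

Note that this sketch still materializes the full `(d, d, d)` tensor before slicing; for large `d` we'd want directional derivatives along the coordinate axes instead.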
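And a similarly hedged sketch of the second idea: draw a few cheap samples from the Gaussian implied by the quadratic approximation and compare its predicted log density against the true log density at those points. It reuses `neg_log_posterior`, `mode`, and `hess` from the sketch above, so the same caveats apply:

```python
# Gaussian implied by the Laplace/quadratic approximation at the mode.
cov = jnp.linalg.inv(hess)
key = jax.random.PRNGKey(0)
samples = jax.random.multivariate_normal(key, mode, cov, shape=(32,))

f0 = neg_log_posterior(mode)

def quadratic_prediction(theta):
    # Second-order Taylor expansion around the mode; the gradient term is
    # dropped because it is ~0 at the mode.
    delta = theta - mode
    return f0 + 0.5 * delta @ hess @ delta

predicted = jax.vmap(quadratic_prediction)(samples)
actual = jax.vmap(neg_log_posterior)(samples)

# Mean absolute log-density discrepancy under the approximating Gaussian:
# a single cheap scalar that is ~0 when the posterior is exactly Gaussian.
print(jnp.mean(jnp.abs(predicted - actual)))
```

Whether 32 samples is enough, and whether mean absolute error is the right summary (versus, say, something importance-weighted that bounds the error in the integral), is exactly the open question here.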
tbenthompson changed the title from "Explore ideas for cheaply predicting INLA error." to "Compare a sequence of INLA approximations to confirm accuracy." on Jul 14, 2022.