Added MVN_cdf.py #60
base: main
Conversation
Wrapper of scipy's `logcdf`, to calculate the log CDF of the multivariate normal distribution.
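For context, the scipy function being wrapped here is `scipy.stats.multivariate_normal.logcdf`. A minimal sketch of the underlying computation (the numbers are illustrative, not from the PR):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Log CDF of a 2D normal, evaluated at the upper limit `value`
mu = np.zeros(2)
cov = np.array([[1.0, 0.5],
                [0.5, 2.0]])
value = np.array([0.0, 1.0])

logcdf = multivariate_normal.logcdf(value, mean=mu, cov=cov)

# Sanity check: in 1D this reduces to the univariate normal logcdf
assert np.isclose(
    multivariate_normal.logcdf([0.0], mean=[0.0], cov=[[1.0]]),
    norm.logcdf(0.0),  # log(0.5)
    atol=1e-5,
)
```

The proposed `Op`'s `perform` method would run essentially this NumPy/SciPy computation on the concrete input values.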
I left comments for some issues I spotted at a first glance.

You can use Aesara's gradient-testing utility (`verify_grad`, if I remember the name correctly) to check that the gradient implementation is correct, and there is also a utility to check that `infer_shape` is correct.

After making sure the `Op` is implemented (and tested), you can create a dispatch function for the MvNormal logcdf, so that calling `pm.logcdf(MvNormal.dist(...), upper)` will return the right thing. Right now that link is not done yet, and it would raise a `NotImplementedError`. It will be something like:
```python
from aesara.tensor.random.basic import MvNormalRV
from aeppl.logprob import _logcdf

@_logcdf.register(MvNormalRV)
def mvnormal_logcdf(value, mu, cov):
    return ...
```
Getting closer, but some things are still off. I suggest you check the Aesara documentation on how to write a new `Op`, and some of the guides I mentioned in the comments below.
https://aesara.readthedocs.io/en/latest/extending/creating_an_op.html
```diff
  for i in range(len(mu)):
-     grad_.append(conditional_covariance(cov, mu, i, upper[i]))
+     grad_.append(conditional_covariance(cov, mu, i, value[i]))

- return np.array(grad_)
+ return np.array(grad_)*output_grads[0]
```
The returned gradients need to be symbolic Aesara expressions, not the result of NumPy computations. You might need a separate `Op` for the gradient. See examples here: https://www.pymc.io/projects/examples/en/latest/case_studies/blackbox_external_likelihood_numpy.html#blackbox_external_likelihood_numpy and https://www.pymc.io/projects/examples/en/latest/case_studies/wrapping_jax_function.html
I think for now it might be best not to implement the gradient method: there is no ready-made autograd support for SciPy's multivariate normal CDF. For now I can just try sampling with Metropolis-Hastings (MH).
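For reference, the `conditional_covariance` approach in the diff above reflects a standard identity: the partial derivative of the MVN CDF with respect to the i-th upper limit equals the marginal density of component i at that limit, times the CDF of the remaining components conditioned on it. A 2D sketch of that identity (function name and values are illustrative, not taken from this PR), checked against a finite difference of scipy's CDF:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def mvn_cdf_grad_component(x, mu, cov, i=0):
    """d/dx_i of the bivariate normal CDF at x, via the conditional identity:
    dF/dx_i = marginal pdf of x_i  *  CDF of x_j given x_i."""
    j = 1 - i
    # Marginal density of component i at x[i]
    marginal = norm.pdf(x[i], loc=mu[i], scale=np.sqrt(cov[i, i]))
    # Conditional distribution of component j given component i = x[i]
    cond_mean = mu[j] + cov[j, i] / cov[i, i] * (x[i] - mu[i])
    cond_var = cov[j, j] - cov[j, i] ** 2 / cov[i, i]
    return marginal * norm.cdf(x[j], loc=cond_mean, scale=np.sqrt(cond_var))

mu = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.3],
                [0.3, 1.5]])
x = np.array([0.2, -0.1])

analytic = mvn_cdf_grad_component(x, mu, cov, i=0)

# Central finite difference of scipy's CDF in the first coordinate
eps = 1e-3
fd = (multivariate_normal.cdf(x + np.array([eps, 0.0]), mean=mu, cov=cov)
      - multivariate_normal.cdf(x - np.array([eps, 0.0]), mean=mu, cov=cov)) / (2 * eps)
```

Even so, wiring this into a symbolic `grad` would still require a second `Op` as noted above, so skipping the gradient and using MH is a reasonable first step.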
```python
@_logcdf.register(MvNormalRV)
def mvnormal_logcdf(value, mu, cov):
    return pm.logcdf(MvNormal.dist(mu, cov), value)
```
Should be something like:

```diff
- @_logcdf.register(MvNormalRV)
- def mvnormal_logcdf(value, mu, cov):
-     return pm.logcdf(MvNormal.dist(mu, cov), value)
+ mvncdf = Mvncdf()
+
+ @_logcdf.register(MvNormalRV)
+ def mvnormal_logcdf(op, value, rng, size, dtype, mu, cov, **kwargs):
+     return mvncdf(value, mu, cov)
```
Which you can then test is working by calling `pm.logcdf(MvNormal.dist(mu, cov), value)`.
Is there any documentation that shows me how to do this? I tried searching for "dispatch function" but I have no idea how it works or what it does, so it's quite tricky for me to know how to fix this.
No documentation AFAIK, but you can see how we do it for some custom distributions: https://github.com/pymc-devs/pymc/blob/9b9826c586f04367b5027a0e472122ebcc636139/pymc/distributions/mixture.py#L359-L383

The idea is that the function should return an Aesara expression, which in your case is just the new `Op` you created.
Can compute CDF, now work on gradients
Wrapper of scipy's `logcdf`, to calculate the log CDF of the multivariate normal distribution @ricardoV94.

See the attached document for the calculation of the MVN CDF gradient: mvn_cdf.pdf