cholesky factorization error on cpu #35
Comments
It appears to also have something to do with the number of samples. In my example I was using 100 samples, but if I increase that, the error goes away. I guess I need to keep my n_samples higher than n_components + n_features, or something like that. Still tricky to work around.
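As a standalone sketch of why the sample count matters (my own illustration, not this library's code): the empirical covariance of n samples has rank at most n − 1, so whenever n_samples ≤ n_features it cannot be positive-definite and `torch.linalg.cholesky` must fail on it.

```python
import torch

def cov_is_choleskyable(n_samples: int, n_features: int) -> bool:
    """Draw random data, form the empirical covariance, try to factorize it."""
    torch.manual_seed(0)
    x = torch.randn(n_samples, n_features, dtype=torch.float64)
    # The covariance of n samples has rank at most n - 1, so with
    # n_samples <= n_features it cannot be positive-definite.
    cov = torch.cov(x.T)
    try:
        torch.linalg.cholesky(cov)
        return True
    except RuntimeError:  # torch raises a RuntimeError subclass here
        return False

few = cov_is_choleskyable(n_samples=100, n_features=150)
many = cov_is_choleskyable(n_samples=1000, n_features=150)
print(few, many)  # prints: False True
```

With 100 samples in 150 dimensions the factorization fails, exactly as reported; raising the sample count well above the dimension makes the covariance well-conditioned and the error disappears.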
I have the same issue. It appears when using CPU or CUDA. Is there a way to fix it, or some rule for how to avoid it?
Hi there, see Line 299 in 23eaf64.
So changing to `double` will help alleviate the numerical issues. I made the change and made sure the input of the function is converted as well. However, this is only a temporary trick, and it does increase running time and memory consumption.
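To illustrate the float32-vs-double point with a minimal standalone sketch (not the library's code): a matrix that is barely positive-definite in float64 can round to an exactly singular one in float32, at which point `torch.linalg.cholesky` raises precisely this error.

```python
import torch

# A 2x2 matrix that is barely positive-definite: its determinant is
# 1 - (1 - 1e-9)^2, roughly 2e-9 > 0.
off = 1.0 - 1e-9

def try_cholesky(dtype: torch.dtype) -> bool:
    m = torch.tensor([[1.0, off], [off, 1.0]], dtype=dtype)
    try:
        torch.linalg.cholesky(m)
        return True
    except RuntimeError:
        return False

# In float32, the off-diagonal entry rounds to exactly 1.0, the matrix
# becomes singular, and the factorization fails; in float64 it succeeds.
print(try_cholesky(torch.float32), try_cholesky(torch.float64))  # False True
```

This is why switching the computation to `double` alleviates (but does not fundamentally remove) the problem: it pushes the rounding threshold down, at the cost of extra time and memory.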
There seems to be some limit between n_components and n_features. If I try to create a model with certain combinations of the two, it will fail with:

_LinAlgError: linalg.cholesky: The factorization could not be completed because the input is not positive-definite (the leading minor of order 22 is not positive-definite).

Reducing to n_features=98 will work, but the error returns if I raise n_components to 2. I am trying to work with many more features and components, potentially 1000+ features and an unknown but likely high number of components. Is there any workaround for this?
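One common workaround for this class of failure (a general technique, not something this library is confirmed to expose; the helper name below is my own) is to regularize each covariance by adding a small multiple of the identity to its diagonal before factorizing, escalating the jitter if the factorization still fails:

```python
import torch

def jittered_cholesky(mat: torch.Tensor, jitter: float = 1e-6,
                      max_tries: int = 5) -> torch.Tensor:
    """Cholesky with escalating diagonal jitter (hypothetical helper)."""
    eye = torch.eye(mat.shape[-1], dtype=mat.dtype, device=mat.device)
    for i in range(max_tries):
        try:
            return torch.linalg.cholesky(mat + (jitter * 10 ** i) * eye)
        except RuntimeError:
            continue
    raise RuntimeError("not positive-definite even after adding jitter")

# Rank-deficient covariance: 10 samples in 20 dimensions, so a plain
# torch.linalg.cholesky(cov) would fail.
torch.manual_seed(0)
x = torch.randn(10, 20, dtype=torch.float64)
cov = torch.cov(x.T)
L = jittered_cholesky(cov)  # succeeds with the smallest jitter level
print(torch.allclose(L @ L.T, cov + 1e-6 * torch.eye(20, dtype=torch.float64)))
```

The jitter biases the fitted covariances slightly, but it is the standard way to keep EM-style fits stable when n_features is large relative to the per-component sample count (scikit-learn's GaussianMixture does the equivalent via its `reg_covar` parameter).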