Describe the bug
Hi, I have been testing out the DenseHMM implementation in Pomegranate v1.0.3 for models with > 50 states, and have occasionally encountered a bug where initializing the model.edges matrix via model.add_edge results in an edges matrix containing NaNs. The NaNs then propagate through downstream calculations, including model.forward_backward and model.predict.
I tracked this down to the following snippet located here:
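(The snippet itself isn't quoted here; a sketch of the pattern I'm describing, with n standing in for the number of states, would be:)

```python
import torch

n = 64  # stand-in for the number of states
# allocate the edge matrix from uninitialized memory, then shift
# every entry to -inf by subtracting infinity
edges = torch.empty(n, n) - float("inf")
```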
where torch.empty sometimes returns a tensor containing NaNs, and NaN - float("inf") is still NaN. I think this is compounded by the fact that, in my testing, I don't manually set every entry of the edges matrix to a specific probability, so subsequent usage of model.edges propagates those NaNs.
To Reproduce
Since the bug (I think) comes from initializing a large array with torch.empty, the simplest way I have found to reproduce it is to use the above snippet with a large n (> 50) and then not fill in every edge.
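At the model level, a reproduction looks roughly like this (assuming the v1.0 add_distributions/add_edge API; the Categorical distributions are just placeholders):

```python
import torch
from pomegranate.distributions import Categorical
from pomegranate.hmm import DenseHMM

n = 64  # more than 50 states
dists = [Categorical([[0.5, 0.5]]) for _ in range(n)]

model = DenseHMM()
model.add_distributions(dists)

# wire up only a sparse chain, leaving most of the n x n edge matrix untouched
model.add_edge(model.start, dists[0], 1.0)
for i in range(n - 1):
    model.add_edge(dists[i], dists[i + 1], 1.0)

# on affected runs, untouched entries are NaN rather than -inf
print(torch.isnan(model.edges).any())
```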
The quickest fix I have found is to pre-set the matrix with torch.log(torch.zeros((n, n))).
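For illustration, that workaround produces a matrix that is deterministically -inf everywhere, with no NaNs; torch.full((n, n), float("-inf")) would be an equivalent alternative:

```python
import torch

n = 64
# log(0) is -inf for every entry, with no dependence on uninitialized memory
edges = torch.log(torch.zeros(n, n))
assert (edges == float("-inf")).all()
assert not torch.isnan(edges).any()
```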
I'm a big fan of the package, and thank you for all the effort you've put into developing it. Just wanted to put this on your radar.
Sorry for the delay and thanks for pointing this out. Would you mind posting a minimal reproducing script? I've been having some trouble reproducing the issue, but I also didn't spend that much time on it.
I think I may have hit this issue as well. The problem is that a number of the HMM matrices are initialised by allocating memory with torch.empty and then setting every entry to -inf by subtracting inf. If the uninitialised memory happens to contain any value x for which x - inf is NaN (i.e. NaN or +inf), you get NaN instead of -inf.
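A minimal sketch of the kind of script that demonstrates this (the tensor size is arbitrary):

```python
import torch

x = torch.empty(2000, 2000)        # uninitialized memory; contents are arbitrary
print(bool(torch.isnan(x).any()))  # non-deterministically True

y = torch.tensor(float("nan")) - float("inf")
print(bool(torch.isnan(y)))        # always True: NaN - inf is NaN
```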
The first print will non-deterministically be True and the second will always be True.