Some collaborators have noted that the MI implementation in this library (here) is slightly slower and harder to tune than De Vos's implementation in TorchIR (https://github.com/BDdeVos/TorchIR/blob/main/torchir/metrics.py#L74). The TorchIR implementation also exposes the Parzen window width as a user-selectable parameter, whereas we derive it automatically from the number of bins by setting the Gaussian kernel's FWHM equal to the bin width. We could experiment with exposing this as well.
This issue is to track experiments and optimisation.
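For reference, here is a minimal sketch (not this library's actual code) of a Gaussian Parzen-window MI loss in PyTorch that illustrates the two options: deriving the window width from the bin width via FWHM = bin width, or accepting an explicit `sigma` as TorchIR does. The class name, parameter names, and intensity range are assumptions for illustration only.

```python
import math
import torch


class ParzenMILoss(torch.nn.Module):
    """Sketch of a differentiable MI loss using Gaussian Parzen windowing.

    If `sigma` is None, the window width is derived from the bin width by
    setting the Gaussian kernel's FWHM equal to the bin width; otherwise
    `sigma` is taken as a tunable parameter (as in TorchIR).
    """

    def __init__(self, num_bins=64, sigma=None, vmin=0.0, vmax=1.0):
        super().__init__()
        bin_width = (vmax - vmin) / num_bins
        # Bin centres spanning the assumed intensity range [vmin, vmax].
        centers = torch.linspace(vmin + 0.5 * bin_width, vmax - 0.5 * bin_width, num_bins)
        self.register_buffer("centers", centers)
        if sigma is None:
            # FWHM = bin_width  =>  sigma = bin_width / (2 * sqrt(2 * ln 2))
            sigma = bin_width / (2.0 * math.sqrt(2.0 * math.log(2.0)))
        self.sigma = sigma

    def _soft_hist_weights(self, x):
        # x: (B, N) flattened intensities -> (B, N, num_bins) Gaussian weights
        diff = x.unsqueeze(-1) - self.centers
        w = torch.exp(-0.5 * (diff / self.sigma) ** 2)
        return w / (w.sum(dim=-1, keepdim=True) + 1e-10)  # normalise per voxel

    def forward(self, fixed, moving):
        # Flatten spatial dimensions: (B, C, *spatial) -> (B, N)
        b = fixed.shape[0]
        x = fixed.reshape(b, -1)
        y = moving.reshape(b, -1)
        wx = self._soft_hist_weights(x)  # (B, N, bins)
        wy = self._soft_hist_weights(y)  # (B, N, bins)
        # Joint and marginal probability estimates from the soft histograms
        p_xy = torch.bmm(wx.transpose(1, 2), wy) / x.shape[1]  # (B, bins, bins)
        p_x = p_xy.sum(dim=2, keepdim=True)
        p_y = p_xy.sum(dim=1, keepdim=True)
        # Mutual information; negated so that minimising the loss maximises MI
        mi = (p_xy * torch.log((p_xy + 1e-10) / (p_x * p_y + 1e-10))).sum(dim=(1, 2))
        return -mi.mean()
```

With a fixed `num_bins`, exposing `sigma` lets users trade smoothness of the loss surface against histogram resolution independently, which may be the tuning flexibility the TorchIR version offers.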
qiuhuaqi changed the title from "Optimisation of MI loss" to "Optimisation of MI loss implementation" on Mar 1, 2024