Hello, thanks for providing your code.
I have a question about `LPBNN_layers`. Line 75 is:

```python
embedded_mean, embedded_logvar = self.encoder_fcmean(embedded), self.encoder_fcmean(embedded)
```

Should this not be:

```python
embedded_mean, embedded_logvar = self.encoder_fcmean(embedded), self.encoder_fcvar(embedded)
```
As written, the code appears to enforce that the mean and log-variance are identical, since both are produced by `self.encoder_fcmean`. This bug is present in every layer defined in `LPBNN_layers`.
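To make the suggested fix concrete, here is a minimal, hedged sketch of a VAE-style embedding encoder with two distinct linear heads for the mean and log-variance. The module and dimension names are hypothetical and do not reflect the repository's actual code; only the attribute names `encoder_fcmean` and `encoder_fcvar` are taken from the issue above.

```python
import torch
import torch.nn as nn

class EmbeddingEncoder(nn.Module):
    """Minimal VAE-style encoder sketch (hypothetical, for illustration):
    two *separate* linear heads produce the mean and the log-variance."""

    def __init__(self, embed_dim: int, latent_dim: int):
        super().__init__()
        self.encoder_fcmean = nn.Linear(embed_dim, latent_dim)
        self.encoder_fcvar = nn.Linear(embed_dim, latent_dim)  # distinct head

    def forward(self, embedded: torch.Tensor):
        # Using encoder_fcmean for both outputs (as in the reported bug)
        # would force embedded_logvar == embedded_mean.
        embedded_mean = self.encoder_fcmean(embedded)
        embedded_logvar = self.encoder_fcvar(embedded)
        # Reparameterization trick: z = mu + sigma * eps
        std = torch.exp(0.5 * embedded_logvar)
        eps = torch.randn_like(std)
        z = embedded_mean + eps * std
        return z, embedded_mean, embedded_logvar
```

With two independent heads, the sampled latent `z` has a variance that can be learned separately from the mean, which is the point of the VAE parameterization.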
Additionally, I was wondering why VAE embedding is applied only for alpha and not for gamma. Is there a benefit to only defining alpha as Bayesian? Was this the case for the results reported in your paper?