Replicate paper results #43
Comments
I am also facing the same issue.
Hi, the loss is Loss = NLL + KL. Adding a beta value multiplied with the KL term will solve the convergence issue: Loss = NLL + β * KL, where β is a hyper-parameter. We will update the repo soon with a good way to set β.
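A minimal sketch of such a β-weighted loss in PyTorch (the function and argument names here, `elbo_loss`, `logits`, `kl`, are illustrative and not necessarily the repo's exact API):

```python
import torch.nn.functional as F

def elbo_loss(logits, targets, kl, beta):
    # Negative log-likelihood of the predictions.
    nll = F.nll_loss(F.log_softmax(logits, dim=1), targets)
    # kl is the KL divergence between the variational posterior and the prior,
    # accumulated over all Bayesian layers; beta scales its contribution so the
    # likelihood term is not overwhelmed early in training.
    return nll + beta * kl
```

One common way to set β, from Blundell et al., "Weight Uncertainty in Neural Networks", is to weight the KL per minibatch, e.g. β = 2^(M−i) / (2^M − 1) for minibatch i out of M, or simply β = 1/M; whether the repo will adopt this scheme is not stated here.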
Do you have a rough estimate of when you will update the repo, e.g. days / weeks / months? Thanks!
In a week.
Could you explain the parameters train_ens and val_ens? Do they mean the number of samples?
Yes, that's correct.
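For context, a minimal sketch of what such an ensemble size typically controls in a Bayesian CNN of this kind (the names `bayesian_model` and `num_ens` are illustrative, not the repo's exact API):

```python
import torch
import torch.nn.functional as F

def predict_ensemble(bayesian_model, inputs, num_ens=10):
    # Each forward pass draws a fresh set of weights from the variational
    # posterior, so averaging num_ens passes approximates the predictive
    # distribution over classes.
    probs = [F.softmax(bayesian_model(inputs), dim=1) for _ in range(num_ens)]
    return torch.stack(probs).mean(dim=0)  # shape: (batch, num_classes)
```

train_ens and val_ens would then be the number of such forward passes used during training and validation, respectively.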
Hello @kumar-shridhar, how can I reproduce the validation accuracy stated in the paper on the CIFAR-10 dataset? The network cannot break through 64% validation accuracy when I use the same settings from your configuration file. Thank you.
Hi,
thanks for this nice work, I really appreciate it!
I tried to replicate the results from your paper with the repository, but I have not succeeded.
First, I downloaded your repo and the datasets. Then I adapted the configuration for the Bayesian Networks:
Finally, I ran main_bayesian.py --net_type alexnet --dataset CIFAR10, but the network is not able to get past a validation accuracy of around 58%. Can you explain how to replicate the results from the paper?