Batch ensemble #2

Thanks for the great work!
May I ask how you re-implemented the code for BatchEnsemble? Since the original official implementation is in TensorFlow, are there any PyTorch resources to refer to?

Comments
To be 100% honest, I asked the paper's authors if they had a draft of their code in PyTorch, and I implemented BatchEnsemble based on that, so I need to thank the authors of BatchEnsemble for their help. It is not an official implementation, but I checked that the results were consistent with their paper.
Thank you for your quick response; the information is very helpful!

Sorry to disturb again; I have some questions regarding the repeating operation.

Dear Milliema, sorry to answer so late. I think there is no problem with the inference phase.
Hi, thanks for the great work you have done. Following your discussion with Milliema, you mentioned that repeating the data helps the training.
Could you please explain why repeating the data during training also improves the model performance? I actually found in this repository that by repeating the data, the model just receives multiple copies of the same data, i.e. the model is trained with [x1, x2, x3, x4] when the data is not repeated, while if the data is repeated it is trained with [x1, x1, x1, x1, x2, x2, x2, x2, x3, x3, x3, x3, x4, x4, x4, x4]. I am a bit confused about why this helps improve the model's performance. Thanks again for your contribution, and I look forward to your reply!
If you do not repeat the data during training, the weights alpha and gamma will never see all the data. So while the other weights make an entire epoch, these weights only make a fraction of an epoch. That is why it helps the training to repeat. I hope I am clear.
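To make this concrete, here is a minimal PyTorch sketch of a BatchEnsemble-style linear layer (an illustration only, not this repository's code; the names BatchEnsembleLinear, alpha and gamma are assumptions). It shows how each member's fast weights only ever multiply their own contiguous chunk of the incoming batch, which is why they see only a fraction of the data per step when the batch is not repeated.

```python
import torch
import torch.nn as nn

class BatchEnsembleLinear(nn.Module):
    """Sketch of a BatchEnsemble-style linear layer (illustrative, not the repo's code)."""

    def __init__(self, in_features, out_features, num_models):
        super().__init__()
        self.num_models = num_models
        # Shared slow weight, as in an ordinary linear layer.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        # Per-member rank-1 fast weights and biases.
        self.alpha = nn.Parameter(torch.ones(num_models, in_features))
        self.gamma = nn.Parameter(torch.ones(num_models, out_features))
        self.bias = nn.Parameter(torch.zeros(num_models, out_features))

    def forward(self, x):
        # The mini-batch is split into num_models contiguous chunks; chunk m
        # is routed through member m's fast weights alpha_m / gamma_m.
        examples_per_model = x.shape[0] // self.num_models
        alpha = self.alpha.repeat_interleave(examples_per_model, dim=0)
        gamma = self.gamma.repeat_interleave(examples_per_model, dim=0)
        bias = self.bias.repeat_interleave(examples_per_model, dim=0)
        return (x * alpha) @ self.weight.t() * gamma + bias

layer = BatchEnsembleLinear(in_features=8, out_features=4, num_models=2)
x = torch.randn(4, 8)   # a batch [x1, x2, x3, x4] that is NOT repeated
out = layer(x)          # alpha_1/gamma_1 touch only x1, x2; alpha_2/gamma_2 only x3, x4
print(out.shape)        # torch.Size([4, 4])
```

If the incoming batch were instead tiled with `x.repeat(num_models, 1)` before this layer, every member's alpha and gamma would see the whole batch at each step, which is the behaviour the explanation above argues for.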
Thanks for your immediate response. When the data is not repeated, assume we have training samples [$x_1, x_2, x_3, x_4$] and num_models=2; the weights applied to the batch are then [$\alpha_1, \alpha_1, \alpha_2, \alpha_2$], so $\alpha_1$ only sees $x_1, x_2$ and $\alpha_2$ only sees $x_3, x_4$. However, my question is that, even with the repeating pattern in the code, alpha and gamma still see only a fraction of an epoch. E.g. if your training data is [$x_1, x_2, x_3, x_4$], it becomes [$x_1, x_1, x_2, x_2, x_3, x_3, x_4, x_4$] after repeating, and the weights alpha become [$\alpha_1, \alpha_1, \alpha_1, \alpha_1, \alpha_2, \alpha_2, \alpha_2, \alpha_2$]. In this mode, $\alpha_1$ still only sees $x_1, x_2$ and $\alpha_2$ only sees $x_3, x_4$, so repeating does not let the fast weights see all the data. I hope I state my confusion clearly. Thanks again for your reply.
I agree with you that this is not the intended behavior. I will correct the code by the end of this month and apply the fix.

Thanks for the correction!
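For illustration, here is a small sketch of the two repeat patterns discussed above (an editorial example; the tiled repeat is an assumed correction, not necessarily the fix that was applied to this repository):

```python
import torch

num_models = 2
# A toy batch [x1, x2, x3, x4]; each row's value encodes its sample index.
x = torch.arange(1.0, 5.0).unsqueeze(1).expand(4, 3)

# Per-sample repetition, as described in the discussion: with the fast weights
# expanded in contiguous blocks, member 1 is still paired only with x1, x2 and
# member 2 only with x3, x4.
interleaved = x.repeat_interleave(num_models, dim=0)   # x1, x1, x2, x2, x3, x3, x4, x4

# One possible correction (an assumption): tile the whole batch so that each
# contiguous block of 4 rows, i.e. each ensemble member, sees every sample.
tiled = x.repeat(num_models, 1)                        # x1, x2, x3, x4, x1, x2, x3, x4

print(interleaved[:, 0].tolist())   # [1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0]
print(tiled[:, 0].tolist())         # [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0, 4.0]
```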