Hi!
I've been playing a little with the code (congratulations on the work, by the way 😄), and I've noticed that when training a FactorVAE model, both the batch size and the number of epochs are doubled:
```python
logger.info("FactorVae needs 2 batches per iteration. To replicate this behavior while being consistent, we double the batch size and the number of epochs.")
args.batch_size *= 2
args.epochs *= 2
```
Does anybody know the reason behind this operation? I've reviewed the original paper, but I couldn't find anything related to it.
Thanks for the help!
(The snippet above is from disentangling-vae/main.py, lines 191 to 194, at commit f045219.)
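For context on the "2 batches per iteration" comment: in FactorVAE, each training iteration uses one batch for the VAE update (whose total-correlation term is estimated by a discriminator) and a second, independent batch for the discriminator update, where each latent dimension is permuted across the batch to approximate the product of marginals. The sketch below illustrates that split under the doubled-batch scheme; all names and sizes here are hypothetical, not the repo's actual implementation:

```python
import torch

def permute_dims(latents):
    # FactorVAE's permutation trick: shuffle each latent dimension
    # independently across the batch, so the result approximates samples
    # from the product of the marginal distributions q(z_d).
    batch_size, dim = latents.shape
    permuted = torch.zeros_like(latents)
    for d in range(dim):
        idx = torch.randperm(batch_size)
        permuted[:, d] = latents[idx, d]
    return permuted

# Hypothetical illustration of the doubled-batch idea: a loader batch of
# 2 * 64 samples is split into two halves, one for the VAE step and one
# for the discriminator step, so each iteration still sees two batches.
doubled_batch = torch.randn(128, 10)          # stand-in for a loader batch
vae_half, disc_half = doubled_batch.chunk(2)  # two "batches" per iteration
assert vae_half.shape == disc_half.shape == (64, 10)
```

Since each iteration consumes two loader batches, halving the number of iterations per epoch, doubling the batch size and the number of epochs plausibly keeps the total number of optimizer steps and samples seen comparable to the single-batch setting; that would match the intent stated in the logged message.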