Running out of memory... best number of samples for custom data sets? #8
Hello!
I was wondering if you have any intuition on how many training samples are required to get good results, and how much memory is needed to train the unconditional VQVAE?
I have about 200k grayscale images at 256x256... which was obviously too much, so I scaled back to 70 images just to see if training would start, but it didn't; it still threw the out-of-memory error.
Is this something batch size can fix, or do I need to adjust a bunch of other parameters? I only changed the im_channels and save_latent parameters from their defaults.
Thank you!
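For grayscale data, changing im_channels to 1 corresponds to feeding the model single-channel tensors. Here is a minimal sketch of a matching data pipeline, assuming a plain torchvision setup; the `data/train` path and the ImageFolder layout are illustrative, not the repo's actual loader:

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Grayscale 256x256 images -> tensors of shape (1, 256, 256),
# matching im_channels = 1 in the config.
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

# ImageFolder is a stand-in; any dataset yielding (1, 256, 256) tensors works.
dataset = datasets.ImageFolder("data/train", transform=transform)

# A small batch size is the first lever against out-of-memory errors.
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)

images, _ = next(iter(loader))
print(images.shape)  # torch.Size([4, 1, 256, 256])
```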
Comments

Hello @pgtinsley, can you share the config file you are training with?
Hi @explainingai-code, here is the config file:
Yeah, try reducing the batch size and see if it runs. If not, then also try with an even smaller one. Just to add: when I was training on CelebA-HQ with 256x256 RGB images, I was using an Nvidia V100.
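If it still fails, one way to find the largest batch size that fits is to probe downward until a full forward/backward step succeeds. A minimal sketch, assuming a CUDA device and a stand-in model (not the repo's actual VQVAE):

```python
import torch
import torch.nn as nn

device = torch.device("cuda")

# Stand-in model; substitute the actual VQVAE here.
model = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
).to(device)

def step_fits(batch_size: int) -> bool:
    """Run one forward/backward pass and report whether it fits in memory."""
    try:
        x = torch.randn(batch_size, 1, 256, 256, device=device)
        loss = model(x).mean()
        loss.backward()
        model.zero_grad(set_to_none=True)
        return True
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        return False

for bs in (64, 32, 16, 8, 4, 2, 1):
    if step_fits(bs):
        print(f"batch size {bs} fits")
        break
```

Note that a batch size that fits the stand-in model only gives an upper bound; the real VQVAE's activations and optimizer state will tighten it further.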
Still no luck with either of those options... I'll try to get some GPUs with more memory... thank you!
@pgtinsley Yes, that should solve the problem. Then lastly, you can modify the downsample parameter and have the autoencoder work with 128x128 images.
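Training at 128x128 cuts the pixel count by 4x relative to 256x256, so activation memory in the autoencoder shrinks roughly proportionally. A minimal sketch of the resize step, assuming a torchvision pipeline; how the downsample parameter itself is named and adjusted in the repo's config is not shown here:

```python
from torchvision import transforms

# Resizing 256x256 inputs to 128x128 quarters the per-image pixel count,
# so per-layer activation memory drops roughly 4x as well.
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])
```

If the downsampling schedule is expressed as a list of stages, dropping one stage at 128x128 keeps the latent resolution the same as with 256x256 inputs, but that detail depends on the repo's config format.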