What parameter changes would I need to make sure it runs on our dataset? #2
Comments
Hello, 224x224 is still large for this model. Can you please try to follow the steps mentioned here and see if it works fine after that?
Hi,
With 224x224 images it would be difficult using the current code version, but you could try the following:
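As a side note, one common adjustment when a resolution like 224x224 is too large is to train at a smaller size that is still divisible by the model's total downsampling factor. The helper below is a hypothetical sketch (the function name, the default of 3 downsampling blocks, and the divisibility requirement are assumptions about a U-Net-style architecture, not taken from this repo):

```python
# Hypothetical helper: pick a smaller training resolution that stays
# divisible by the model's total downsampling factor (2 ** num_down_blocks).
# The factor of 3 down-blocks is an illustrative assumption.

def pick_resolution(orig_size: int, target_size: int, num_down_blocks: int = 3) -> int:
    """Round min(orig_size, target_size) down to the nearest multiple of 2**num_down_blocks."""
    factor = 2 ** num_down_blocks
    size = min(orig_size, target_size)
    return max(factor, (size // factor) * factor)

# e.g. shrinking 224x224 images toward 128 keeps dimensions divisible by 8
print(pick_resolution(224, 128))  # 128
print(pick_resolution(224, 100))  # 96
```

Feeding the model dimensions that are not divisible by its downsampling factor is a frequent cause of shape errors at the skip connections.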
Thank you for your response.
Hi, I trained the model on a medical dataset, and after sampling the results are not as expected. Am I missing something? Please throw some light on this.
When you say the results are not as expected, do you mean the generated images are complete garbage, or are they just not of high quality?
The model is improving during training.
A couple of things come to mind. Can you see if this helps?
No, the images are not grayscale; they have 3 channels. But I will train for more epochs.
Hi there, how did you do this? My dataset also has 3 channels, and I made all the changes mentioned by @explainingai-code, but I got a size mismatch error.
Hi @xiaoxiao079, from the error it looks like the code is trying to load a checkpoint that was trained on a different configuration than the one you are currently using to train/infer.
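A quick way to confirm a checkpoint/configuration mismatch like this is to compare parameter shapes between the checkpoint and the freshly built model. The sketch below uses plain dicts for portability; with PyTorch you would build each side as `{k: tuple(v.shape) for k, v in state_dict.items()}`. The parameter names and shapes shown are illustrative assumptions, not taken from the repo:

```python
# Minimal sketch: find parameters whose shapes differ between a saved
# checkpoint and the model currently being constructed.

def find_shape_mismatches(ckpt_shapes, model_shapes):
    """Return {name: (ckpt_shape, model_shape)} for every parameter that
    differs in shape or exists on only one side (missing side is None)."""
    problems = {}
    for name in set(ckpt_shapes) | set(model_shapes):
        a, b = ckpt_shapes.get(name), model_shapes.get(name)
        if a != b:
            problems[name] = (a, b)
    return problems

# Example: checkpoint trained with 1 input channel, model built for 3
ckpt = {"conv_in.weight": (64, 1, 3, 3), "conv_in.bias": (64,)}
model = {"conv_in.weight": (64, 3, 3, 3), "conv_in.bias": (64,)}
print(find_shape_mismatches(ckpt, model))
# {'conv_in.weight': ((64, 1, 3, 3), (64, 3, 3, 3))}
```

If the mismatch is only in the input/output convolutions, that usually points to a channel-count difference (e.g. grayscale vs. RGB) between the checkpoint's training run and the current config.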
I am running this code on a set of images but getting this error:
"CUDA out of memory. Tried to allocate 150.06 GiB (GPU 0; 15.89 GiB total capacity; 720.18 MiB already allocated; 14.31 GiB free; 736.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation." I have updated the batch size and also resized the images to 224x224, but it still gives me this CUDA error.
Can you please tell me what I should do?
Thanks
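For context on why a single allocation can reach 150 GiB, a back-of-envelope estimate of one float32 activation tensor's size is often revealing; the batch size and channel count below are purely illustrative assumptions, not the repo's actual config:

```python
# Rough memory estimate for one float32 tensor of shape (B, C, H, W).
# A request for ~150 GiB almost always means the batch size, channel
# width, or spatial resolution is far too large for the GPU.

def tensor_gib(batch, channels, height, width, bytes_per_el=4):
    """Size in GiB of a dense tensor with the given shape and element size."""
    return batch * channels * height * width * bytes_per_el / 2**30

# A single wide feature map at full 224x224 resolution adds up quickly,
# and a model holds many such activations at once:
print(round(tensor_gib(batch=64, channels=256, height=224, width=224), 4))  # 3.0625

# The fragmentation hint in the error message refers to an environment
# variable, e.g.:
#   export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# but when the requested allocation exceeds total GPU memory by 10x,
# reducing batch size and/or resolution is the actual fix.
```

In short: if 150 GiB is being requested on a 16 GiB card, the shapes flowing through the model are the problem, not the allocator settings.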