
Ability to run my own data through the model #22

Open
pooyasa opened this issue Mar 21, 2021 · 3 comments

Comments

@pooyasa

pooyasa commented Mar 21, 2021

Hi,
I wanted to check this model out and test it on a dataset of images that I have.
Is that currently possible?
Regards.

@zilunzhang
Collaborator

Hi,

Theoretically, it is possible if you understand the data structure of the mini-ImageNet dataset and prepare your data in a mini-ImageNet-like format. However, the performance of such an experiment is unknown, since we haven't tried one before. If you encounter any problems, feel free to contact us.
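For illustration, here is a minimal sketch of building a class-disjoint, mini-ImageNet-like split from a folder of per-class images. The paths, split ratios, and on-disk layout are assumptions; please check the repository's data loader for the exact format it expects.

```python
# Illustrative sketch only: split a directory of per-class image folders into
# mini-ImageNet-style train/val/test splits with *disjoint classes*.
# Paths and split ratios below are assumptions, not the repo's actual layout.
import os
import random
import shutil

SRC = "my_dataset"          # one subfolder per class, e.g. my_dataset/cat/*.jpg
DST = "my_dataset_fewshot"  # will contain train/ val/ test/
SPLITS = {"train": 0.64, "val": 0.16, "test": 0.20}

classes = sorted(os.listdir(SRC))
random.seed(0)
random.shuffle(classes)

# Few-shot benchmarks split by *class*, not by image:
# train/val/test see completely different categories.
n = len(classes)
n_train = int(SPLITS["train"] * n)
n_val = int(SPLITS["val"] * n)
split_classes = {
    "train": classes[:n_train],
    "val": classes[n_train:n_train + n_val],
    "test": classes[n_train + n_val:],
}

for split, cls_list in split_classes.items():
    for cls in cls_list:
        out_dir = os.path.join(DST, split, cls)
        os.makedirs(out_dir, exist_ok=True)
        for fname in os.listdir(os.path.join(SRC, cls)):
            shutil.copy(os.path.join(SRC, cls, fname), out_dir)
```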

Yours,

DPGN team

@pooyasa
Author

pooyasa commented Apr 30, 2021

Hi,

Thank you for your reply. I managed to feed an ImageFolder to the model and it works perfectly: I was able to reach 94% accuracy after 11 hours of training on my dataset. However, my images are 224 × 224 pixels and I had to crop them to 100 × 100 and decrease the batch size to 20 so they would fit in Google Colab's 16 GB GPU. With 224 × 224 images I had to decrease the batch size to 5, and the network suffered heavily from overfitting. I would like to increase both the batch size and the image size, because I believe that would have a positive effect on the model's output.
I was wondering, what were your environment and hardware specs?
With 100 × 100 images and a batch size of 20, GPU usage was around 14.5 GB with the ConvNet backbone. Is it possible to run the model on two GPUs (RTX 2080 Ti), each with 11 GB of memory?
I have attached the learning curves for both cases: one with 100-pixel images and a batch size of 20, which works well but where I want to increase the image size, and one with 224-pixel images and a batch size of 5, which overfits the training data.
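For reference, a rough sketch of the kind of loader described above (generic torchvision code, not the DPGN codebase's own episodic sampler; the crop size, normalization statistics, and paths are assumptions):

```python
# Rough sketch: torchvision ImageFolder with 100x100 crops and batch size 20,
# matching the setup described above. Transform values are assumptions.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(110),
    transforms.CenterCrop(100),          # 100x100 fits the 16 GB Colab GPU here
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("my_dataset_fewshot/train", transform=transform)
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=20, shuffle=True, num_workers=2, pin_memory=True)
```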

[attached: learning curves for the 100 × 100 (batch size 20) run and the 224 × 224 (batch size 5) run]

Best regards,

Pooya

@zilunzhang
Collaborator

zilunzhang commented Oct 20, 2021


Hi Pooya,

We used a single 2080 Ti for the 1-shot experiments with the ResNet-12 and ConvNet backbones, and a V100 or multiple 2080 Tis for the others.

Try pytorch-memonger; we used it in our codebase before to reduce GPU memory usage.
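For the two-GPU question, a generic PyTorch sketch using nn.DataParallel to split each batch across both 2080 Tis (this is standard PyTorch, not something the DPGN codebase is confirmed to support out of the box; the model below is a stand-in):

```python
# Generic data-parallel sketch: replicate the model on two GPUs and split each
# batch between them, roughly halving per-GPU activation memory.
import torch
import torch.nn as nn

# Stand-in backbone; in practice this would be the model built by the DPGN repo.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 5),
)

if torch.cuda.device_count() >= 2:
    model = nn.DataParallel(model, device_ids=[0, 1])
model = model.cuda()

images = torch.randn(20, 3, 100, 100).cuda()  # a batch of 20 is split 10/10
logits = model(images)
print(logits.shape)  # torch.Size([20, 5])
```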
