
Training OOM & TF2? #36

Open
glarfomatic opened this issue Feb 11, 2022 · 2 comments


glarfomatic commented Feb 11, 2022

Hi -

I'm using my own dataset at 640x480 to train, and I run out of memory almost immediately on my GPU, an RTX 2070 Super - I get a ton of memory allocation and OOM errors related to the GPU. Is there a way to mitigate this by changing batch sizes, etc?

Any update on moving this to TF2.x?
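The OOM pressure here scales linearly with batch size and input resolution, so reducing either directly shrinks memory use. A rough back-of-envelope sketch (hypothetical numbers, not measured from this repo) of the activation memory for one convolutional feature map:

```python
# Rough, hypothetical estimate of activation memory for a single NHWC float32
# feature map, illustrating why halving the batch size halves activation memory.
def feature_map_bytes(batch, height, width, channels, dtype_bytes=4):
    """Bytes needed for one batch * height * width * channels tensor."""
    return batch * height * width * channels * dtype_bytes

# One 64-channel map at the full 640x480 resolution:
full = feature_map_bytes(batch=8, height=480, width=640, channels=64)
half = feature_map_bytes(batch=4, height=480, width=640, channels=64)
print(full / 2**20, "MiB")  # 600.0 MiB
print(half / 2**20, "MiB")  # 300.0 MiB -- halving the batch halves the cost
```

A real network keeps many such maps (plus gradients) alive at once, which is why a 2070 Super's 8 GB fills up quickly at this resolution.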

@glarfomatic (Author)

Closed by mistake; reopening.

@vinthony (Owner)

Hi, you can try removing some of the feature layers (currently 1000+) from the VGG19 backbone; dropping some of them should not noticeably affect the results.
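One way this might look (a sketch using the stock Keras VGG19 rather than this repo's actual backbone, which may be wired differently): truncate the feature extractor at an earlier block so fewer intermediate feature maps are kept alive during training.

```python
# Hypothetical sketch: build a truncated VGG19 feature extractor that stops at
# block3_conv4 instead of running the full stack, cutting the number of
# feature maps held in GPU memory. Layer names follow tf.keras's VGG19.
from tensorflow.keras.applications import VGG19
from tensorflow.keras.models import Model

vgg = VGG19(include_top=False, weights=None, input_shape=(480, 640, 3))
truncated = Model(inputs=vgg.input,
                  outputs=vgg.get_layer("block3_conv4").output)
# At 640x480 input, block3_conv4 yields a 120x160x256 feature map.
```

Which layers are safe to drop depends on which features the loss actually uses, so it is worth validating quality after trimming.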
