Really high RAM usage for a large number of classes. #457

Open
mazatov opened this issue Oct 1, 2024 · 1 comment

Comments


mazatov commented Oct 1, 2024

I'm training the model on my own dataset and had no issues training even YOLOv10x when I was detecting just one class. However, ever since I expanded to many classes (320), I do not have enough RAM to train even YOLOv10n. What could be the issue?

I remember having a similar problem with YOLOv7: there it was the OTA loss that increased RAM usage with the number of classes, and turning it off reduced RAM usage significantly. Is something similar going on here?
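For context, a rough illustration of why I'd expect RAM to scale with the class count (this is not the actual YOLOv10 code, just a sketch of my assumption): assigners/losses in this family of detectors typically materialise per-anchor classification targets shaped (batch, anchors, classes), so that tensor alone grows linearly with the number of classes.

```python
# Rough illustration only (assumption, not YOLOv10's actual implementation):
# a (batch, num_anchors, num_classes) classification-target tensor grows
# linearly with the number of classes.
import torch

batch, num_anchors = 16, 8400  # 8400 anchor points for a 640 px input (strides 8/16/32)
for num_classes in (1, 320):
    cls_targets = torch.zeros(batch, num_anchors, num_classes)  # one-hot style targets
    mib = cls_targets.element_size() * cls_targets.nelement() / 1024**2
    print(f"{num_classes:>3} classes -> {mib:.1f} MiB per target tensor")
```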


mazatov (Author) commented Oct 1, 2024

Also, the RAM seems to go up before the first epoch starts. Once training starts, everything is fine and it drops.

For example, I lowered my image size to 320 pixels, workers to 0, and batch to 1, and it finally went through. Once the first epoch kicked off on the GPU, RAM dropped to a very low percentage. It seems to be something to do with initialization.
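For reference, this is roughly the kind of training call I used with those reduced settings (a sketch only; the dataset path is a placeholder and assumes the ultralytics-style API the YOLOv10 repo exposes):

```python
from ultralytics import YOLOv10  # YOLOv10 ships an ultralytics-style API

model = YOLOv10("yolov10n.pt")
model.train(
    data="my_dataset.yaml",  # placeholder; my yaml declares 320 classes
    imgsz=320,               # reduced from the default 640
    batch=1,
    workers=0,               # no dataloader worker processes
    epochs=100,
)
```

With these settings the pre-epoch RAM spike stays small enough to get past initialization, at the cost of much slower training.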
