I'm training the model on my own dataset and had no issues training even YOLOv10x when I was detecting just one class. However, ever since I expanded to 320 classes, I don't have enough RAM to train even YOLOv10n. What could be the issue?
I remember having a similar problem with YOLOv7, where it was the OTA loss that increased RAM usage with the number of classes; by turning that off I could decrease the RAM usage significantly. Is something similar going on here?
Also, the RAM usage seems to climb before the first epoch starts. Once training begins, everything is fine and it drops.
For example, I lowered my image size to 320 pixels, workers to 0, and batch to 1, and it finally went through (see the sketch below). Once the first epoch kicked off on the GPU, RAM usage dropped to a very low percentage. It seems to be something to do with initialization.
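For reference, here is a minimal sketch of those low-memory settings, assuming the ultralytics-style API that the YOLOv10 repo is built on; the dataset yaml path is a placeholder for your own 320-class config:

```python
# Minimal sketch: low-memory training settings for YOLOv10
# (assumes the ultralytics-style API from the YOLOv10 repo).
from ultralytics import YOLOv10

# Smallest variant; swap in yolov10x.pt etc. once the memory issue is resolved.
model = YOLOv10("yolov10n.pt")

model.train(
    data="my_dataset.yaml",  # placeholder: your 320-class dataset config
    imgsz=320,               # reduced image size lowers buffer allocations
    workers=0,               # no dataloader subprocesses, so no per-worker copies
    batch=1,                 # keeps the pre-epoch initialization footprint small
)
```

With these settings the peak RAM usage during the pre-epoch initialization stayed low enough to get past the spike, after which the usual batch size could in principle be restored for throughput.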