CUDA Out of Memory Errors w Batch Size of 1 on 16GB V100 #27
Comments
Hi, for the memory issue, please refer to #5 (comment)
Ah, that's great! Thank you.
For those interested, I found that the HF implementation supports gradient checkpointing. Enable it with:

```python
self.model = YolosForObjectDetection.from_pretrained(
    self.hparams.pretrained_model_name_or_path,
    config=config,
    ignore_mismatched_sizes=True,
)
self.model.gradient_checkpointing_enable()
```

I was able to increase the batch size from 1 to 8 using this on a T4.
Awesome! 🥰🥰🥰
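For readers piecing this together: below is a rough, hypothetical sketch of how the snippet above might sit inside a PyTorch Lightning module (the `self.hparams` usage suggests one), with 16-bit mixed precision as an additional memory saver. The class name, learning rate, and Trainer arguments are assumptions for illustration, not the exact code from this thread.

```python
# Hypothetical sketch (not the exact module from this thread): YOLOS
# fine-tuning with gradient checkpointing enabled, plus 16-bit mixed
# precision as a further memory reduction.
import pytorch_lightning as pl
import torch
from transformers import AutoConfig, YolosForObjectDetection


class YolosDetector(pl.LightningModule):
    def __init__(self, pretrained_model_name_or_path="hustvl/yolos-small", lr=2.5e-5):
        super().__init__()
        self.save_hyperparameters()
        config = AutoConfig.from_pretrained(self.hparams.pretrained_model_name_or_path)
        self.model = YolosForObjectDetection.from_pretrained(
            self.hparams.pretrained_model_name_or_path,
            config=config,
            ignore_mismatched_sizes=True,
        )
        # Trade compute for memory: recompute intermediate activations
        # during the backward pass instead of caching them all.
        self.model.gradient_checkpointing_enable()

    def training_step(self, batch, batch_idx):
        outputs = self.model(pixel_values=batch["pixel_values"], labels=batch["labels"])
        self.log("train_loss", outputs.loss)
        return outputs.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.hparams.lr)


# 16-bit mixed precision cuts activation memory further on top of
# checkpointing (Trainer argument names vary across Lightning versions):
# trainer = pl.Trainer(gpus=1, precision=16)
# trainer.fit(YolosDetector(), train_dataloader)
```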
Using the default FeatureExtractor settings for the HuggingFace port of YOLOS, I am consistently running into CUDA OOM errors on a 16GB V100 (even with a training batch size of 1).
I would like to train YOLOS on PubLayNet and ideally use 4-8 V100s.
Is there a way to lower the CUDA memory usage while training YOLOS besides reducing the batch size (whilst preserving accuracy and leveraging the pretrained models)?
I see that other models (e.g. DiT) use image sizes of 224x224. However, is it fair to assume that such a small image size would not be appropriate for object detection, as too much information is lost? In the DiT case, document image classification was the objective.
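One knob besides batch size that stays close to the pretrained setup is the resize target in the feature extractor: YOLOS attends over image patches, so fewer patches means less activation memory. Below is a rough sketch under the assumption that `YolosFeatureExtractor` accepts the DETR-style `size`/`max_size` overrides (newer `YolosImageProcessor` versions take a `size` dict instead); the 512/864 values and the sample file name are illustrative, and the accuracy trade-off versus the checkpoint's default resolution would need to be validated on PubLayNet.

```python
# Rough sketch: shrink the input resolution to cut activation memory.
# Argument names depend on the transformers version (size/max_size here;
# newer YolosImageProcessor releases use a size dict such as
# {"shortest_edge": 512, "longest_edge": 864}). Values are illustrative.
from PIL import Image
from transformers import YolosFeatureExtractor

feature_extractor = YolosFeatureExtractor.from_pretrained(
    "hustvl/yolos-small",
    size=512,      # target for the shortest edge; see the checkpoint's
    max_size=864,  # preprocessor_config.json for the original defaults
)

image = Image.open("sample_page.png").convert("RGB")  # hypothetical page image
encoding = feature_extractor(images=image, return_tensors="pt")
print(encoding["pixel_values"].shape)  # fewer patches -> less memory per sample
```

As noted above, dropping all the way to 224x224 as in DiT's classification setting would likely lose too much detail for detecting small layout elements, so a moderate resolution reduction combined with gradient checkpointing and mixed precision is probably the safer route.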