About GOT-10k train and test #14
The hyper-parameter … Good luck!
Thanks for your reply. I set the …
Can the model converge if you use two GPUs?
I ran into a similar problem. The first time, I trained with one RTX 3090 and just set "amp" to "True", "num_processes" to "1", and "num_workers" to "16", keeping the default settings for everything else. It trained on GOT, and the results are shown below. The second time, I used two RTX 3090s and set "amp" to "False", "num_processes" to "2", and "num_workers" to "16", again keeping the defaults for everything else. I just wanted to see the influence of "amp", but this time the model didn't converge:
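For context, "amp" here toggles PyTorch's automatic mixed precision. A minimal sketch of what an AMP training step looks like in plain PyTorch (the `model`, `optimizer`, and `loader` below are stand-ins for illustration, not this project's real objects):

```python
import torch
import torch.nn as nn

# Minimal stand-ins; the real project builds these from its YAML config.
model = nn.Linear(128, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loader = [torch.randn(32, 128, device="cuda") for _ in range(10)]

scaler = torch.cuda.amp.GradScaler()

for x in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():   # forward pass runs in mixed precision
        loss = model(x).pow(2).mean()
    scaler.scale(loss).backward()     # backward on a scaled loss to avoid
                                      # fp16 gradient underflow
    scaler.step(optimizer)            # unscales grads; skips the step if any
                                      # gradient is inf/nan
    scaler.update()                   # adjusts the scale for the next step
```

Note that AMP only changes numerical precision within a single process, so by itself it should not explain a single-GPU vs. multi-GPU convergence gap.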
I met the same problem: the model trained with multiple GPUs didn't converge, with or without synchronized BN, though it seems to converge when trained with one GPU.
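For anyone who wants to try synchronized BN, PyTorch can convert an existing model's BatchNorm layers in place before DDP wrapping. A minimal sketch (the small `nn.Sequential` is just a hypothetical stand-in for the tracker network):

```python
import torch.nn as nn

# Tiny stand-in for the real tracker network.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

# Replaces every BatchNorm*d layer with SyncBatchNorm, which computes batch
# statistics across all processes instead of per GPU. Call this inside an
# initialized process group, before wrapping the model in
# DistributedDataParallel.
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
```

With the small per-GPU batches trackers typically use, per-GPU BN statistics get noisy, which is one common reason multi-GPU runs diverge while single-GPU runs do not.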
The main reason is that training with multiple GPUs requires rewriting the training code, and the author only provides single-GPU training code. Since the author's codebase is built on the video_analyst framework, I referred to the distributed training code in video_analyst (main/dist-train.py) for training.
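For reference, the skeleton of a PyTorch distributed training entry point in the style of such a dist-train script looks roughly like this. This is a sketch under assumptions, not the project's actual code: `build_model` and `build_dataset` are hypothetical stubs, and the script assumes a `torchrun` launch (e.g. `torchrun --nproc_per_node=2 dist_train.py`):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def build_model():
    # Hypothetical stand-in for the real tracker network.
    return torch.nn.Linear(128, 1)


def build_dataset():
    # Hypothetical stand-in for the GOT-10k training pairs.
    return TensorDataset(torch.randn(256, 128))


def train():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = build_model().cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = build_dataset()
    # DistributedSampler gives each process a disjoint shard of the data,
    # so the effective batch size grows with the number of GPUs.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32,
                        sampler=sampler, num_workers=16)

    for epoch in range(20):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for (batch,) in loader:
            pass  # forward / backward / optimizer step goes here

    dist.destroy_process_group()


if __name__ == "__main__":
    train()
```

Because the effective batch size scales with the number of processes, the learning rate and warm-up schedule from the single-GPU config may also need retuning for multi-GPU runs.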
Thanks for your excellent work. I met a problem when trying to reproduce the results in the paper. When I used GOT alone for training, the IoU stayed at about 0.38 and did not improve. I wonder if there is something wrong with my configuration?