I have trained the model many times, but there is always a gap of about 3–4% on the MOT17 train set. I train in a four-GPU environment with base lr 0.01, steps=(20000, 30000), 35k total iterations, and max_size_train set to 1200.
I would like to know whether the max_size_train parameter has a large impact on training. Also, could you give me some tips on training the model?
I appreciate your reply to my question. Thanks.
My initial guess is that the gap may come from the fact that you use max_size_train == 1200. I haven't tested training on 4 GPUs, but I don't think that should be the major factor in the result discrepancy. It would be helpful if you trained with the provided configuration, configs/DLA_34_FPN_EMM_MOT17.yaml.
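For reference, the non-default settings described in the question could be expressed as a config override in the maskrcnn-benchmark-style YAML syntax that files like configs/DLA_34_FPN_EMM_MOT17.yaml typically use. This is only an illustrative sketch: the field names (SOLVER, INPUT, etc.) are assumptions based on that config style, and the values are taken from the question above, not from the repository's verified defaults.

```yaml
# Hypothetical override fragment (maskrcnn-benchmark-style keys assumed);
# values are the ones reported in the question, not the repo defaults.
SOLVER:
  BASE_LR: 0.01           # reported base learning rate
  STEPS: (20000, 30000)   # LR decay milestones reported above
  MAX_ITER: 35000         # reported total training iterations
INPUT:
  MAX_SIZE_TRAIN: 1200    # the setting suspected of causing the gap
```

Comparing such an override against the provided configuration file, field by field, is a quick way to spot which deviations (image size, schedule, learning rate scaled for 4 GPUs) might account for the discrepancy.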