
About Train model #37

Open
defense81 opened this issue Nov 24, 2021 · 1 comment

@defense81

I have trained the model many times, but a gap of about 3-4% on the MOT17 train set always remains. I train the model in a four-GPU environment with base lr 0.01, steps=(20000, 30000), 35k total iterations, and max_size_train = 1200.
I want to know whether the max_size_train parameter has a large impact on training. Also, can you give me some tips about training the model?
I would appreciate a reply to my question. Thanks.
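For context on why max_size_train matters: in maskrcnn-benchmark-style detection codebases (which this repo appears to build on), the min/max size settings control image resizing. The shorter side is scaled to min_size, unless that would push the longer side past max_size, in which case the longer side is capped at max_size and the image shrinks further. On wide 1920x1080 MOT17 frames, a smaller max_size therefore lowers the effective training resolution. A minimal sketch of that rule (the function name, rounding, and the min_size=800 value are illustrative assumptions, not this repo's actual code):

```python
def resized_hw(h: int, w: int, min_size: int, max_size: int) -> tuple:
    """Compute the training resolution under a min/max size constraint.

    The shorter side is scaled to `min_size`, unless the longer side would
    then exceed `max_size`; in that case the longer side is capped at
    `max_size` and the shorter side shrinks proportionally.
    """
    scale = min_size / min(h, w)          # scale shorter side to min_size
    if max(h, w) * scale > max_size:      # longer side would overshoot
        scale = max_size / max(h, w)      # cap the longer side instead
    return round(h * scale), round(w * scale)

# A 1080p MOT17 frame with min_size=800:
# max_size=1440 keeps the shorter side at 800 -> (800, 1422),
# max_size=1200 caps the longer side instead  -> (675, 1200).
```

So under these assumptions, max_size_train = 1200 trains on noticeably smaller images than a larger cap would, which could plausibly account for part of the gap.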

@bingshuai2019

My initial guess is that the gap may come from the fact that you use "max_size_train == 1200". I haven't tested model training on 4 GPUs, but I don't think that should be the major factor in the result discrepancy. It would be helpful if you train with the provided configuration configs/DLA_34_FPN_EMM_MOT17.yaml.
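One thing worth double-checking when moving to a different GPU count is the effective batch size: configs in this family usually pair the base learning rate with a reference total batch size, and the linear scaling rule is the common heuristic for adjusting it. Whether this repo's lr 0.01 was tuned for 4 or 8 GPUs is an assumption to verify against the provided config; a sketch of the rule itself (batch sizes below are hypothetical examples):

```python
def linear_scaled_lr(base_lr: float, base_batch: int, actual_batch: int) -> float:
    """Linear LR scaling rule: the learning rate scales in proportion
    to the total (across-GPU) batch size relative to the reference."""
    return base_lr * actual_batch / base_batch

# e.g. if a base lr of 0.01 was tuned for a total batch of 16
# (8 GPUs x 2 images), running 4 GPUs x 2 images (batch 8) would suggest
# halving the learning rate:
lr = linear_scaled_lr(0.01, base_batch=16, actual_batch=8)  # 0.005
```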
