parameters and flops #13
Thanks for your interest! It seems that you also counted the params and FLOPs of the one-to-many head. Since the one-to-many head is not needed during inference, its params and FLOPs can be ignored.
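For context, here is a minimal sketch of what "ignoring" the one-to-many head means when counting parameters. It assumes the one-to-many branches live on the detection head as cv2 / cv3 (the names used later in this thread), while one2one_cv2 / one2one_cv3 are the one-to-one branches that are kept; the exact module layout is an assumption, not the repository's verified structure.

```python
import torch.nn as nn

def params_without_one2many(model: nn.Module, head: nn.Module) -> int:
    """Total parameter count, excluding the head's one-to-many branches.

    Assumes (per this thread) that the one-to-many branches are registered
    on the detection head as `cv2` / `cv3`, while `one2one_cv2` /
    `one2one_cv3` are the one-to-one branches and are kept.
    """
    skip = set()
    for branch in ("cv2", "cv3"):
        sub = getattr(head, branch, None)
        if isinstance(sub, nn.Module):
            # Remember the parameter tensors belonging to the one-to-many branch.
            skip.update(id(p) for p in sub.parameters())
    # Sum everything else in the model.
    return sum(p.numel() for p in model.parameters() if id(p) not in skip)
```

For an ultralytics-style model the detection head is typically the last submodule (e.g. something like model.model[-1]); treat that lookup as an assumption about the layout rather than a guaranteed path.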
Thank you very much, I see.
Thanks for your guidance. To adjust my experimental setup, I need to understand how to make the following modifications at the code level:
Parameter counting: how should I identify and exclude the parameters of the one-to-many head in the code?
FLOPs calculation: which code segments should I modify so that the one-to-many head is excluded when calculating FLOPs?
Looking forward to your specific guidance. Thanks!
Thanks. Could you please try to delete self.cv2 and self.cv3 and comment out the forward of the one-to-many head, rather than self.one2one_cv2, self.one2one_cv3 and the forward of the one-to-one head?
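Read literally, the suggestion amounts to removing the one-to-many branches from the head before profiling. Below is a hedged sketch of doing that on an already-loaded model rather than by editing the library source; the attribute names follow the comment above, and locating the head is left to the caller since the exact module path is an assumption.

```python
import torch.nn as nn

def strip_one2many(head: nn.Module) -> None:
    """Replace the head's one-to-many branches with empty containers.

    After this, parameter counts and FLOPs profilers only see the
    one-to-one branches (one2one_cv2 / one2one_cv3). Any path in the
    head's forward that still uses cv2 / cv3 must also be commented out,
    as suggested above.
    """
    for branch in ("cv2", "cv3"):
        if hasattr(head, branch):
            # An empty ModuleList keeps the attribute present (so simple
            # iteration over it still works) but contributes no params or FLOPs.
            setattr(head, branch, nn.ModuleList())
```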
Thank you for your previous guidance and help. I tried commenting out the code based on your suggestion, but unfortunately the problem is still not resolved. Could you help me make the required modifications more precisely? If possible, please point to the specific line numbers that need to be commented or modified, or, if convenient, provide a relevant example with the changes already applied.
@Hy87380510 We provide a script to calculate the params and FLOPs correctly in #178. Does that help?
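The script in #178 is not reproduced here, but the general profiling pattern looks roughly like the following sketch, assuming the thop package and a 640x640 input; whether GFLOPs are reported as MACs or as 2 x MACs is a convention that should be matched to the paper.

```python
import torch
from thop import profile  # pip install thop

def report_params_flops(model: torch.nn.Module, imgsz: int = 640) -> None:
    """Rough params/FLOPs profile at a 1x3x640x640 input.

    Assumes the one-to-many branches were already removed or commented out
    as discussed above, so only the inference path is measured.
    """
    model.eval()
    dummy = torch.zeros(1, 3, imgsz, imgsz)
    macs, params = profile(model, inputs=(dummy,), verbose=False)
    # Some reports quote GFLOPs as 2 * MACs; match the paper's convention.
    print(f"params: {params / 1e6:.2f} M  GFLOPs: {2 * macs / 1e9:.2f}")
```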
I've noticed an issue: the parameter count I get for yolov10n is 2,708,210, with 8.4 GFLOPs. This differs from the numbers in the paper. Is it a calculation issue on my end?