I am wondering: does your work only improve results on the UCF and AVA datasets?
I ran your code on the JHMDB dataset without changing anything, and the best frame-mAP I got was only 67%.
Is that because the training settings are tuned only for UCF and AVA?
Also, I found that you freeze all of the 2D and 3D backbone parameters when training on JHMDB, whereas the original YOWO still allows the last few layers of the backbone to be fine-tuned. Could you explain this choice?
JHMDB is a very small-scale dataset, which means it is easily overfitted. This dilemma requires us to adjust hyperparameters carefully. I think such a small dataset is unconvincing, and I am not good at tuning hyperparameters, so I did not work on the JHMDB dataset. Both UCF101-24 and AVA are large-scale benchmarks, so I think the improvement on these two benchmarks is enough to prove the effectiveness of my work.
All the JHMDB settings you see in my project are essentially arbitrary and not a good choice.
If you solve this problem, I hope you can share the strategy you adopted. Thanks a lot.
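For anyone experimenting with the JHMDB settings, the difference between the two strategies discussed above (freezing the whole backbone vs. leaving the last few backbone stages trainable, as in the original YOWO) comes down to which parameters keep `requires_grad`. The sketch below is a minimal, dependency-free illustration of that selection logic; the dotted parameter names (`backbone_2d.*`, `backbone_3d.*`, `head.*`) are hypothetical and do not come from this repo — they just mimic PyTorch's `named_parameters()` naming style.

```python
def requires_grad_map(param_names, trainable_prefixes=()):
    """Return {name: requires_grad}.

    Backbone parameters are frozen unless their name starts with one of
    trainable_prefixes; everything outside the backbones (e.g. the
    detection head) stays trainable.
    """
    flags = {}
    for name in param_names:
        if name.startswith(("backbone_2d.", "backbone_3d.")):
            flags[name] = any(name.startswith(p) for p in trainable_prefixes)
        else:
            flags[name] = True
    return flags


# Hypothetical parameter names, for illustration only.
params = [
    "backbone_2d.layer3.conv.weight",
    "backbone_2d.layer4.conv.weight",
    "backbone_3d.layer4.conv.weight",
    "head.cls.weight",
]

# This repo's JHMDB setting: freeze everything in both backbones.
all_frozen = requires_grad_map(params)

# Original-YOWO-style setting: keep the last backbone stages trainable.
partly_trainable = requires_grad_map(
    params, trainable_prefixes=("backbone_2d.layer4", "backbone_3d.layer4")
)
```

In an actual PyTorch model, the map would be applied with something like `p.requires_grad_(flags[name]) for name, p in model.named_parameters()`; on a tiny dataset like JHMDB, how many stages you leave trainable is one of the hyperparameters worth sweeping.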