Table 4 and Table 5 using COCO pretraining or not? #39
Comments
I have asked the same question in a different issue. This line seems to suggest that they used a model pretrained on COCO sequences, but I would appreciate a clarification of the others as well!
Thanks for pointing this out. Let us see if the authors can clarify.
Hi, thanks for your attention and for pointing this out. Let me clarify. We have at most three training steps for IDOL. Step 1: pre-training the instance segmentation pipeline on COCO, following all other VIS methods. So the main difference is Step 2. We will add more detailed experimental settings in the next arXiv version.
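As a practical illustration of Step 1, here is a minimal sketch of the usual pattern of reusing a COCO-pretrained instance segmentation checkpoint when the label space changes for VIS fine-tuning. The model class, file path, and layer names are assumptions for illustration only; this is not IDOL's actual architecture or training script.

```python
import torch
import torch.nn as nn

# Toy stand-in for the detector; IDOL itself is a Deformable-DETR-style
# model, which is not reproduced here. Only the checkpoint-reuse pattern
# between Step 1 (COCO) and later VIS fine-tuning steps is shown.
class ToyDetector(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = nn.Conv2d(3, 64, 3, padding=1)
        self.cls_head = nn.Linear(64, num_classes)

    def forward(self, x):
        feats = self.backbone(x).mean(dim=(2, 3))  # global average pool
        return self.cls_head(feats)

# Step 1 produces a checkpoint trained on COCO (80 thing classes).
coco_model = ToyDetector(num_classes=80)
torch.save({"model": coco_model.state_dict()}, "coco_pretrained.pth")  # placeholder path

# Later steps start from those weights but use the VIS label space
# (40 categories for YouTube-VIS 2019), so the classification head
# cannot be copied directly; keep only shape-compatible tensors.
vis_model = ToyDetector(num_classes=40)
ckpt = torch.load("coco_pretrained.pth", map_location="cpu")["model"]
target = vis_model.state_dict()
compatible = {k: v for k, v in ckpt.items()
              if k in target and v.shape == target[k].shape}
missing, unexpected = vis_model.load_state_dict(compatible, strict=False)
print(f"reused {len(compatible)} tensors; skipped: {missing}")
```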
@wjf5203 Hi, so there are two pre-training steps: the first on single frames from static COCO images, and the second on pseudo key-reference pairs. And I have three questions about this:
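For context, "pseudo key-reference pairs" here refers to two independently augmented views of the same static image, so that instance correspondences between the two views are known by construction. The sketch below is a minimal illustration under assumed names (make_pseudo_pair) and augmentations; IDOL's exact augmentation recipe may differ.

```python
import random
from PIL import Image
import torchvision.transforms.functional as TF

def make_pseudo_pair(img: Image.Image, boxes):
    """Build a (key, reference) frame pair from one static COCO image.

    Both views come from the same image, so box i in the key view and
    box i in the reference view belong to the same instance, which gives
    free identity labels for training the association part.
    boxes: list of [x1, y1, x2, y2] in pixel coordinates.
    """
    def augment(image, bxs):
        w, h = image.size
        # Random horizontal flip (boxes mirrored accordingly).
        if random.random() < 0.5:
            image = TF.hflip(image)
            bxs = [[w - x2, y1, w - x1, y2] for x1, y1, x2, y2 in bxs]
        # Random scale jitter to mimic inter-frame motion/zoom.
        scale = random.uniform(0.8, 1.2)
        image = TF.resize(image, [int(h * scale), int(w * scale)])
        bxs = [[x1 * scale, y1 * scale, x2 * scale, y2 * scale]
               for x1, y1, x2, y2 in bxs]
        return image, bxs

    key = augment(img, boxes)
    ref = augment(img, boxes)
    return key, ref  # each is (augmented image, transformed boxes)
```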
Original issue:
Hi there,
Thank you for sharing the repo. In Table 3, the YouTube-VIS 2019 results are reported for models both with and without COCO pretraining.
What about Table 4 and Table 5 for IDOL? I could not find the detailed settings and explanations for these two results.
Thanks