Part co-segmentation comparison on CUB #11

Open · SDNAFIO opened this issue Dec 30, 2022 · 2 comments
@SDNAFIO commented Dec 30, 2022

Hello,
would it be possible to release the evaluation code for CUB that reproduces the results presented in the paper?

With the currently available implementation, I am unfortunately not able to reproduce the results; I get much worse NMI and ARI scores.
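
For reference, a minimal sketch of how the two metrics can be computed with scikit-learn (the toy label arrays below are only placeholders for predicted part assignments and ground-truth part annotations):

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

# Toy labels: predicted cluster id and ground-truth part id per annotated point.
gt_parts = np.array([0, 0, 1, 1, 2, 2])
pred_parts = np.array([1, 1, 0, 0, 2, 2])  # cluster ids need not match part ids

nmi = normalized_mutual_info_score(gt_parts, pred_parts)  # permutation-invariant
ari = adjusted_rand_score(gt_parts, pred_parts)
print(nmi, ari)  # both 1.0 here: the clustering matches up to relabeling
```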

Best regards

@ShirAmir (Owner) commented Jan 2, 2023

Hi!
Thank you for taking an interest in our paper.
We used the same evaluation code and data partitions provided by Choudhury et al. in this link, and replaced their model with our part co-segmentation inference.
For the large pairs, you can fit the k-means on 1K images instead of the whole validation split and then run k-means inference on all the images.
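
To illustrate, a minimal sketch of this subsampling (assuming the per-patch descriptors of each image are stacked into a `(n_patches, d)` NumPy array and using scikit-learn's `KMeans`; the function and variable names are only illustrative, not the exact code we used):

```python
import numpy as np
from sklearn.cluster import KMeans

def part_labels(descriptors_per_image, n_parts=4, n_fit_images=1000, seed=0):
    """Cluster dense descriptors into parts, fitting k-means on a subset."""
    rng = np.random.default_rng(seed)
    # Fit k-means on descriptors from a random subset of images only.
    n_fit = min(n_fit_images, len(descriptors_per_image))
    fit_idx = rng.choice(len(descriptors_per_image), size=n_fit, replace=False)
    fit_feats = np.concatenate([descriptors_per_image[i] for i in fit_idx])
    km = KMeans(n_clusters=n_parts, random_state=seed, n_init=10).fit(fit_feats)
    # K-means inference: assign every patch of every image (including those
    # not used for fitting) to the nearest learned centroid.
    return [km.predict(d) for d in descriptors_per_image]
```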
Let us know if you have further questions!

@SDNAFIO (Author) commented Jan 18, 2023

Hi, thanks for the response!

In the meantime, I also tried to reproduce the inter-class co-segmentation results for PASCAL VOC, but was not able to reproduce the numbers presented in the paper.
Unfortunately, none of the related methods listed in Table 2 seem to have published a complete example showing the entire evaluation, which makes it hard to set up exactly the same data.

- Which version of PASCAL VOC was used (2012, ...)?
- Which split of the dataset was used for evaluation (train+val, ...)?
- Was a different split used for training than for evaluation?
- The dataset also contains images without segmentation masks; did you include them in some way when fitting the k-means?
- Some images in the dataset are marked as difficult; were they ignored or included?

Additionally, I am not sure about the hyperparameters used.
Did you use the same ones as in the uploaded Jupyter notebook for co-segmentation, or different ones?

Also, how were the final Jaccard index and mean precision scores computed?
The dataset contains a different number of images per class; did you simply average the per-class scores, or did you weight them by the number of images?
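
For concreteness, a small sketch of the two aggregation schemes I mean (the `scores` layout and names are hypothetical):

```python
import numpy as np

# scores: dict mapping class name -> list of per-image scores (e.g. Jaccard)
def macro_average(scores):
    # Per-class mean first, then mean over classes: every class counts
    # equally, regardless of how many images it has.
    return float(np.mean([np.mean(v) for v in scores.values()]))

def image_weighted_average(scores):
    # Pool all per-image scores: classes with more images get more weight.
    pooled = np.concatenate([np.asarray(v, dtype=float) for v in scores.values()])
    return float(np.mean(pooled))
```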

Thank you once again for your effort.
If the evaluation code for this dataset could be published, that would of course be best.
