
Self-supervised training bugs #2

Open
JXZxiao5 opened this issue Aug 10, 2022 · 3 comments

@JXZxiao5

Thanks for the great project. When I reproduced the self-supervised experiment, there was a gap between my results and those published in the paper, so there may be a bug in the self-supervised training code. Could you provide the training logs or models for the self-supervised setting?

@gxd1994
Collaborator

gxd1994 commented Aug 10, 2022

Thank you for your interest in our work.
Training with 8 GPUs should give better results. There is also some run-to-run randomness, so you could try running it again.
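
For what it's worth, one way to reduce that run-to-run randomness is to fix the random seeds before training. Below is a minimal sketch in PyTorch; the seed value and the choice to disable cuDNN autotuning are assumptions, not the repository's defaults:

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 0) -> None:
    """Fix the common sources of randomness so repeated runs are comparable."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN autotuning picks kernels nondeterministically; disabling it trades
    # some speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Even with fixed seeds, multi-GPU data loading and nondeterministic CUDA ops can still leave some variance between runs.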

@gxd1994
Collaborator

gxd1994 commented Aug 10, 2022

If you want higher performance in the 40000-point setting, you can either retrain from scratch or only fine-tune the second stage with 40000 points.
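
For context, the 40000-point setting usually just means each frame is randomly subsampled to that many points before being fed to the network. An illustrative helper, not the repository's actual preprocessing (the function name and sampling strategy are assumptions):

```python
import torch


def sample_points(pc: torch.Tensor, num_points: int = 40000) -> torch.Tensor:
    """Randomly subsample a point cloud of shape (N, 3) to num_points points.

    Samples without replacement when enough points are available, otherwise
    with replacement, so the output shape is always (num_points, 3).
    """
    n = pc.shape[0]
    if n >= num_points:
        idx = torch.randperm(n)[:num_points]
    else:
        idx = torch.randint(n, (num_points,))
    return pc[idx]
```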

@JXZxiao5
Author

> If you want higher performance in the 40000-point setting, you can either retrain from scratch or only fine-tune the second stage with 40000 points.

Yes, I tried that, but it does not work because of the heavy memory cost at 40K points. I also used the released supervised pre-trained model to run experiments with 10K, 20K, 25K, and 40K points, and found that the EPE3D metric increases as the number of points increases. I don't know whether something is wrong with my experiments or how to explain this result.
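
For reference, EPE3D here is the standard 3D end-point error: the mean per-point Euclidean distance between predicted and ground-truth flow vectors. A minimal sketch of the metric, assuming tensors of shape (..., N, 3):

```python
import torch


def epe3d(pred_flow: torch.Tensor, gt_flow: torch.Tensor) -> torch.Tensor:
    """Mean per-point Euclidean distance between predicted and GT flow."""
    return torch.linalg.norm(pred_flow - gt_flow, dim=-1).mean()
```

Note that a model trained and evaluated at one point density is not guaranteed to transfer unchanged to a much denser input, which may partly account for the trend.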
