Thanks for the great project. When I reproduced the self-supervised experiment, there was a gap between my results and those reported in the paper. There may be a bug in the self-supervised training code. Could you provide logs or models for the self-supervised training?
If you want higher performance in the 40,000-point setting, you can either retrain from scratch, or fine-tune only the second stage using 40,000 points.
Yes, I tried that, but it does not work due to the heavy memory cost of 40K points. Instead, I used the released supervised pre-trained model to run experiments with 10K, 20K, 25K, and 40K points, and found that the EPE3D metric also increases as the number of points increases. I don't know whether something is wrong with my experiments or how to explain this behavior.
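For reference, EPE3D is conventionally the mean L2 distance between predicted and ground-truth 3D flow vectors, so it can be recomputed independently of the repository's evaluation script. Below is a minimal sketch of that comparison across point counts; the random data and subsampling are synthetic placeholders (not the project's dataloader or model), used only to illustrate how the metric is evaluated at 10K–40K points:

```python
import numpy as np

def epe3d(pred_flow, gt_flow):
    """Mean 3D end-point error: average L2 norm of the per-point
    difference between predicted and ground-truth flow, shape (N, 3)."""
    return float(np.linalg.norm(pred_flow - gt_flow, axis=1).mean())

# Synthetic stand-in for one scene: ground-truth flow plus noisy "prediction".
rng = np.random.default_rng(0)
n_total = 40000
gt = rng.normal(size=(n_total, 3)).astype(np.float32)
pred = gt + 0.05 * rng.normal(size=(n_total, 3)).astype(np.float32)

# Evaluate the same prediction at several subsampled point counts,
# mirroring the 10K/20K/25K/40K comparison described above.
for n in (10000, 20000, 25000, 40000):
    idx = rng.choice(n_total, size=n, replace=False)
    print(f"{n} points: EPE3D = {epe3d(pred[idx], gt[idx]):.4f}")
```

With purely random noise like this, the subsampled EPE3D stays roughly constant across point counts, so a consistent increase with more points (as observed) would point at the model or the sampling pattern rather than the metric itself.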