Questions related to the dataset annotation and multi-gpu training results #8
Comments
Have you figured out the reason for the drop in performance?
No, I haven't. I gave up using this dataset.
@Reagan1311 I found out that the seed used in the code is not appropriately called, so reproducibility is not guaranteed. How do you deal with it? Thanks.
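The seeding pitfall described above usually comes down to call order: if the seed is set after some randomness has already been consumed (or never propagated to worker processes), runs diverge. A minimal stdlib sketch of the pattern, where `make_split` is a hypothetical stand-in for any random setup step in the repo (a real fix would also seed numpy, torch, and the DataLoader workers):

```python
import random

def make_split(n: int = 10) -> list:
    # Hypothetical stand-in for a random setup step, e.g. a dataset shuffle.
    idx = list(range(n))
    random.shuffle(idx)
    return idx

# The fix pattern: seed BEFORE any RNG draw, so the split is identical
# across runs. Seeding after make_split() has run would be too late.
random.seed(42)
split_a = make_split()
random.seed(42)
split_b = make_split()
assert split_a == split_b
```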
Hey, your comment helps a lot. By the way, I want to know which library and API you use to visualize the data. I used open3d and I cannot draw the affordance map like this. Thanks in advance if you can reply :)
Hi, I use open3d as well. You can refer to their official guide for point cloud visualization: http://www.open3d.org/docs/release/tutorial/geometry/pointcloud.html
Can you explain the pipeline for visualization?
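A minimal open3d sketch of such a pipeline, assuming you have `(N, 3)` point coordinates and per-point affordance scores in `[0, 1]` (the blue-to-red colormap and the function names here are illustrative, not the authors' actual code):

```python
import numpy as np

def affordance_colors(scores: np.ndarray) -> np.ndarray:
    """Map per-point scores in [0, 1] to RGB colors (blue = low, red = high)."""
    s = np.clip(np.asarray(scores, dtype=np.float64), 0.0, 1.0)
    return np.stack([s, np.zeros_like(s), 1.0 - s], axis=1)

def show_affordance(points, scores):
    """Render a point cloud colored by its affordance map."""
    import open3d as o3d  # deferred import; only needed for the actual rendering
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points, dtype=np.float64))
    pcd.colors = o3d.utility.Vector3dVector(affordance_colors(scores))
    o3d.visualization.draw_geometries([pcd])
```

Calling `show_affordance(points, scores)` opens an interactive viewer window; the colormap step is the only part specific to affordance maps, the rest is the standard open3d point-cloud workflow from the tutorial linked above.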
Hi, thanks for the great work! After running some experiments, I found two issues.
Some of the annotations are not correct. Here I show some examples (left model: prediction, right model: GT).
(The top two shelves have no annotation of "contain")
(The pourable annotation lies on the bottom of the bottle)
(The grasp annotation lies on the bottleneck)
(Grasp annotations are quite different for visually similar bottles)
The results vary a lot when using different numbers of GPUs, and the single-GPU setting seems to give the best performance. What's the reason?