SampleConsensusModel fails on lidar points #27
Thanks for making this package available! I am trying it out using Gazebo 7 under ROS Kinetic and Ubuntu 16.04. You can see the simulation setup below: The circles are detected OK by the simulated camera. See the Hough transform output below: However, the circles are not detected in the lidar pointcloud by the sphere consensus model. Instead, I get the following stream of errors:
I can visualize the pointcloud being processed by Calibration3DMarker.cpp. Right after the following code:
the pointcloud looks like this: So far so good. But after the projection onto the camera plane, the pointcloud looks like the following:
So now we can no longer see the marker at all. I understand the projection is using the camera_info topic which is shown below. This matrix is generated by the Gazebo camera plugin:
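For reference, projecting LiDAR points into the image with the camera_info intrinsics follows the standard pinhole model. A minimal NumPy sketch (the K values here are assumed placeholders, not the matrix from this setup; points must already be expressed in the camera optical frame, z forward):

```python
import numpy as np

# Assumed example intrinsics (fx, fy, cx, cy) -- substitute the K from your camera_info.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_cam):
    """Project Nx3 points (camera optical frame, z forward) onto the image plane.
    Points with z <= 0 are behind the camera and would be dropped by a real pipeline."""
    pts = np.asarray(points_cam, dtype=float)
    uvw = pts @ K.T                      # (N, 3): rows are [z*u, z*v, z]
    return uvw[:, :2] / uvw[:, 2:3]      # divide by depth to get pixel (u, v)

# A point 2 m straight ahead projects to the principal point (cx, cy).
print(project([[0.0, 0.0, 2.0]]))  # → [[320. 240.]]
```

If the cloud vanishes after this step, the usual cause is that the points were not in the optical frame to begin with, so most depths come out negative or off-image.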
Can you think of anything that might be going wrong? |
Hi! Hmmm - my code assumes that both the LiDAR and the camera point in the same direction. The projection shows that in your setup they probably do not. I would try adding a rotation (around the vertical axis) to satisfy this assumption. |
Thanks for your reply! The frames of our LiDAR and camera are shown below. The frame for the camera is the rgb_optical frame. So for the LiDAR the z axis points up, the x-axis points forward and the y-axis points to the left. For the camera optical frame, the z-axis points forward, the x-axis points to the right and the y-axis points down. How are the axes set up in your robot? |
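The two conventions described above are the standard ROS ones (body frame: x forward, y left, z up; optical frame: z forward, x right, y down), so the fixed rotation between them is known. A hedged sketch of that basis change, independent of this package's code:

```python
import numpy as np

# Rotation taking a point expressed in the ROS body convention
# (x forward, y left, z up -- the Velodyne frame described above)
# into the camera optical convention (z forward, x right, y down).
R_BODY_TO_OPTICAL = np.array([[ 0.0, -1.0,  0.0],   # optical x = -body y (right = -left)
                              [ 0.0,  0.0, -1.0],   # optical y = -body z (down  = -up)
                              [ 1.0,  0.0,  0.0]])  # optical z =  body x (forward)

def body_to_optical(points):
    """Re-express Nx3 body-frame points in the optical frame."""
    return np.asarray(points, dtype=float) @ R_BODY_TO_OPTICAL.T

# A point 1 m in front of the LiDAR lands 1 m along the optical z axis.
print(body_to_optical([[1.0, 0.0, 0.0]]))  # → [[0. 0. 1.]]
```

Any additional extrinsic offset between the two sensors would compose with this rotation; the sketch covers only the axis-convention part.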
OK, I think I figured out how to align the camera and the LiDAR, so now the pointcloud looks correct. However, in the function detect4spheres() in Calibration3DMarker.cpp I can view the input plane and it looks like this: This looks like the LiDAR shadow cast by the marker on the wall behind it. Shouldn't the plane be the marker itself? |
Yes, you are right - it should be the marker, not the "shadow". The thing is that the marker is assumed to be the best-fitting plane in front of the LiDAR. Since in your simulation there is a large, ideal wall behind the circles, the algorithm is confused. I would add some clutter (boxes) or make the wall smaller or curved. |
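Besides changing the scene, another way to keep the wall from winning the plane fit is to crop the cloud by range first. A minimal NumPy sketch of the idea behind a pass-through filter (PCL's `PassThrough` class does this in C++; the distances here are made-up examples):

```python
import numpy as np

def passthrough(points, axis, lo, hi):
    """Keep only points whose coordinate along `axis` (0=x, 1=y, 2=z)
    lies in [lo, hi] -- the same idea as PCL's PassThrough filter."""
    pts = np.asarray(points, dtype=float)
    mask = (pts[:, axis] >= lo) & (pts[:, axis] <= hi)
    return pts[mask]

cloud = np.array([[1.0, 0.0, 0.0],    # marker at ~1 m in front of the LiDAR
                  [5.0, 0.2, 0.1]])   # wall far behind it
print(passthrough(cloud, axis=0, lo=0.5, hi=2.0))  # keeps only the near point
```

With the wall points cropped out, the best-fitting plane in the remaining cloud is the marker itself.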
Thanks @martin-velas. I tried filtering the Velodyne pointcloud so that only the target is visible as shown in the image below: Now I seem to get a step further and the output from the coarse calibration looks like this:

```
Sphere Inliers: 13
Sphere Inliers: 9
Sphere Inliers: 19
Sphere Inliers: 19
ordered centers: etc.
```
|
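The "Sphere Inliers" counts come from fitting a sphere model to each circle's points and counting points near its surface. PCL does this with RANSAC; as a hedged illustration of the underlying model fit (not the package's actual code), here is an algebraic least-squares sphere fit plus an inlier count in NumPy:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit. Expanding |p - c|^2 = r^2 gives the
    linear system 2*p.c + (r^2 - |c|^2) = |p|^2, solved for c and r."""
    pts = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def count_inliers(points, center, radius, tol=0.01):
    """Points whose distance to the fitted sphere's surface is within tol."""
    d = np.linalg.norm(np.asarray(points, dtype=float) - center, axis=1)
    return int(np.sum(np.abs(d - radius) <= tol))

# Synthetic check: 50 points on a sphere of radius 0.5 centred at (1, 2, 3).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
cloud = np.array([1.0, 2.0, 3.0]) + 0.5 * dirs
center, radius = fit_sphere(cloud)
print(count_inliers(cloud, center, radius))  # → 50
```

RANSAC wraps a fit like this in repeated random sampling so that outliers (e.g. wall points) do not drag the model off the marker's circles, which is why low inlier counts usually mean the cropping or alignment is still off.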
@SubMishMar Sorry, I honestly don't remember, and I had to move on to a different approach since we could not get this to work on the real robot. |
@pirobot how did you fix the camera and lidar orientation? And where in the code can I do it? |
@pirobot thanks for responding. What different approach do/did you use? |
We are trying the velo2cam_calibration package but so far we cannot get very good results on the real robot (the Gazebo demo works well). |
Yes, I got the same issue. velo2cam_calibration works well in Gazebo, but in real environments, it takes some effort. |
Hi everyone, thank you for this discussion. @pirobot, can you please share your Gazebo calibration setup for your robot? |