
Camera intrinsics / camera_info topic #15

Open
madsherlock opened this issue Dec 13, 2018 · 6 comments

@madsherlock

Hi,

Is there, or will there be support for obtaining camera intrinsics using the ifm3d-ros driver?
If yes, are they only for the unrectified images?

Using "ifm3d schema --dump" I see the "INTR_CAL: 256" option.

I suppose the right place for camera intrinsics is the camera_info topic typically associated with image streams from cameras.

I suppose the straightforward way to implement this is to use the INTR_CAL bit to receive the intrinsics once (just like the extrinsics), and then to publish the camera_info continuously along with the image data.

However, if the on-sensor image rectification is enabled, the intrinsic parameters are modified. For example, the distortion coefficients go towards zero.
When performing image rectification in OpenCV, I would obtain a new camera matrix with new parameters for the "rectified camera". These would depend on, e.g., my choice of how to crop the rectified image.
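To make the proposal above concrete, the published camera_info could be a standard sensor_msgs/CameraInfo message populated once from the INTR_CAL data. A sketch of the message contents follows; all numeric/symbolic values are hypothetical placeholders, and mapping the ifm calibration model onto plumb_bob is an assumption, not something the driver confirms:

```yaml
# Hypothetical sensor_msgs/CameraInfo contents -- every value below is a
# placeholder, not a real O3D3xx calibration. The distortion_model choice
# assumes the ifm model can be expressed as plumb_bob, which is exactly
# one of the open questions in this issue.
header:
  frame_id: ifm3d/camera_optical_link
height: 132
width: 176
distortion_model: plumb_bob
D: [k1, k2, p1, p2, k3]                               # from INTR_CAL, received once
K: [fx, 0.0, cx,   0.0, fy, cy,   0.0, 0.0, 1.0]      # 3x3 camera matrix, row-major
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]      # identity for a monocular camera
P: [fx, 0.0, cx, 0.0,  0.0, fy, cy, 0.0,  0.0, 0.0, 1.0, 0.0]  # 3x4 projection matrix
```

For the rectified streams, D would become all zeros and K/P would hold the "new camera matrix" discussed below, which is the part that is currently unknown.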

@madsherlock
Author

If this is WIP, I will make the following observations and suggestions:

  • The sensor is able to rectify and undistort images (and XYZ clouds) by itself. By default, the XYZ image is rectified and undistorted, and so is the point cloud.

  • On-sensor image rectification and undistortion is controlled by two parameters. In the JSON configuration file, these are named: "EnableRectificationAmplitudeImage" and "EnableRectificationDistanceImage". Setting one to true enables rectification of the corresponding image stream.
    This is a very useful feature if you want the two 2D images "amplitude" and "distance" to have a 1:1 pixel correspondence with the point cloud indices (and the XYZ images).

  • Normally in ROS, for each camera, there is a camera_info topic describing the intrinsic parameters of the camera, as well as describing a rectification matrix (rotation) and a projection matrix (modified camera matrix times homogeneous transformation matrix).

  • Rectifying and undistorting images essentially creates a new camera, which would require a new camera_info descriptor. Thus, there should be one for the amplitude stream, and one for the distance stream. The two camera_info topics would be equal if the "EnableRectification" setting is equal for both image streams.

  • There are cases where a user would like to take a set of 2D pixel positions in the infrared amplitude image and convert these to a set of 3D positions in the camera coordinate system. To do this easily with OpenCV, one would need to know the distortion coefficients and the 3x3 camera matrix. See cv::solvePnP().

  • There is a free scaling parameter involved with image rectification and undistortion, which produces a new camera matrix corresponding to the "virtual" sensor with which the rectified and undistorted images are acquired. See cv::getOptimalNewCameraMatrix().
    I assume this scaling parameter has somehow been chosen by the ifm team, and is part of the undistortion and rectification process producing the rectified amplitude and depth image streams. However, I do not know the value of this parameter, nor the "new camera matrix".
    I believe I need to know the "new camera matrix" if I want to convert 2D positions in the rectified images to 3D rays in camera coordinates by inverse projection. Or rather, I need to know the "new projection matrix", as I see there is a non-zero translation and a non-zero rotation (rectification) in the "Intrinsic Parameters" of the camera.
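For illustration, the inverse projection mentioned in the last bullet is simple once the "new camera matrix" is known: with zero distortion in the rectified image, each pixel maps to a viewing ray directly from the matrix entries. A minimal sketch, assuming a pinhole model and purely hypothetical fx, fy, cx, cy values:

```python
import math

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) of a rectified (distortion-free) image
    to a unit-length viewing ray in the camera frame.

    fx, fy, cx, cy are the entries of the *new* camera matrix produced by
    the rectification step -- the quantity this issue is asking about.
    """
    x = (u - cx) / fx
    y = (v - cy) / fy
    norm = math.sqrt(x * x + y * y + 1.0)
    return (x / norm, y / norm, 1.0 / norm)

# A pixel at the principal point looks straight down the optical axis:
print(pixel_to_ray(88.0, 66.0, 100.0, 100.0, 88.0, 66.0))  # (0.0, 0.0, 1.0)
```

Note this ignores the non-zero rotation/translation in the reported "Intrinsic Parameters"; if the rectification really includes those, the ray would additionally have to be transformed by the corresponding rotation matrix.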

Can you give any hints as to what type of calibration information will eventually be exposed in the ROS interface?

I don't know how to convert the intrinsic parameters to a camera_info topic, because the conversion from the three rotation angles to a rotation matrix is not known to me, and because I don't know the coefficients of the so-called "new camera matrix".
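For reference, one common convention composes the three axis angles as R = Rz · Ry · Rx. The actual order and sign convention used by the ifm intrinsic calibration is exactly what is unknown here, so the following is only a hedged sketch of that one candidate convention:

```python
import math

def _matmul(a, b):
    """Product of two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation_from_angles(rx, ry, rz):
    """Build a 3x3 rotation matrix from three axis angles (radians),
    assuming the composition R = Rz @ Ry @ Rx. Treat the order and the
    sign convention as placeholders until ifm documents theirs."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = [[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]]
    Ry = [[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]]
    Rz = [[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]]
    return _matmul(Rz, _matmul(Ry, Rx))

# Zero angles give the identity; rz = pi/2 rotates the x axis onto y.
print(rotation_from_angles(0.0, 0.0, 0.0))
```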

@theseankelly
Contributor

@madsherlock -- sorry it's taken so long for us to acknowledge your feedback. You're correct in noticing we don't expose intrinsics from the camera currently. Thanks for taking the time to provide detailed thoughts. I've added this to our backlog for future scheduling and will take your feedback into account when designing the feature.

@servos

servos commented Nov 9, 2022

I was just wondering if adding camera_info is a planned feature. Not having the camera_info topic means this device is not "plug and play" compatible with many standard ROS systems, which makes it much harder to work with. Are there any plans to add this feature in the near future?

@lola-masson
Member

Hi @servos,
We would definitely like to implement this, but there are a couple of considerations that we need to resolve first, including the fact that the calibration model we use is not available by default as a standard distortion model.
May I ask what you would use the camera_info for? We might be able to find a workaround in the meantime.

@servos

servos commented Nov 10, 2022

There are many ROS tools and packages that assume camera_info exists, since an image + camera_info pair is the standard camera driver API for ROS. Examples include the http://wiki.ros.org/image_proc tools, the "Camera" visualization in RViz, http://wiki.ros.org/image_geometry, and the many packages that use CameraSubscriber to get an image together with its camera info.

Many sensors get around the distortion model not being available by publishing already-rectified image topics, so that the distortion model becomes a no-op. The RealSense is a good example of this.

@lola-masson
Member

Thanks, James. We will work on fitting to this standard in our next releases.
