Commit d4d87f2

v2.0 release. Added support for monocular cameras and multi-pose calibration.
beltransen committed Jan 13, 2021
1 parent 79aab46 commit d4d87f2
Showing 41 changed files with 2,876 additions and 2,782 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -23,4 +23,5 @@ srv_gen/

launch/calibrated_tf*
scripts/
src/mono_pattern.cpp
.idea/
.vscode/
45 changes: 17 additions & 28 deletions CMakeLists.txt
@@ -1,7 +1,7 @@
cmake_minimum_required(VERSION 2.8.3)
project(velo2cam_calibration)

SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -w ")
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14 -w ")

## Find catkin macros and libraries
find_package(catkin REQUIRED COMPONENTS
@@ -18,19 +18,16 @@ find_package(catkin REQUIRED COMPONENTS
tf_conversions
dynamic_reconfigure
cmake_modules
# geometry_msgs
# tf2
# tf2_ros
# tf2_sensor_msgs
)

find_package(TinyXML REQUIRED)
find_package(OpenCV 3 REQUIRED)
find_package(OpenCV REQUIRED)
find_package(PCL 1.7 REQUIRED)

generate_dynamic_reconfigure_options(
cfg/Laser.cfg
cfg/Camera.cfg
cfg/Lidar.cfg
cfg/Stereo.cfg
cfg/Monocular.cfg
cfg/Plane.cfg
)

@@ -78,18 +75,16 @@ add_definitions(
)

## Declare cpp executables
add_executable(laser_pattern src/laser_pattern.cpp)
add_executable(lidar_pattern src/lidar_pattern.cpp)
add_executable(stereo_pattern src/stereo_pattern.cpp)
add_executable(mono_qr_pattern src/mono_qr_pattern.cpp)
add_executable(velo2cam_calibration src/velo2cam_calibration.cpp)
add_executable(pcl_coloring src/pcl_coloring.cpp)
add_executable(pcl_projection src/pcl_projection.cpp)
add_executable(levinson src/levinson.cpp)
add_executable(v2c_plane_segmentation src/plane.cpp)
add_executable(v2c_disp_masker src/disp_masker.cpp)

## Add cmake target dependencies of the executable/library
## as an example, message headers may need to be generated before nodes
add_dependencies(laser_pattern
add_dependencies(lidar_pattern
${catkin_EXPORTED_TARGETS}
velo2cam_calibration_generate_messages_cpp
${PROJECT_NAME}_gencfg
@@ -99,13 +94,17 @@ add_dependencies(stereo_pattern
velo2cam_calibration_generate_messages_cpp
${PROJECT_NAME}_gencfg
)
add_dependencies(mono_qr_pattern
${catkin_EXPORTED_TARGETS}
velo2cam_calibration_generate_messages_cpp
${PROJECT_NAME}_gencfg
)
add_dependencies(
velo2cam_calibration
${catkin_EXPORTED_TARGETS}
velo2cam_calibration_generate_messages_cpp
${PROJECT_NAME}_gencfg
)
add_dependencies(levinson ${catkin_EXPORTED_TARGETS})
add_dependencies(v2c_plane_segmentation ${PROJECT_NAME}_gencfg)

## Specify libraries to link a library or executable target against
@@ -114,31 +113,21 @@ target_link_libraries(stereo_pattern
${OpenCV_LIBRARIES}
${PCL_LIBRARIES}
)
target_link_libraries(laser_pattern
${catkin_LIBRARIES}
${OpenCV_LIBRARIES}
${PCL_LIBRARIES}
)
target_link_libraries(velo2cam_calibration
${catkin_LIBRARIES}
${OpenCV_LIBRARIES}
${PCL_LIBRARIES}
${TinyXML_LIBRARIES}
)
target_link_libraries(pcl_coloring
target_link_libraries(lidar_pattern
${catkin_LIBRARIES}
${OpenCV_LIBRARIES}
${PCL_LIBRARIES}
)
target_link_libraries(pcl_projection
target_link_libraries(mono_qr_pattern
${catkin_LIBRARIES}
${OpenCV_LIBRARIES}
${PCL_LIBRARIES}
)
target_link_libraries(levinson
target_link_libraries(velo2cam_calibration
${catkin_LIBRARIES}
${OpenCV_LIBRARIES}
${PCL_LIBRARIES}
${TinyXML_LIBRARIES}
)
target_link_libraries(v2c_plane_segmentation
${catkin_LIBRARIES}
96 changes: 96 additions & 0 deletions HOWTO.md
@@ -0,0 +1,96 @@
# User guide: How to calibrate your sensors
*Velo2cam_calibration* is a tool to obtain the extrinsic parameters expressing the relative pose of a pair of sensors. Each of these sensors can be a monocular camera, a stereo camera, or a lidar scanner.

# Setup #
First, you must identify the topics delivered by the drivers of the sensors to be calibrated. In particular, velo2cam_calibration expects the following inputs:
* Monocular: a rectified image ([sensor_msgs/Image](http://docs.ros.org/api/sensor_msgs/html/msg/Image.html)) and the corresponding meta-information topic ([sensor_msgs/CameraInfo](http://docs.ros.org/api/sensor_msgs/html/msg/CameraInfo.html)). Both topics must refer to the same *frame_id*.
* Stereo: a pair of rectified images (2x [sensor_msgs/Image](http://docs.ros.org/api/sensor_msgs/html/msg/Image.html)) and the corresponding meta-information topic ([sensor_msgs/CameraInfo](http://docs.ros.org/api/sensor_msgs/html/msg/CameraInfo.html)). All topics must refer to the same *frame_id*.
* LiDAR: a 3D point cloud ([sensor_msgs/PointCloud2](http://docs.ros.org/api/sensor_msgs/html/msg/PointCloud2.html)).
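
For instance, a quick way to verify that the expected topics are being published, and to check their *frame_id*, is the following (the topic names are mere examples, taken from the Examples section below):

```
# List the available topics and inspect the image topic
rostopic list
rostopic info /blackflys/image_rect_color
# Print the frame_id carried by the camera meta-information
rostopic echo -n 1 /blackflys/camera_info/header/frame_id
```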

To start the calibration procedure, the reference point extraction nodes must be run. To that end, you can use the launch files included in this repository, which accept arguments to pass the topic and frame names to the nodes. A separate launch file must be run for each sensor, according to its modality:

* Monocular: *mono_pattern.launch*. Arguments to be set:
  * *camera_name*: camera namespace in which the image topic and the meta-information topics exist.
  * *image_topic*: name of the image topic excluding the camera namespace (e.g., *image_rect_color*).
  * *frame_name*: *frame_id* to which the camera topics are referred.
* Stereo: *stereo_pattern.launch*. Arguments to be set:
  * *camera_name*: camera namespace. Images of the stereo pair and meta-information topics should exist under *<camera_name>/left* and *<camera_name>/right*.
  * *image_topic*: name of the image topics under their respective namespaces (e.g., *image_rect_color*).
  * *frame_name*: *frame_id* of the left image.
* LiDAR: *lidar_pattern.launch*. Arguments to be set:
  * *cloud_topic*: LiDAR point cloud topic.

In all cases, if the two sensors to be calibrated belong to the same modality, an additional argument *sensor_id* must be set so that one of the devices is given *sensor_id=0* and the other, *sensor_id=1*.

Apart from this, a third launch file must be run to perform the registration step:
* Registration: *registration.launch*. Arguments to be set:
  * *sensor1_type* and *sensor2_type*: modalities of the sensors to be calibrated, to be chosen from *mono*, *stereo*, and *lidar*.
  * *sensor1_id* and *sensor2_id*: to be set to 0 and 1, respectively, only if the two devices to be calibrated use the same modality.

## Examples ##
For instance, if you want to calibrate a monocular camera delivering the topics */blackflys/image_rect_color* (rectified image) and */blackflys/camera_info* (meta-information), both referred to the *blackflys* *frame_id*, and a LiDAR scanner providing a */hdl64_points* cloud, you should run the following commands in three separate terminals:

```roslaunch velo2cam_calibration mono_pattern.launch camera_name:=/blackflys image_topic:=image_rect_color frame_name:=blackflys```

```roslaunch velo2cam_calibration lidar_pattern.launch cloud_topic:=/hdl64_points```

```roslaunch velo2cam_calibration registration.launch sensor1_type:=mono sensor2_type:=lidar```


If a stereo camera is involved, calibration can be performed using the monocular procedure on the left image of the pair. However, a specific stereo-based reference point extraction is also available. If the stereo camera is publishing images on the */bumblebee_xb3/left/image_rect_color* and */bumblebee_xb3/right/image_rect_color* topics (with *frame_id* *bumblebee_xb3*), the launch files to use are:

```roslaunch velo2cam_calibration stereo_pattern.launch camera_name:=/bumblebee_xb3 image_topic:=image_rect_color frame_name:=bumblebee_xb3```

```roslaunch velo2cam_calibration lidar_pattern.launch cloud_topic:=/hdl64_points```

```roslaunch velo2cam_calibration registration.launch sensor1_type:=stereo sensor2_type:=lidar```

Finally, if the sensors that will be calibrated are two LiDAR scanners, publishing */hdl64_points* and */hdl32_points* clouds, the *sensor_id* arguments must be set appropriately:

```roslaunch velo2cam_calibration lidar_pattern.launch cloud_topic:=/hdl64_points sensor_id:=0```

```roslaunch velo2cam_calibration lidar_pattern.launch cloud_topic:=/hdl32_points sensor_id:=1```

```roslaunch velo2cam_calibration registration.launch sensor1_type:=lidar sensor2_type:=lidar sensor1_id:=0 sensor2_id:=1```

# Warm-up stage #
The instructions to perform the calibration will appear on the terminal where the registration launch file has been run. In the beginning, there is a warm-up stage where the user must check that the calibration pattern is correctly represented in the data from both sensors. The procedure takes place sequentially (one sensor after the other), and two tasks must be performed: first, making sure that the pattern is within the field of view of the sensors; and second, adjusting the passthrough filters intended to prevent outliers in range data. Any sensor data visualization tool can be used at this point, although *rviz* is recommended. Passthrough filters can be modified through *dynamic_reconfigure*. In a typical use case, the following GUIs can be launched in parallel:

```rviz```

```rosrun rqt_reconfigure rqt_reconfigure```

*rviz* will be used to visualize the images and/or point clouds, and *rqt_reconfigure* to modify the filter parameters on the fly. Note that the registration node will only let the user advance once the pattern can be identified by the current sensor. If the calibration pattern needs to be moved, the procedure should be reset so that the adequacy of the new location is rechecked.

For monocular devices, the only relevant data is the image, where the pattern should be fully visible. For range data, the checklist depends on the modality.

## Stereo ##
The four pattern circles must be clearly visible in *<camera_ns>/z_filtered_cloud*. Otherwise, please modify the limits of the passthrough filter of the *full_pass_through_z* node in *rqt_reconfigure* (parameters *filter_limit_min* and *filter_limit_max*) so that they cover the area where the pattern is located. Then, apply the same values to the *edges_pass_through_z* node.
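
If the GUI is not available, the same limits can be set from the command line through the *dynparam* tool of *dynamic_reconfigure*. A minimal sketch, assuming the filter nodes run under the camera namespace used in the examples above (adjust the node paths to your setup):

```
# Enlarge the z passthrough applied to the full point cloud (example value: 4 m)
rosrun dynamic_reconfigure dynparam set /blackflys/full_pass_through_z filter_limit_max 4.0
# Apply the same limit to the edges cloud
rosrun dynamic_reconfigure dynparam set /blackflys/edges_pass_through_z filter_limit_max 4.0
```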

As an example, please check the following image, where the default passthrough values are not suitable:

![gazebo screenshot](screenshots/stereo_filters_1.png)

After *filter_limit_max* has been increased to 4 meters, the pattern becomes visible:

![gazebo screenshot](screenshots/stereo_filters_2.png)

## LiDAR ##
The *lidar_pattern_<id>/zyx_filtered* cloud should represent only the calibration pattern and the points corresponding to those lidar rays that go through the holes. The cloud can be filtered through the parameters *filter_limit_min* and *filter_limit_max* of the *pass_through_x_velo_<id>*, *pass_through_y_velo_<id>*, and *pass_through_z_velo_<id>* nodes.

On the other hand, *lidar_pattern_<id>/range_filtered_cloud* must contain only the calibration pattern. To that end, a radial passthrough filter is available in the *lidar_pattern_<id>* node, tunable through the *passthrough_radius_min* and *passthrough_radius_max* parameters.
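
As with the stereo filters, these parameters can be tuned in *rqt_reconfigure* or, alternatively, with *dynparam*. A sketch assuming *sensor_id* 0 and example limits that must be adapted to the actual pattern location:

```
# Radial passthrough: keep only the ring of points containing the pattern (example radii, in meters)
rosrun dynamic_reconfigure dynparam set /lidar_pattern_0 passthrough_radius_min 1.0
rosrun dynamic_reconfigure dynparam set /lidar_pattern_0 passthrough_radius_max 6.0
# Cartesian passthrough applied to the zyx_filtered cloud (example value)
rosrun dynamic_reconfigure dynparam set /pass_through_x_velo_0 filter_limit_max 10.0
```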

Please check the following example, where the *lidar_pattern_<id>/zyx_filtered* cloud is depicted in white and *lidar_pattern_<id>/range_filtered_cloud*, in red. The initial parameters yield the following clouds:

![gazebo screenshot](screenshots/lidar_filters_1.png)

Ideally, the parameters should be set so that the clouds are close to the following:

![gazebo screenshot](screenshots/lidar_filters_2.png)

Note, however, that the filters do not need an exact tuning as long as the areas of interest (i.e., the pattern and the surface behind it) are well defined in the LiDAR clouds.

# Calibration #
When the pattern is visible in all the modalities, and the passthrough filters have been adjusted, the user can start the calibration. It is important to make sure that the scene is static during the whole process, which will require 30 frames where the calibration pattern is correctly identified. Some errors might be shown in the terminals where the reference point extraction launch files were run; it is safe to ignore them as long as the iteration count provided by the registration node increases. At the end of the calibration, the program will ask the user if a new pattern pose is needed. It is recommended to repeat the procedure at least three times, each with a different position and orientation of the calibration pattern. Each new iteration starts with the same warm-up stage described above, where the passthrough filters should be properly adjusted to the new pattern location.

Calibration can be finished after any iteration. The final result will be immediately displayed, and a *calibrated_tf.launch* file will be created in the *launch* folder within the *velo2cam_calibration* package path, containing a *static_transform_publisher* that broadcasts the resulting *tf* transformation.
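
Once saved, the result can be used right away. A short sketch, where the frame names are placeholders that depend on your sensor setup:

```
# Broadcast the calibrated transformation between the sensor frames
roslaunch velo2cam_calibration calibrated_tf.launch
# In another terminal, inspect the resulting transform
rosrun tf tf_echo blackflys velodyne
```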
96 changes: 19 additions & 77 deletions README.md
@@ -1,92 +1,34 @@
# velo2cam_calibration [![Build Status](http://build.ros.org/job/Kdev__velo2cam_calibration__ubuntu_xenial_amd64/badge/icon)](http://build.ros.org/job/Kdev__velo2cam_calibration__ubuntu_xenial_amd64/)

The *velo2cam_calibration* software implements an Automatic Calibration algorithm for Lidar-Stereo camera setups \[1\]. This software is provided as a ROS package.
The *velo2cam_calibration* software implements a state-of-the-art automatic calibration algorithm for pairs of sensors composed of LiDAR and camera devices, in any possible combination \[1\]. This software is provided as a ROS package.

Package developed at [Intelligent Systems Laboratory](http://www.uc3m.es/islab), Universidad Carlos III de Madrid.

![gazebo screenshot](screenshots/velo2cam_calibration_setup.png)
![real results](screenshots/real_results.png)

# Nodes #
## stereo_pattern ##
### Subscribed Topics ###
*cloud2* ([sensor_msgs/PointCloud2](http://docs.ros.org/api/sensor_msgs/html/msg/PointCloud2.html))
## Setup ##
To install this ROS package:
1. Clone the repository into your *catkin_ws/src/* folder.
2. Install runtime dependencies: ```sudo apt-get install ros-<distro>-opencv-apps```
3. Build your workspace [as usual](http://wiki.ros.org/ROS/Tutorials/BuildingPackages).
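
A minimal sketch of these steps, assuming a workspace at *~/catkin_ws* and ROS Melodic (replace the distro name as needed):

```
cd ~/catkin_ws/src
git clone https://github.com/beltransen/velo2cam_calibration.git
# Runtime dependency (adapt to your ROS distro)
sudo apt-get install ros-melodic-opencv-apps
cd ~/catkin_ws && catkin_make
```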

&nbsp;&nbsp;&nbsp;&nbsp;Stereo camera point cloud containing points belonging to edges in the left image
## Usage ##
See [HOWTO.md](HOWTO.md) for detailed instructions on how to use this software.

*cam_plane_coeffs* ([pcl_msgs::ModelCoefficients](http://docs.ros.org/api/pcl_msgs/html/msg/ModelCoefficients.html))
To test the algorithm in a virtual environment, you can launch any of the calibration scenarios included in our [Gazebo validation suite](https://github.com/beltransen/velo2cam_gazebo).

&nbsp;&nbsp;&nbsp;&nbsp;Coefficients of the calibration target plane model
### Published Topics ###
*/stereo_pattern/centers_cloud* ([velo2cam_calibration::ClusterCentroids](http://docs.ros.org/kinetic/api/velo2cam_calibration/html/msg/ClusterCentroids.html))
## Calibration target ##
The following picture shows a possible embodiment of the proposed calibration target used by this algorithm and its corresponding dimensional drawing.

&nbsp;&nbsp;&nbsp;&nbsp;Target circles centers obtained from stereo camera data
![calibration target](screenshots/calibration_target_real_scheme_journal.png)

<!-- ### Parameters ### -->
## laser_pattern ##
### Subscribed Topics ###
*cloud1* ([sensor_msgs/PointCloud2](http://docs.ros.org/api/sensor_msgs/html/msg/PointCloud2.html))
**Note:** Other sizes may be used for convenience. If so, please configure the node parameters accordingly.

&nbsp;&nbsp;&nbsp;&nbsp;LIDAR pointcloud
### Published Topics ###
*/laser_pattern/centers_cloud* ([velo2cam_calibration::ClusterCentroids](http://docs.ros.org/kinetic/api/velo2cam_calibration/html/msg/ClusterCentroids.html))
## Citation ##
If you use this work in your research, please consider citing the following paper:

&nbsp;&nbsp;&nbsp;&nbsp;Target circles centers obtained from LIDAR data
<!-- ### Parameters ### -->
## velo2cam_calibration ##
### Subscribed Topics ###
*~cloud1* ([velo2cam_calibration::ClusterCentroids](http://docs.ros.org/kinetic/api/velo2cam_calibration/html/msg/ClusterCentroids.html))
[1] Beltrán, J., Guindel, C., and García, F. (2021). [Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups](https://arxiv.org/abs/2101.04431). arXiv:2101.04431 [cs.CV]. Submitted to IEEE Transactions on Intelligent Transportation Systems.

&nbsp;&nbsp;&nbsp;&nbsp;Target circles centers obtained from LIDAR data
A previous version of this tool is available [here](https://github.com/beltransen/velo2cam_calibration/tree/v1.0) and was described in the following paper:

*~cloud2* ([velo2cam_calibration::ClusterCentroids](http://docs.ros.org/kinetic/api/velo2cam_calibration/html/msg/ClusterCentroids.html))

&nbsp;&nbsp;&nbsp;&nbsp;Target circles centers obtained from stereo camera data

*~cloud3* ([sensor_msgs/PointCloud2](http://docs.ros.org/api/sensor_msgs/html/msg/PointCloud2.html))

&nbsp;&nbsp;&nbsp;&nbsp;Original LIDAR pointcloud
### Published Topics ###
TF containing the transformation between both sensors ([see TF info in ROS Wiki](http://wiki.ros.org/tf))

The node broadcasts the TF transformation between *velodyne* and *stereo* frames.
The fixed transformation between *stereo_camera* and *stereo* is published by a static broadcaster in *stereo_pattern.launch*.
The picture below shows the three coordinate frames:

![gazebo screenshot](screenshots/coordinates_frames.png)

**Note**: Additionally, a .launch file called *calibrated_tf.launch* containing the proper TF broadcasters is created in the */launch* folder of the *velo2cam_calibration* package so
you can use the calibration just executing:

```roslaunch velo2cam_calibration calibrated_tf.launch```

### Parameters ###
As described in the paper, different parameters can be selected. The ones that you will usually need are set in the launch files. For convenience, two sets of launch files are provided, with different parameters intended for the Gazebo environment, on the one hand, or the real wood pattern, on the other hand. The latter have *real_* as a prefix.

# Usage #
Some sample .launch files are provided in this package. The simplest way to launch the algorithm is by running the three main ROS nodes as follows:

```roslaunch velo2cam_calibration laser_pattern.launch```

```roslaunch velo2cam_calibration stereo_pattern.launch```

```roslaunch velo2cam_calibration velo2cam_calibration.launch```

We also provide a launch file containing the three launch files above:

```roslaunch velo2cam_calibration full_calibration.launch```

In order to test the algorithm, you can download an example .bag file [here](https://www.dropbox.com/s/fdvew31azdp9cbg/1_trans_0.bag?dl=1).

**Note**: The .bag file above was generated using a simulator environment in Gazebo which is provided [here](https://github.com/beltransen/velo2cam_gazebo). We use the simulator in order to test the algorithm with a proper ground truth.

# Calibration target details #
The following scheme shows the real size of the calibration target used by this algorithm. Measurements are given in centimeters (cm).

![gazebo screenshot](screenshots/calibration_target_scheme.png)

**Note:** Other size may be used for convenience. If so, please configure nodes parameters accordingly.

# Citation #
\[1\] Guindel, C., Beltrán, J., Martín, D. and García, F. (2017). Automatic Extrinsic Calibration for Lidar-Stereo Vehicle Sensor Setups. *IEEE International Conference on Intelligent Transportation Systems (ITSC), 674–679*.

Pre-print available [here](https://arxiv.org/abs/1705.04085).
[2] Guindel, C., Beltrán, J., Martín, D., and García, F. (2017). [Automatic Extrinsic Calibration for Lidar-Stereo Vehicle Sensor Setups](https://arxiv.org/abs/1705.04085). *IEEE International Conference on Intelligent Transportation Systems (ITSC), 674–679*.