feat: unrectify image projection based fusion #7030

Status: Closed · Changes from 5 commits
```diff
@@ -18,6 +18,8 @@
 #include "image_projection_based_fusion/fusion_node.hpp"
 #include "tier4_autoware_utils/ros/debug_publisher.hpp"
 
+#include <image_projection_based_fusion/utils/utils.hpp>
+
 #include "autoware_auto_perception_msgs/msg/object_classification.hpp"
 
 #include <map>
@@ -45,7 +47,8 @@ class RoiDetectedObjectFusionNode
 
   std::map<std::size_t, DetectedObjectWithFeature> generateDetectedObjectRoIs(
     const DetectedObjects & input_object_msg, const double image_width, const double image_height,
-    const Eigen::Affine3d & object2camera_affine, const Eigen::Matrix4d & camera_projection);
+    const Eigen::Affine3d & object2camera_affine,
+    const image_geometry::PinholeCameraModel & pinhole_camera_model);
 
   void fuseObjectsOnImage(
     const DetectedObjects & input_object_msg,
```
```diff
@@ -17,6 +17,8 @@
 
 #include "image_projection_based_fusion/fusion_node.hpp"
 
+#include <image_projection_based_fusion/utils/utils.hpp>
+
 #include <string>
 #include <vector>
 namespace image_projection_based_fusion
```
```diff
@@ -17,6 +17,8 @@
 
 #include "image_projection_based_fusion/fusion_node.hpp"
 
+#include <image_projection_based_fusion/utils/utils.hpp>
+
 #include <string>
 #include <vector>
 
```
```diff
@@ -42,6 +42,7 @@
 #include "autoware_auto_perception_msgs/msg/shape.hpp"
 #include "tier4_perception_msgs/msg/detected_object_with_feature.hpp"
 
+#include <image_geometry/pinhole_camera_model.h>
 #include <pcl/point_cloud.h>
 #include <pcl/point_types.h>
 #include <pcl_conversions/pcl_conversions.h>
@@ -60,6 +61,10 @@ struct PointData
   float distance;
   size_t orig_index;
 };
+
+Eigen::Vector2d calcRawImageProjectedPoint(
+  const image_geometry::PinholeCameraModel & pinhole_camera_model, const cv::Point3d & point3d);
+
 std::optional<geometry_msgs::msg::TransformStamped> getTransformStamped(
   const tf2_ros::Buffer & tf_buffer, const std::string & target_frame_id,
   const std::string & source_frame_id, const rclcpp::Time & time);
```
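A minimal usage sketch of the new helper declared above, assuming a `sensor_msgs::msg::CameraInfo` is at hand; the `example` function and its input values are hypothetical, not part of this PR:

```cpp
#include <Eigen/Core>
#include <image_geometry/pinhole_camera_model.h>
#include <opencv2/core/types.hpp>
#include <sensor_msgs/msg/camera_info.hpp>

#include "image_projection_based_fusion/utils/utils.hpp"

// Project a 3D point, given in the camera optical frame (z forward, meters),
// into raw-image pixel coordinates.
void example(const sensor_msgs::msg::CameraInfo & camera_info)
{
  image_geometry::PinholeCameraModel model;
  model.fromCameraInfo(camera_info);  // pulls K, D, R, P from the message

  const cv::Point3d point3d(1.0, 0.0, 10.0);  // 1 m right, 10 m ahead
  const Eigen::Vector2d uv =
    image_projection_based_fusion::calcRawImageProjectedPoint(model, point3d);
  // uv is a pixel in the *unrectified* (raw) image, so it can be compared
  // directly against ROIs detected on the raw camera stream.
}
```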
perception/image_projection_based_fusion/package.xml (1 addition, 0 deletions)

```diff
@@ -20,6 +20,7 @@
   <depend>autoware_point_types</depend>
   <depend>cv_bridge</depend>
   <depend>euclidean_cluster</depend>
+  <depend>image_geometry</depend>
   <depend>image_transport</depend>
   <depend>lidar_centerpoint</depend>
   <depend>message_filters</depend>
```
```diff
@@ -307,6 +307,9 @@ void PointPaintingFusionNode::fuseOnSingleImage(
   const auto class_offset = painted_pointcloud_msg.fields.at(4).offset;
   const auto p_step = painted_pointcloud_msg.point_step;
   // projection matrix
+  image_geometry::PinholeCameraModel pinhole_camera_model;
+  pinhole_camera_model.fromCameraInfo(camera_info);
+
   Eigen::Matrix3f camera_projection;  // use only x,y,z
   camera_projection << camera_info.p.at(0), camera_info.p.at(1), camera_info.p.at(2),
     camera_info.p.at(4), camera_info.p.at(5), camera_info.p.at(6);
@@ -342,15 +345,15 @@ dc | dc dc dc dc ||zc|
       continue;
     }
     // project
-    Eigen::Vector3f normalized_projected_point = camera_projection * Eigen::Vector3f(p_x, p_y, p_z);
+    Eigen::Vector2d projected_point =
+      calcRawImageProjectedPoint(pinhole_camera_model, cv::Point3d(p_x, p_y, p_z));
 
     // iterate 2d bbox
     for (const auto & feature_object : objects) {
       sensor_msgs::msg::RegionOfInterest roi = feature_object.feature.roi;
       // paint current point if it is inside bbox
       int label2d = feature_object.object.classification.front().label;
-      if (
-        !isUnknown(label2d) &&
-        isInsideBbox(normalized_projected_point.x(), normalized_projected_point.y(), roi, p_z)) {
+      if (!isUnknown(label2d) && isInsideBbox(projected_point.x(), projected_point.y(), roi, p_z)) {
         data = &painted_pointcloud_msg.data[0];
         auto p_class = reinterpret_cast<float *>(&output[stride + class_offset]);
         for (const auto & cls : isClassTable_) {
@@ -361,7 +364,7 @@ dc | dc dc dc dc ||zc|
 #if 0
         // Parallelizing loop don't support push_back
         if (debugger_) {
-          debug_image_points.push_back(normalized_projected_point);
+          debug_image_points.push_back(projected_point);
         }
 #endif
       }
```
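For comparison with the arithmetic being removed in the hunks below, here is a minimal sketch (hypothetical `projectWithPMatrix`, not code from this PR) of what the old paths computed. With all-zero distortion coefficients `D` it should coincide with `calcRawImageProjectedPoint`; with non-zero `D`, the new helper additionally maps the rectified pixel back into the raw image via `unrectifyPoint`:

```cpp
#include <array>

#include <Eigen/Dense>

// Old approach: project with the 3x4 matrix P (row-major CameraInfo::p) and
// normalize by depth. This yields pixels in the *rectified* image and ignores
// the distortion coefficients D entirely.
Eigen::Vector2d projectWithPMatrix(
  const std::array<double, 12> & p, double x, double y, double z)
{
  Eigen::Matrix<double, 3, 4> P;
  P << p[0], p[1], p[2], p[3],
    p[4], p[5], p[6], p[7],
    p[8], p[9], p[10], p[11];
  const Eigen::Vector3d h = P * Eigen::Vector4d(x, y, z, 1.0);
  return Eigen::Vector2d(h.x() / h.z(), h.y() / h.z());
}
```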
```diff
@@ -83,10 +83,8 @@ void RoiClusterFusionNode::fuseOnSingleImage(
   const DetectedObjectsWithFeature & input_roi_msg,
   const sensor_msgs::msg::CameraInfo & camera_info, DetectedObjectsWithFeature & output_cluster_msg)
 {
-  Eigen::Matrix4d projection;
-  projection << camera_info.p.at(0), camera_info.p.at(1), camera_info.p.at(2), camera_info.p.at(3),
-    camera_info.p.at(4), camera_info.p.at(5), camera_info.p.at(6), camera_info.p.at(7),
-    camera_info.p.at(8), camera_info.p.at(9), camera_info.p.at(10), camera_info.p.at(11);
+  image_geometry::PinholeCameraModel pinhole_camera_model;
+  pinhole_camera_model.fromCameraInfo(camera_info);
 
   // get transform from cluster frame id to camera optical frame id
   geometry_msgs::msg::TransformStamped transform_stamped;
@@ -133,23 +131,19 @@ void RoiClusterFusionNode::fuseOnSingleImage(
         continue;
       }
 
-      Eigen::Vector4d projected_point =
-        projection * Eigen::Vector4d(*iter_x, *iter_y, *iter_z, 1.0);
-      Eigen::Vector2d normalized_projected_point = Eigen::Vector2d(
-        projected_point.x() / projected_point.z(), projected_point.y() / projected_point.z());
+      Eigen::Vector2d projected_point =
+        calcRawImageProjectedPoint(pinhole_camera_model, cv::Point3d(*iter_x, *iter_y, *iter_z));
       if (
-        0 <= static_cast<int>(normalized_projected_point.x()) &&
-        static_cast<int>(normalized_projected_point.x()) <=
-          static_cast<int>(camera_info.width) - 1 &&
-        0 <= static_cast<int>(normalized_projected_point.y()) &&
-        static_cast<int>(normalized_projected_point.y()) <=
-          static_cast<int>(camera_info.height) - 1) {
-        min_x = std::min(static_cast<int>(normalized_projected_point.x()), min_x);
-        min_y = std::min(static_cast<int>(normalized_projected_point.y()), min_y);
-        max_x = std::max(static_cast<int>(normalized_projected_point.x()), max_x);
-        max_y = std::max(static_cast<int>(normalized_projected_point.y()), max_y);
-        projected_points.push_back(normalized_projected_point);
-        if (debugger_) debugger_->obstacle_points_.push_back(normalized_projected_point);
+        0 <= static_cast<int>(projected_point.x()) &&
+        static_cast<int>(projected_point.x()) <= static_cast<int>(camera_info.width) - 1 &&
+        0 <= static_cast<int>(projected_point.y()) &&
+        static_cast<int>(projected_point.y()) <= static_cast<int>(camera_info.height) - 1) {
+        min_x = std::min(static_cast<int>(projected_point.x()), min_x);
+        min_y = std::min(static_cast<int>(projected_point.y()), min_y);
+        max_x = std::max(static_cast<int>(projected_point.x()), max_x);
+        max_y = std::max(static_cast<int>(projected_point.y()), max_y);
+        projected_points.push_back(projected_point);
+        if (debugger_) debugger_->obstacle_points_.push_back(projected_point);
       }
     }
     if (projected_points.empty()) {
```
```diff
@@ -86,15 +86,12 @@ void RoiDetectedObjectFusionNode::fuseOnSingleImage(
     object2camera_affine = transformToEigen(transform_stamped_optional.value().transform);
   }
 
-  Eigen::Matrix4d camera_projection;
-  camera_projection << camera_info.p.at(0), camera_info.p.at(1), camera_info.p.at(2),
-    camera_info.p.at(3), camera_info.p.at(4), camera_info.p.at(5), camera_info.p.at(6),
-    camera_info.p.at(7), camera_info.p.at(8), camera_info.p.at(9), camera_info.p.at(10),
-    camera_info.p.at(11);
+  image_geometry::PinholeCameraModel pinhole_camera_model;
+  pinhole_camera_model.fromCameraInfo(camera_info);
 
   const auto object_roi_map = generateDetectedObjectRoIs(
     input_object_msg, static_cast<double>(camera_info.width),
-    static_cast<double>(camera_info.height), object2camera_affine, camera_projection);
+    static_cast<double>(camera_info.height), object2camera_affine, pinhole_camera_model);
   fuseObjectsOnImage(input_object_msg, input_roi_msg.feature_objects, object_roi_map);
 
   if (debugger_) {
@@ -109,7 +106,8 @@ void RoiDetectedObjectFusionNode::fuseOnSingleImage(
 std::map<std::size_t, DetectedObjectWithFeature>
 RoiDetectedObjectFusionNode::generateDetectedObjectRoIs(
   const DetectedObjects & input_object_msg, const double image_width, const double image_height,
-  const Eigen::Affine3d & object2camera_affine, const Eigen::Matrix4d & camera_projection)
+  const Eigen::Affine3d & object2camera_affine,
+  const image_geometry::PinholeCameraModel & pinhole_camera_model)
 {
   std::map<std::size_t, DetectedObjectWithFeature> object_roi_map;
   int64_t timestamp_nsec =
@@ -148,13 +146,8 @@ RoiDetectedObjectFusionNode::generateDetectedObjectRoIs(
         continue;
       }
 
-      Eigen::Vector2d proj_point;
-      {
-        Eigen::Vector4d proj_point_hom =
-          camera_projection * Eigen::Vector4d(point.x(), point.y(), point.z(), 1.0);
-        proj_point = Eigen::Vector2d(
-          proj_point_hom.x() / (proj_point_hom.z()), proj_point_hom.y() / (proj_point_hom.z()));
-      }
+      Eigen::Vector2d proj_point = calcRawImageProjectedPoint(
+        pinhole_camera_model, cv::Point3d(point.x(), point.y(), point.z()));
 
       min_x = std::min(proj_point.x(), min_x);
       min_y = std::min(proj_point.y(), min_y);
```
```diff
@@ -101,10 +101,9 @@ void RoiPointCloudFusionNode::fuseOnSingleImage(
   }
 
   // transform pointcloud to camera optical frame id
-  Eigen::Matrix4d projection;
-  projection << camera_info.p.at(0), camera_info.p.at(1), camera_info.p.at(2), camera_info.p.at(3),
-    camera_info.p.at(4), camera_info.p.at(5), camera_info.p.at(6), camera_info.p.at(7),
-    camera_info.p.at(8), camera_info.p.at(9), camera_info.p.at(10), camera_info.p.at(11);
+  image_geometry::PinholeCameraModel pinhole_camera_model;
+  pinhole_camera_model.fromCameraInfo(camera_info);
 
   geometry_msgs::msg::TransformStamped transform_stamped;
   {
     const auto transform_stamped_optional = getTransformStamped(
@@ -143,10 +142,8 @@ void RoiPointCloudFusionNode::fuseOnSingleImage(
     if (transformed_z <= 0.0) {
       continue;
     }
-    Eigen::Vector4d projected_point =
-      projection * Eigen::Vector4d(transformed_x, transformed_y, transformed_z, 1.0);
-    Eigen::Vector2d normalized_projected_point = Eigen::Vector2d(
-      projected_point.x() / projected_point.z(), projected_point.y() / projected_point.z());
+    Eigen::Vector2d projected_point = calcRawImageProjectedPoint(
+      pinhole_camera_model, cv::Point3d(transformed_x, transformed_y, transformed_z));
     for (std::size_t i = 0; i < output_objs.size(); ++i) {
       auto & feature_obj = output_objs.at(i);
       const auto & check_roi = feature_obj.feature.roi;
@@ -156,10 +153,9 @@ void RoiPointCloudFusionNode::fuseOnSingleImage(
         continue;
       }
       if (
-        check_roi.x_offset <= normalized_projected_point.x() &&
-        check_roi.y_offset <= normalized_projected_point.y() &&
-        check_roi.x_offset + check_roi.width >= normalized_projected_point.x() &&
-        check_roi.y_offset + check_roi.height >= normalized_projected_point.y()) {
+        check_roi.x_offset <= projected_point.x() && check_roi.y_offset <= projected_point.y() &&
+        check_roi.x_offset + check_roi.width >= projected_point.x() &&
+        check_roi.y_offset + check_roi.height >= projected_point.y()) {
         std::memcpy(
           &cluster.data[clusters_data_size.at(i)], &input_pointcloud_msg.data[offset], point_step);
         clusters_data_size.at(i) += point_step;
```
```diff
@@ -65,11 +65,9 @@ void SegmentPointCloudFusionNode::fuseOnSingleImage(
   if (mask.cols == 0 || mask.rows == 0) {
     return;
   }
-  Eigen::Matrix4d projection;
-  projection << camera_info.p.at(0), camera_info.p.at(1), camera_info.p.at(2), camera_info.p.at(3),
-    camera_info.p.at(4), camera_info.p.at(5), camera_info.p.at(6), camera_info.p.at(7),
-    camera_info.p.at(8), camera_info.p.at(9), camera_info.p.at(10), camera_info.p.at(11), 0.0, 0.0,
-    0.0, 1.0;
+  image_geometry::PinholeCameraModel pinhole_camera_model;
+  pinhole_camera_model.fromCameraInfo(camera_info);
 
   geometry_msgs::msg::TransformStamped transform_stamped;
   // transform pointcloud from frame id to camera optical frame id
   {
@@ -99,22 +97,19 @@ void SegmentPointCloudFusionNode::fuseOnSingleImage(
       continue;
     }
 
-    Eigen::Vector4d projected_point = projection * Eigen::Vector4d(*iter_x, *iter_y, *iter_z, 1.0);
-    Eigen::Vector2d normalized_projected_point = Eigen::Vector2d(
-      projected_point.x() / projected_point.z(), projected_point.y() / projected_point.z());
+    Eigen::Vector2d projected_point =
+      calcRawImageProjectedPoint(pinhole_camera_model, cv::Point3d(*iter_x, *iter_y, *iter_z));
 
-    bool is_inside_image =
-      normalized_projected_point.x() > 0 && normalized_projected_point.x() < camera_info.width &&
-      normalized_projected_point.y() > 0 && normalized_projected_point.y() < camera_info.height;
+    bool is_inside_image = projected_point.x() > 0 && projected_point.x() < camera_info.width &&
+                           projected_point.y() > 0 && projected_point.y() < camera_info.height;
     if (!is_inside_image) {
       output_cloud.push_back(pcl::PointXYZ(*iter_orig_x, *iter_orig_y, *iter_orig_z));
       continue;
     }
 
     // skip filtering pointcloud where semantic id out of the defined list
     uint8_t semantic_id = mask.at<uint8_t>(
-      static_cast<uint16_t>(normalized_projected_point.y()),
-      static_cast<uint16_t>(normalized_projected_point.x()));
+      static_cast<uint16_t>(projected_point.y()), static_cast<uint16_t>(projected_point.x()));
     if (static_cast<size_t>(semantic_id) >= filter_semantic_label_target_.size()) {
       output_cloud.push_back(pcl::PointXYZ(*iter_orig_x, *iter_orig_y, *iter_orig_z));
       continue;
```
perception/image_projection_based_fusion/src/utils/utils.cpp (25 additions, 0 deletions)

CodeScene Delta Analysis / CodeScene Cloud Delta Analysis (main), check notice on line 1: Getting worse: Overall Code Complexity. The mean cyclomatic complexity increases from 4.38 to 4.44 (threshold = 4). This file has many conditional statements (e.g. if, for, while) across its implementation, leading to lower code health; avoid adding more conditionals.

```diff
@@ -1,4 +1,4 @@
 // Copyright 2022 TIER IV, Inc.
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
 // you may not use this file except in compliance with the License.
@@ -13,8 +13,33 @@
 // limitations under the License.
 
 #include "image_projection_based_fusion/utils/utils.hpp"
 
 namespace image_projection_based_fusion
 {
+Eigen::Vector2d calcRawImageProjectedPoint(
+  const image_geometry::PinholeCameraModel & pinhole_camera_model, const cv::Point3d & point3d)
+{
+  cv::Point2d rectified_image_point = pinhole_camera_model.project3dToPixel(point3d);
+
+  cv::Point2d raw_image_point = rectified_image_point;
+
+  try {
+    bool need_unrectify = false;
+    for (const auto & distortion : pinhole_camera_model.cameraInfo().d) {
+      if (distortion != 0.0) {
+        need_unrectify = true;
+      }
+    }
+
+    if (need_unrectify) {
+      raw_image_point = pinhole_camera_model.unrectifyPoint(rectified_image_point);
+    }
+  } catch (cv::Exception & e) {
+    RCLCPP_WARN_STREAM(rclcpp::get_logger("image_projection_based_fusion"), e.what());
+  }
+
+  return Eigen::Vector2d(raw_image_point.x, raw_image_point.y);
+}
+
 std::optional<geometry_msgs::msg::TransformStamped> getTransformStamped(
   const tf2_ros::Buffer & tf_buffer, const std::string & target_frame_id,
```

Review discussion on the new `try` block (kminoda marked this conversation as resolved):

kminoda (Contributor): What is this `try` statement for?

Author: The logic in the image_geometry library that determines whether unrectification can be applied seems to change occasionally. If it cannot be applied, an exception is thrown, so I want to be able to catch it.

kminoda: Hmm, understood. It is a difficult decision, but IMHO we should avoid catching this error and just logging a warning without killing the node, since it will be difficult for developers to become aware of a breaking change or a bug in the image_geometry library, increasing the risk of a potential degradation of Autoware. What do you think?

Author: @kminoda So, what exactly do you want to do?

kminoda: Please see here: #7030 (comment)

Author: @kminoda This time, when `d` is 0 in the image_geometry class, the camera is judged to be uncalibrated, so there is no need to unrectify, and an exception is thrown. This judgment of being uncalibrated seems to change frequently in the code. Even if an exception is thrown, there is no need to terminate the process, since it only means that unrectification is not possible. Wouldn't it be better to skip this with a try-catch block?

Author: I will close this PR as there seems to have been a misunderstanding on my part.
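One possible reading of kminoda's suggestion above, sketched for illustration (hypothetical `calcRawImageProjectedPointStrict`, not what this PR implements): keep the explicit `D` check but drop the catch-all, so unexpected failures surface instead of being reduced to a warning:

```cpp
#include <algorithm>

#include <Eigen/Core>
#include <image_geometry/pinhole_camera_model.h>
#include <opencv2/core/types.hpp>

// Skip unrectification when all distortion coefficients are zero (the
// "uncalibrated" case discussed above), and let any cv::Exception thrown by
// image_geometry propagate so a breaking change in the library is noticed.
Eigen::Vector2d calcRawImageProjectedPointStrict(
  const image_geometry::PinholeCameraModel & model, const cv::Point3d & point3d)
{
  const cv::Point2d rectified = model.project3dToPixel(point3d);
  const auto & d = model.cameraInfo().d;
  const bool has_distortion =
    std::any_of(d.begin(), d.end(), [](double v) { return v != 0.0; });
  const cv::Point2d raw = has_distortion ? model.unrectifyPoint(rectified) : rectified;
  return Eigen::Vector2d(raw.x, raw.y);
}
```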