
feat(image_projection_based_fusion): enable to show iou value in roi_cluster_fusion debug image #5541

Merged
@@ -50,6 +50,7 @@ class Debugger
std::vector<RegionOfInterest> image_rois_;
std::vector<RegionOfInterest> obstacle_rois_;
std::vector<Eigen::Vector2d> obstacle_points_;
std::vector<double> max_iou_for_image_rois_;

private:
void imageCallback(
56 changes: 48 additions & 8 deletions perception/image_projection_based_fusion/src/debugger.cpp
@@ -77,36 +77,76 @@
image_rois_.clear();
obstacle_rois_.clear();
obstacle_points_.clear();
max_iou_for_image_rois_.clear();
}

void Debugger::publishImage(const std::size_t image_id, const rclcpp::Time & stamp)
{
const boost::circular_buffer<sensor_msgs::msg::Image::ConstSharedPtr> & image_buffer =
image_buffers_.at(image_id);
const image_transport::Publisher & image_pub = image_pubs_.at(image_id);
const bool draw_iou_score =
max_iou_for_image_rois_.size() > 0 && max_iou_for_image_rois_.size() == image_rois_.size();

for (std::size_t i = 0; i < image_buffer.size(); ++i) {
if (image_buffer.at(i)->header.stamp != stamp) {
continue;
}

auto cv_ptr = cv_bridge::toCvCopy(image_buffer.at(i), image_buffer.at(i)->encoding);

// draw obstacle points
for (const auto & point : obstacle_points_) {
cv::circle(
cv_ptr->image, cv::Point(static_cast<int>(point.x()), static_cast<int>(point.y())), 2,
cv::Scalar(255, 255, 255), 3, 4);
}

// draw rois
const int img_height = static_cast<int>(image_buffer.at(i)->height);
const int img_width = static_cast<int>(image_buffer.at(i)->width);
    for (const auto & roi : obstacle_rois_) {
-     drawRoiOnImage(
-       cv_ptr->image, roi, image_buffer.at(i)->width, image_buffer.at(i)->height,
-       cv::Scalar(255, 0, 0));
+     drawRoiOnImage(cv_ptr->image, roi, img_width, img_height, cv::Scalar(255, 0, 0));  // blue
    }
-   // TODO(yukke42): show iou_score on image
    for (const auto & roi : image_rois_) {
-     drawRoiOnImage(
-       cv_ptr->image, roi, image_buffer.at(i)->width, image_buffer.at(i)->height,
-       cv::Scalar(0, 0, 255));
+     drawRoiOnImage(cv_ptr->image, roi, img_width, img_height, cv::Scalar(0, 0, 255));  // red
    }

// show iou_score on image
if (draw_iou_score) {
for (auto roi_index = 0; roi_index < static_cast<int>(image_rois_.size()); ++roi_index) {
std::stringstream stream;
stream << std::fixed << std::setprecision(2) << max_iou_for_image_rois_.at(roi_index);
std::string iou_score = stream.str();

// set text position
int baseline = 3;
cv::Size textSize = cv::getTextSize(iou_score, cv::FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseline);
const int text_height = static_cast<int>(textSize.height);
const int text_width = static_cast<int>(textSize.width);
int x = image_rois_.at(roi_index).x_offset;
int y = image_rois_.at(roi_index).y_offset; // offset for text
if (y < 0 + text_height)
y = text_height; // if roi is on the top of image, put text on lower left of roi
if (y > img_height - text_height)
y = img_height -
text_height; // if roi is on the bottom of image, put text on upper left of roi
if (x > img_width - text_width)
x = img_width - text_width; // if roi is on the right of image, put text on left of roi
if (x < 0) x = 0; // if roi is on the left of image, put text on right of roi

// choose color by iou score
// cv::Scalar color = max_iou_for_image_rois_.at(i) > 0.5 ? cv::Scalar(0, 255, 0) :
// cv::Scalar(0, 0, 255);
cv::Scalar color = cv::Scalar(0, 0, 255); // red

cv::rectangle(
cv_ptr->image, cv::Point(x, y - textSize.height - baseline),
cv::Point(x + textSize.width, y), cv::Scalar(255, 255, 255),
cv::FILLED); // white background
cv::putText(
cv_ptr->image, iou_score, cv::Point(x, y), cv::FONT_HERSHEY_SIMPLEX, 0.5, color, 1,
cv::LINE_AA); // text
}

CodeScene Delta Analysis / CodeScene Cloud Delta Analysis (main) — check warnings on line 149 in perception/image_projection_based_fusion/src/debugger.cpp:

❌ New issue: Complex Method — Debugger::publishImage has a cyclomatic complexity of 14, threshold = 9. This function has many conditional statements (e.g. if, for, while), leading to lower code health. Avoid adding more conditionals and code to it without refactoring.

❌ New issue: Bumpy Road Ahead — Debugger::publishImage has 2 blocks with nested conditional logic. Any nesting of 2 or deeper is considered; the threshold is one single, nested block per function.

❌ New issue: Deep, Nested Complexity — Debugger::publishImage has a nested complexity depth of 4, threshold = 4. The deeper the nesting, the lower the code health.
}

image_pub.publish(cv_ptr->toImageMsg());
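The label-placement code above clamps the text anchor so the IoU string stays inside the image whatever the ROI position. A minimal standalone sketch of that clamping logic (`TextAnchor` and `clampTextAnchor` are illustrative names, not part of the node; in the actual code the text dimensions come from `cv::getTextSize`):

```cpp
#include <algorithm>

// Illustrative helper mirroring the clamping above: keep a text label's
// bottom-left anchor (x, y) inside an img_width x img_height image,
// given the rendered text's width and height in pixels.
struct TextAnchor
{
  int x;
  int y;
};

TextAnchor clampTextAnchor(
  int x, int y, int text_width, int text_height, int img_width, int img_height)
{
  // ROI at the top edge: push the label down so it stays visible.
  if (y < text_height) y = text_height;
  // ROI at the bottom edge: pull the label up.
  if (y > img_height - text_height) y = img_height - text_height;
  // Keep the label's right edge inside the image.
  if (x > img_width - text_width) x = img_width - text_width;
  // Keep the label's left edge inside the image.
  if (x < 0) x = 0;
  return {x, y};
}
```

Checking the four edges independently in this order means a label near a corner is clamped on both axes, which matches the behavior of the four `if` statements in the diff.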
@@ -203,6 +203,7 @@ void FusionNode<Msg, Obj>::subCallback(const typename Msg::ConstSharedPtr input_
(*output_msg).header.stamp.sec * (int64_t)1e9 + (*output_msg).header.stamp.nanosec;

// if matching rois exist, fuseOnSingle
// please ask maintainers before parallelizing this loop because the debugger is not thread safe
for (std::size_t roi_i = 0; roi_i < rois_number_; ++roi_i) {
if (camera_info_map_.find(roi_i) == camera_info_map_.end()) {
RCLCPP_WARN_THROTTLE(
@@ -83,161 +83,159 @@
const DetectedObjectsWithFeature & input_cluster_msg, const std::size_t image_id,
const DetectedObjectsWithFeature & input_roi_msg,
const sensor_msgs::msg::CameraInfo & camera_info, DetectedObjectsWithFeature & output_cluster_msg)
{
std::vector<sensor_msgs::msg::RegionOfInterest> debug_image_rois;
std::vector<sensor_msgs::msg::RegionOfInterest> debug_pointcloud_rois;
std::vector<Eigen::Vector2d> debug_image_points;

Eigen::Matrix4d projection;
projection << camera_info.p.at(0), camera_info.p.at(1), camera_info.p.at(2), camera_info.p.at(3),
camera_info.p.at(4), camera_info.p.at(5), camera_info.p.at(6), camera_info.p.at(7),
camera_info.p.at(8), camera_info.p.at(9), camera_info.p.at(10), camera_info.p.at(11);

// get transform from cluster frame id to camera optical frame id
geometry_msgs::msg::TransformStamped transform_stamped;
{
const auto transform_stamped_optional = getTransformStamped(
tf_buffer_, /*target*/ camera_info.header.frame_id,
/*source*/ input_cluster_msg.header.frame_id, camera_info.header.stamp);
if (!transform_stamped_optional) {
RCLCPP_WARN_STREAM(
get_logger(), "Failed to get transform from " << input_cluster_msg.header.frame_id << " to "
<< camera_info.header.frame_id);
return;
}
transform_stamped = transform_stamped_optional.value();
}

std::map<std::size_t, RegionOfInterest> m_cluster_roi;
for (std::size_t i = 0; i < input_cluster_msg.feature_objects.size(); ++i) {
if (input_cluster_msg.feature_objects.at(i).feature.cluster.data.empty()) {
continue;
}

if (is_far_enough(input_cluster_msg.feature_objects.at(i), fusion_distance_)) {
continue;
}

// filter point out of scope
if (debugger_ && out_of_scope(input_cluster_msg.feature_objects.at(i))) {
continue;
}

sensor_msgs::msg::PointCloud2 transformed_cluster;
tf2::doTransform(
input_cluster_msg.feature_objects.at(i).feature.cluster, transformed_cluster,
transform_stamped);

int min_x(camera_info.width), min_y(camera_info.height), max_x(0), max_y(0);
std::vector<Eigen::Vector2d> projected_points;
projected_points.reserve(transformed_cluster.data.size());
for (sensor_msgs::PointCloud2ConstIterator<float> iter_x(transformed_cluster, "x"),
iter_y(transformed_cluster, "y"), iter_z(transformed_cluster, "z");
iter_x != iter_x.end(); ++iter_x, ++iter_y, ++iter_z) {
if (*iter_z <= 0.0) {
continue;
}

Eigen::Vector4d projected_point =
projection * Eigen::Vector4d(*iter_x, *iter_y, *iter_z, 1.0);
Eigen::Vector2d normalized_projected_point = Eigen::Vector2d(
projected_point.x() / projected_point.z(), projected_point.y() / projected_point.z());
if (
0 <= static_cast<int>(normalized_projected_point.x()) &&
static_cast<int>(normalized_projected_point.x()) <=
static_cast<int>(camera_info.width) - 1 &&
0 <= static_cast<int>(normalized_projected_point.y()) &&
static_cast<int>(normalized_projected_point.y()) <=
static_cast<int>(camera_info.height) - 1) {
min_x = std::min(static_cast<int>(normalized_projected_point.x()), min_x);
min_y = std::min(static_cast<int>(normalized_projected_point.y()), min_y);
max_x = std::max(static_cast<int>(normalized_projected_point.x()), max_x);
max_y = std::max(static_cast<int>(normalized_projected_point.y()), max_y);
projected_points.push_back(normalized_projected_point);
debug_image_points.push_back(normalized_projected_point);
if (debugger_) debugger_->obstacle_points_.push_back(normalized_projected_point);
}
}
if (projected_points.empty()) {
continue;
}

sensor_msgs::msg::RegionOfInterest roi;
// roi.do_rectify = m_camera_info_.at(id).do_rectify;
roi.x_offset = min_x;
roi.y_offset = min_y;
roi.width = max_x - min_x;
roi.height = max_y - min_y;
m_cluster_roi.insert(std::make_pair(i, roi));
debug_pointcloud_rois.push_back(roi);
if (debugger_) debugger_->obstacle_rois_.push_back(roi);
}

for (const auto & feature_obj : input_roi_msg.feature_objects) {
int index = -1;
bool associated = false;
double max_iou = 0.0;
bool is_roi_label_known =
feature_obj.object.classification.front().label != ObjectClassification::UNKNOWN;
for (const auto & cluster_map : m_cluster_roi) {
double iou(0.0);
bool is_use_non_trust_object_iou_mode = is_far_enough(
input_cluster_msg.feature_objects.at(cluster_map.first), trust_object_distance_);
if (is_use_non_trust_object_iou_mode || is_roi_label_known) {
iou =
cal_iou_by_mode(cluster_map.second, feature_obj.feature.roi, non_trust_object_iou_mode_);
} else {
iou = cal_iou_by_mode(cluster_map.second, feature_obj.feature.roi, trust_object_iou_mode_);
}

const bool passed_inside_cluster_gate =
only_allow_inside_cluster_
? is_inside(feature_obj.feature.roi, cluster_map.second, roi_scale_factor_)
: true;
if (max_iou < iou && passed_inside_cluster_gate) {
index = cluster_map.first;
max_iou = iou;
associated = true;
}
}

if (!associated) {
continue;
}

if (!output_cluster_msg.feature_objects.empty()) {
bool is_roi_existence_prob_higher =
output_cluster_msg.feature_objects.at(index).object.existence_probability <=
feature_obj.object.existence_probability;
if (iou_threshold_ < max_iou && is_roi_existence_prob_higher && is_roi_label_known) {
output_cluster_msg.feature_objects.at(index).object.classification =
feature_obj.object.classification;

// Update existence_probability for fused objects
if (
output_cluster_msg.feature_objects.at(index).object.existence_probability <
min_roi_existence_prob_) {
output_cluster_msg.feature_objects.at(index).object.existence_probability =
min_roi_existence_prob_;
}
}

// fuse with unknown roi

if (unknown_iou_threshold_ < max_iou && is_roi_existence_prob_higher && !is_roi_label_known) {
output_cluster_msg.feature_objects.at(index).object.classification =
feature_obj.object.classification;
// Update existence_probability for fused objects
if (
output_cluster_msg.feature_objects.at(index).object.existence_probability <
min_roi_existence_prob_) {
output_cluster_msg.feature_objects.at(index).object.existence_probability =
min_roi_existence_prob_;
}
}
}
debug_image_rois.push_back(feature_obj.feature.roi);
if (debugger_) debugger_->image_rois_.push_back(feature_obj.feature.roi);
if (debugger_) debugger_->max_iou_for_image_rois_.push_back(max_iou);
}

// note: debug objects are safely cleared in fusion_node.cpp
if (debugger_) {

CodeScene Delta Analysis / CodeScene Cloud Delta Analysis (main) — check notices on line 238 in perception/image_projection_based_fusion/src/roi_cluster_fusion/node.cpp:

ℹ Getting worse: Complex Method — RoiClusterFusionNode::fuseOnSingleImage increases in cyclomatic complexity from 38 to 42, threshold = 9. This function has many conditional statements (e.g. if, for, while), leading to lower code health. Avoid adding more conditionals and code to it without refactoring.

ℹ Getting worse: Bumpy Road Ahead — RoiClusterFusionNode::fuseOnSingleImage increases from 5 to 6 logical blocks with deeply nested code; the threshold is one single block per function.
debugger_->image_rois_ = debug_image_rois;
debugger_->obstacle_rois_ = debug_pointcloud_rois;
debugger_->obstacle_points_ = debug_image_points;
debugger_->publishImage(image_id, input_roi_msg.header.stamp);
}
}
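The association loop above keeps, for each detected image ROI, the cluster ROI with the highest IoU, and that maximum is what the debugger now renders on the image. A minimal sketch of plain axis-aligned rectangle IoU (the `Roi` struct and `calcIou` are illustrative stand-ins, not the package's `cal_iou_by_mode`, which additionally supports other overlap modes selected by `trust_object_iou_mode_` / `non_trust_object_iou_mode_`):

```cpp
#include <algorithm>

// Illustrative stand-in for sensor_msgs::msg::RegionOfInterest.
struct Roi
{
  int x_offset;
  int y_offset;
  int width;
  int height;
};

// Intersection-over-union of two axis-aligned ROIs, as used to associate
// detected image ROIs with projected cluster ROIs. Returns 0.0 on no overlap.
double calcIou(const Roi & a, const Roi & b)
{
  const int x1 = std::max(a.x_offset, b.x_offset);
  const int y1 = std::max(a.y_offset, b.y_offset);
  const int x2 = std::min(a.x_offset + a.width, b.x_offset + b.width);
  const int y2 = std::min(a.y_offset + a.height, b.y_offset + b.height);
  const int inter_w = std::max(0, x2 - x1);
  const int inter_h = std::max(0, y2 - y1);
  const double inter = static_cast<double>(inter_w) * inter_h;
  const double uni =
    static_cast<double>(a.width) * a.height + static_cast<double>(b.width) * b.height - inter;
  return uni > 0.0 ? inter / uni : 0.0;
}
```

The score lies in [0, 1]; the node compares it against `iou_threshold_` (or `unknown_iou_threshold_` for UNKNOWN-labeled ROIs) before overwriting the matched cluster's classification.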