
Improve and document the point cloud pipeline performance evaluation methods #6032

Closed · 9 tasks done
xmfcx opened this issue Jan 9, 2024 · 2 comments

Comments

xmfcx (Contributor) commented Jan 9, 2024

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

Currently, in Autoware, many developers rely on ros2 topic delay pointcloud_topic_name to measure the point cloud pipeline performance. This method is suitable for small messages, but for larger messages such as point clouds or images, it introduces significant overhead and reports exaggerated delays.
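For reference, the measurement in question is typically taken with a command such as the one below; the topic name is only an illustrative example and should be replaced with the actual output topic of the pipeline under test.

    ros2 topic delay /sensing/lidar/concatenated/pointcloud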

The primary reason is that our sensing/perception nodes run within composable node containers and use intra-process communication. External subscriptions to these messages, such as those created by ros2 topic delay or rviz2, add extra delay and can even slow down the existing pipeline simply by subscribing from outside.

A more efficient method would be for the nodes themselves to report the delays.

Purpose

The purpose of this issue is to improve the accuracy and reliability of point cloud pipeline performance evaluation in Autoware. By developing a method in which the nodes report delays internally, we can measure performance more precisely, which is essential for optimizing and benchmarking our systems.

Possible approaches

  1. Extend existing tier4_autoware_utils::DebugPublisher instances within nodes to report the accumulated delay (current time - header time), for example:
    // Accumulated pipeline delay of this message, in milliseconds:
    // elapsed time from the sensor header stamp to now.
    const auto accumulated_time =
      std::chrono::duration<double, std::milli>(
        std::chrono::nanoseconds((this->get_clock()->now() - input->header.stamp).nanoseconds()))
        .count();

    // Publish it on a per-node debug topic so no external tool has to
    // subscribe to the large point cloud messages themselves.
    debug_publisher_->publish<tier4_debug_msgs::msg::Float64Stamped>(
      "debug/accumulated_time_ms", accumulated_time);
  2. Update the Autoware documentation to reflect these changes
  3. Add a tutorial page for the evaluation of the point cloud pipeline performance
  4. Create additional tools to assist in the evaluation (see the monitoring sketch after this list for one possible starting point)
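For item 4, a minimal sketch of such a tool is shown below. It is not part of this issue's deliverables; it assumes a node publishes tier4_debug_msgs::msg::Float64Stamped on a debug/accumulated_time_ms topic as in approach 1, and the topic name used here is only an example. Because it subscribes to the small debug topic rather than the point cloud itself, it does not add the overhead described above.

    // Minimal latency monitor sketch (assumption: the node under test follows
    // approach 1 and publishes its accumulated delay as Float64Stamped).
    #include <cstddef>
    #include <memory>

    #include <rclcpp/rclcpp.hpp>
    #include <tier4_debug_msgs/msg/float64_stamped.hpp>

    class AccumulatedTimeMonitor : public rclcpp::Node
    {
    public:
      AccumulatedTimeMonitor() : Node("accumulated_time_monitor")
      {
        using tier4_debug_msgs::msg::Float64Stamped;
        sub_ = create_subscription<Float64Stamped>(
          // Example topic; replace with the debug topic of the node under test.
          "/sensing/lidar/concatenated/debug/accumulated_time_ms", rclcpp::QoS(10),
          [this](const Float64Stamped & msg) {
            // Keep a running mean of the reported delay and log every sample.
            ++count_;
            sum_ms_ += msg.data;
            RCLCPP_INFO(
              get_logger(), "latest: %.2f ms, mean over %zu samples: %.2f ms",
              msg.data, count_, sum_ms_ / static_cast<double>(count_));
          });
      }

    private:
      rclcpp::Subscription<tier4_debug_msgs::msg::Float64Stamped>::SharedPtr sub_;
      std::size_t count_{0};
      double sum_ms_{0.0};
    };

    int main(int argc, char ** argv)
    {
      rclcpp::init(argc, argv);
      rclcpp::spin(std::make_shared<AccumulatedTimeMonitor>());
      rclcpp::shutdown();
      return 0;
    }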

Definition of done

cc. @kminoda @miursh

xmfcx added the type:performance (Software optimization and system performance), type:documentation (Creating or refining documentation; auto-assigned), and type:new-feature (New functionalities or additions, feature requests) labels on Jan 9, 2024

stale bot commented Jun 13, 2024

This pull request has been automatically marked as stale because it has not had recent activity.

stale bot added the status:stale (Inactive or outdated issues; auto-assigned) label on Jun 13, 2024
brkay54 (Member) commented Jun 14, 2024

Hi @xmfcx, I have completed all the tasks above; I had forgotten about this issue, sorry for that. If you don't have anything to add, I think we can close this.
