From a367a099c0337df1fb651ba92d468b8a9db8fbaa Mon Sep 17 00:00:00 2001 From: github-actions Date: Thu, 21 Sep 2023 06:10:12 +0000 Subject: [PATCH] Deployed 88ce4306 to latest with MkDocs 1.4.3 and mike 1.2.0.dev0 --- latest/index.html | 3 +++ latest/search/search_index.json | 2 +- latest/sitemap.xml.gz | Bin 2834 -> 2834 bytes 3 files changed, 4 insertions(+), 1 deletion(-) diff --git a/latest/index.html b/latest/index.html index a378e72cc1641..643b2b3d72aa1 100644 --- a/latest/index.html +++ b/latest/index.html @@ -11953,6 +11953,9 @@

autoware.universe#

+

For Autoware's general documentation, see Autoware Documentation.

+

For detailed documents of Autoware Universe components, see Autoware Universe Documentation.

+
diff --git a/latest/search/search_index.json b/latest/search/search_index.json index 5848fc6cafd63..118a28ef41d3e 100644 --- a/latest/search/search_index.json +++ b/latest/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"autoware.universe","text":""},{"location":"CODE_OF_CONDUCT/","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#our-pledge","title":"Our Pledge","text":"

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

"},{"location":"CODE_OF_CONDUCT/#our-standards","title":"Our Standards","text":"

Examples of behavior that contributes to a positive environment for our community include:

Examples of unacceptable behavior include:

"},{"location":"CODE_OF_CONDUCT/#enforcement-responsibilities","title":"Enforcement Responsibilities","text":"

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

"},{"location":"CODE_OF_CONDUCT/#scope","title":"Scope","text":"

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

"},{"location":"CODE_OF_CONDUCT/#enforcement","title":"Enforcement","text":"

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at conduct@autoware.org. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

"},{"location":"CODE_OF_CONDUCT/#enforcement-guidelines","title":"Enforcement Guidelines","text":"

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

"},{"location":"CODE_OF_CONDUCT/#1-correction","title":"1. Correction","text":"

Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

"},{"location":"CODE_OF_CONDUCT/#2-warning","title":"2. Warning","text":"

Community Impact: A violation through a single incident or series of actions.

Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

"},{"location":"CODE_OF_CONDUCT/#3-temporary-ban","title":"3. Temporary Ban","text":"

Community Impact: A serious violation of community standards, including sustained inappropriate behavior.

Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

"},{"location":"CODE_OF_CONDUCT/#4-permanent-ban","title":"4. Permanent Ban","text":"

Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

Consequence: A permanent ban from any sort of public interaction within the community.

"},{"location":"CODE_OF_CONDUCT/#attribution","title":"Attribution","text":"

This Code of Conduct is adapted from the Contributor Covenant, version 2.1, available at https://www.contributor-covenant.org/version/2/1/code_of_conduct.html.

Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder.

For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.

"},{"location":"CONTRIBUTING/","title":"Contributing","text":"

See https://autowarefoundation.github.io/autoware-documentation/main/contributing/.

"},{"location":"DISCLAIMER/","title":"DISCLAIMER","text":"

DISCLAIMER

\u201cAutoware\u201d will be provided by The Autoware Foundation under the Apache License 2.0. This \u201cDISCLAIMER\u201d will be applied to all users of Autoware (a \u201cUser\u201d or \u201cUsers\u201d) with the Apache License 2.0 and Users shall hereby approve and acknowledge all the contents specified in this disclaimer below and will be deemed to consent to this disclaimer without any objection upon utilizing or downloading Autoware.

Disclaimer and Waiver of Warranties

  1. AUTOWARE FOUNDATION MAKES NO REPRESENTATION OR WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, WITH RESPECT TO PROVIDING AUTOWARE (the \u201cService\u201d) including but not limited to any representation or warranty (i) of fitness or suitability for a particular purpose contemplated by the Users, (ii) of the expected functions, commercial value, accuracy, or usefulness of the Service, (iii) that the use by the Users of the Service complies with the laws and regulations applicable to the Users or any internal rules established by industrial organizations, (iv) that the Service will be free of interruption or defects, (v) of the non-infringement of any third party's right and (vi) the accuracy of the content of the Services and the software itself.

  2. The Autoware Foundation shall not be liable for any damages incurred by the User that are attributable to the Autoware Foundation for any reason whatsoever. UNDER NO CIRCUMSTANCES SHALL THE AUTOWARE FOUNDATION BE LIABLE FOR INCIDENTAL, INDIRECT, SPECIAL OR FUTURE DAMAGES OR LOSS OF PROFITS.

  3. A User shall be entirely responsible for the content posted by the User and its use of any content of the Service or the Website. If the User is held responsible in a civil action such as a claim for damages or even in a criminal case, the Autoware Foundation and member companies, governments and academic & non-profit organizations and their directors, officers, employees and agents (collectively, the \u201cIndemnified Parties\u201d) shall be completely discharged from any rights or assertions the User may have against the Indemnified Parties, or from any legal action, litigation or similar procedures.

Indemnity

A User shall indemnify and hold the Indemnified Parties harmless from any of their damages, losses, liabilities, costs or expenses (including attorneys' fees or criminal compensation), or any claims or demands made against the Indemnified Parties by any third party, due to or arising out of, or in connection with utilizing Autoware (including the representations and warranties), the violation of applicable Product Liability Law of each country (including criminal case) or violation of any applicable laws by the Users, or the content posted by the User or its use of any content of the Service or the Website.

"},{"location":"common/autoware_auto_common/design/comparisons/","title":"Comparisons","text":"

The float_comparisons.hpp library is a simple set of functions for performing approximate numerical comparisons. There are separate functions for performing comparisons using absolute bounds and relative bounds. Absolute comparison checks are prefixed with abs_ and relative checks are prefixed with rel_.

The bool_comparisons.hpp library additionally contains an XOR operator.

The intent of the library is to improve the readability of code and reduce the likelihood of typographical errors when using numerical and boolean comparisons.

"},{"location":"common/autoware_auto_common/design/comparisons/#target-use-cases","title":"Target use cases","text":"

The approximate comparisons are intended to be used to check whether two numbers lie within some absolute or relative interval. The exclusive_or function will test whether two values cast to different boolean values.

"},{"location":"common/autoware_auto_common/design/comparisons/#assumptions","title":"Assumptions","text":""},{"location":"common/autoware_auto_common/design/comparisons/#example-usage","title":"Example Usage","text":"
#include \"common/bool_comparisons.hpp\"\n#include \"common/float_comparisons.hpp\"\n\n#include <iostream>\n\n// using-directive is just for illustration; don't do this in practice\nusing namespace autoware::common::helper_functions::comparisons;\n\nstatic constexpr auto epsilon = 0.2;\nstatic constexpr auto relative_epsilon = 0.01;\n\nstd::cout << exclusive_or(true, false) << \"\\n\";\n// Prints: true\n\nstd::cout << rel_eq(1.0, 1.1, relative_epsilon)) << \"\\n\";\n// Prints: false\n\nstd::cout << approx_eq(10000.0, 10010.0, epsilon, relative_epsilon)) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_eq(4.0, 4.2, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_ne(4.0, 4.2, epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << abs_eq_zero(0.2, epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << abs_lt(4.0, 4.25, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_lte(1.0, 1.2, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_gt(1.25, 1.0, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_gte(0.75, 1.0, epsilon) << \"\\n\";\n// Prints: false\n
"},{"location":"common/autoware_auto_geometry/design/interval/","title":"Interval","text":"

The interval is a standard 1D real-valued interval. The class implements a representation and operations on the interval type and guarantees interval validity on construction. Basic operations and accessors are implemented, as well as other common operations. See 'Example Usage' below.

"},{"location":"common/autoware_auto_geometry/design/interval/#target-use-cases","title":"Target use cases","text":""},{"location":"common/autoware_auto_geometry/design/interval/#properties","title":"Properties","text":""},{"location":"common/autoware_auto_geometry/design/interval/#conventions","title":"Conventions","text":""},{"location":"common/autoware_auto_geometry/design/interval/#assumptions","title":"Assumptions","text":""},{"location":"common/autoware_auto_geometry/design/interval/#example-usage","title":"Example Usage","text":"
#include \"geometry/interval.hpp\"\n\n#include <iostream>\n\n// using-directive is just for illustration; don't do this in practice\nusing namespace autoware::common::geometry;\n\n// bounds for example interval\nconstexpr auto MIN = 0.0;\nconstexpr auto MAX = 1.0;\n\n//\n// Try to construct an invalid interval. This will give the following error:\n// 'Attempted to construct an invalid interval: {\"min\": 1.0, \"max\": 0.0}'\n//\n\ntry {\nconst auto i = Interval_d(MAX, MIN);\n} catch (const std::runtime_error& e) {\nstd::cerr << e.what();\n}\n\n//\n// Construct a double precision interval from 0 to 1\n//\n\nconst auto i = Interval_d(MIN, MAX);\n\n//\n// Test accessors and properties\n//\n\nstd::cout << Interval_d::min(i) << \" \" << Interval_d::max(i) << \"\\n\";\n// Prints: 0.0 1.0\n\nstd::cout << Interval_d::empty(i) << \" \" << Interval_d::length(i) << \"\\n\";\n// Prints: false 1.0\n\nstd::cout << Interval_d::contains(i, 0.3) << \"\\n\";\n// Prints: true\n\nstd::cout << Interval_d::is_subset_eq(Interval_d(0.2, 0.4), i) << \"\\n\";\n// Prints: true\n\n//\n// Test operations.\n//\n\nstd::cout << Interval_d::intersect(i, Interval(-1.0, 0.3)) << \"\\n\";\n// Prints: {\"min\": 0.0, \"max\": 0.3}\n\nstd::cout << Interval_d::project_to_interval(i, 0.5) << \" \"\n<< Interval_d::project_to_interval(i, -1.3) << \"\\n\";\n// Prints: 0.5 0.0\n\n//\n// Distinguish empty/zero measure\n//\n\nconst auto i_empty = Interval();\nconst auto i_zero_length = Interval(0.0, 0.0);\n\nstd::cout << Interval_d::empty(i_empty) << \" \"\n<< Interval_d::empty(i_zero_length) << \"\\n\";\n// Prints: true false\n\nstd::cout << Interval_d::zero_measure(i_empty) << \" \"\n<< Interval_d::zero_measure(i_zero_length) << \"\\n\";\n// Prints: false false\n
"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/","title":"2D Convex Polygon Intersection","text":"

The intersection of two convex polygons can be visualized as the blue area in the image below:

"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#purpose-use-cases","title":"Purpose / Use cases","text":"

Computing the intersection between two polygons can be useful in many applications of scene understanding. It can be used for collision detection, shape alignment, shape association, and in any application that deals with the objects around the perceiving agent.

"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#design","title":"Design","text":"

Livermore, Calif, 1977 mentions the following observations about convex polygon intersection:

"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

With the observation mentioned above, the current algorithm operates in the following way:

"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"

Inputs:

Outputs:

"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#future-work","title":"Future Work","text":""},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#1230-applying-efficient-algorithms","title":"1230: Applying efficient algorithms.","text":""},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#related-issues","title":"Related issues","text":""},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#983-integrate-vision-detections-in-object-tracker","title":"983: Integrate vision detections in object tracker","text":""},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/","title":"Spatial Hash","text":"

The spatial hash is a data structure designed for efficient fixed-radius near-neighbor queries in low dimensions.

The fixed-radius near-neighbors problem is defined as follows:

For point p, find all points p' s.t. d(p, p') < r

where in this case d(p, p') is the Euclidean distance and r is the fixed radius.

For n points with an average of k neighbors each, this data structure can perform m near-neighbor queries (to generate lists of near-neighbors for m different points) in O(mk) time.

By contrast, using a k-d tree for successive nearest-neighbor queries results in a running time of O(m log n).

The spatial hash works as follows:

Under the hood, an std::unordered_multimap is used, where the key is a bin/voxel index. The bin size is computed to be the same as the lookup distance.

In addition, this data structure can support 2D or 3D queries. This is determined during configuration and baked into the data structure via the configuration class. The purpose of this is to avoid if statements in tight loops. The configuration class specializations themselves use CRTP (the Curiously Recurring Template Pattern) to achieve \"static polymorphism\" and avoid a dispatching call.
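
To make the binning concrete, here is a minimal sketch of a 2D configuration (the type names, hash combination, and query logic are illustrative assumptions, not this package's actual code): a point is mapped to a bin index by dividing its coordinates by the bin size, and a near query scans the 3x3 block of bins around the query point.

#include <cmath>\n#include <cstddef>\n#include <cstdint>\n#include <functional>\n#include <unordered_map>\n#include <vector>\n\nstruct Point2d\n{\ndouble x;\ndouble y;\n};\n\n// Hypothetical bin key: with the bin size equal to the lookup radius r, every\n// neighbor within r of a query lies in the query's bin or one of the 8 adjacent bins.\nstruct BinIndex2d\n{\nstd::int64_t ix;\nstd::int64_t iy;\nbool operator==(const BinIndex2d & o) const { return ix == o.ix && iy == o.iy; }\n};\n\nstruct BinHash\n{\nstd::size_t operator()(const BinIndex2d & b) const\n{\nreturn std::hash<std::int64_t>{}(b.ix) ^ (std::hash<std::int64_t>{}(b.iy) << 1U);\n}\n};\n\nusing SpatialHash2d = std::unordered_multimap<BinIndex2d, Point2d, BinHash>;\n\nBinIndex2d bin_of(const Point2d & p, const double r)\n{\nreturn {\nstatic_cast<std::int64_t>(std::floor(p.x / r)), static_cast<std::int64_t>(std::floor(p.y / r))};\n}\n\n// Fixed-radius near-neighbor query: scan the 3x3 block of bins around the query\n// point, then filter the candidates by exact euclidean distance.\nstd::vector<Point2d> near(const SpatialHash2d & hash, const Point2d & q, const double r)\n{\nstd::vector<Point2d> result;\nconst BinIndex2d b = bin_of(q, r);\nfor (std::int64_t dx = -1; dx <= 1; ++dx) {\nfor (std::int64_t dy = -1; dy <= 1; ++dy) {\nconst auto range = hash.equal_range({b.ix + dx, b.iy + dy});\nfor (auto it = range.first; it != range.second; ++it) {\nif (std::hypot(it->second.x - q.x, it->second.y - q.y) < r) {\nresult.push_back(it->second);\n}\n}\n}\n}\nreturn result;\n}\n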

"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#performance-characterization","title":"Performance characterization","text":""},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#time","title":"Time","text":"

Insertion is worst-case O(n) because lookup in the underlying hashmap is worst-case O(n). In practice, hashmap lookup, and thus insertion time, should be O(1).

Removing a point is O(1) because the current API only supports removal via direct reference to a node.

Finding the k near-neighbors of a point is worst-case O(n) for an adversarial example, but O(k) in practice.

"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#space","title":"Space","text":"

The module consists of the following components:

This results in O(n) space complexity.

"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#states","title":"States","text":"

The spatial hash's state is dictated by the status of the underlying unordered_multimap.

The data structure is wholly configured by a config class. The constructor of the config class determines whether the data structure accepts strictly 2D or strictly 3D queries.

"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#inputs","title":"Inputs","text":"

The primary method of introducing data into the data structure is via the insert method.

"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#outputs","title":"Outputs","text":"

The primary method of retrieving data from the data structure is the near method (in either its 2D or 3D configuration).

The whole data structure can also be traversed using standard constant iterators.

"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#future-work","title":"Future Work","text":""},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#related-issues","title":"Related issues","text":""},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#28-port-to-autowareauto","title":"28: Port to autoware.Auto","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/","title":"autoware_auto_perception_plugin","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#purpose","title":"Purpose","text":"

It is an rviz plugin for visualizing the results from the perception module. This package is based on the implementation of the rviz plugin developed by Autoware.Auto.

See Autoware.Auto design documentation for the original design philosophy. [1]

"},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types-visualization-results","title":"Input Types / Visualization Results","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#detectedobjects","title":"DetectedObjects","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types","title":"Input Types","text":"Name Type Description autoware_auto_perception_msgs::msg::DetectedObjects detection result array"},{"location":"common/autoware_auto_perception_rviz_plugin/#visualization-result","title":"Visualization Result","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#trackedobjects","title":"TrackedObjects","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types_1","title":"Input Types","text":"Name Type Description autoware_auto_perception_msgs::msg::TrackedObjects tracking result array"},{"location":"common/autoware_auto_perception_rviz_plugin/#visualization-result_1","title":"Visualization Result","text":"

Overwrite tracking results with detection results.

"},{"location":"common/autoware_auto_perception_rviz_plugin/#predictedobjects","title":"PredictedObjects","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types_2","title":"Input Types","text":"Name Type Description autoware_auto_perception_msgs::msg::PredictedObjects prediction result array"},{"location":"common/autoware_auto_perception_rviz_plugin/#visualization-result_2","title":"Visualization Result","text":"

Overwrite prediction results with tracking results.

"},{"location":"common/autoware_auto_perception_rviz_plugin/#referencesexternal-links","title":"References/External links","text":"

[1] https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/tree/master/src/tools/visualization/autoware_rviz_plugins

"},{"location":"common/autoware_auto_perception_rviz_plugin/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/","title":"autoware_auto_tf2","text":"

This is the design document for the autoware_auto_tf2 package.

"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#purpose-use-cases","title":"Purpose / Use cases","text":"

In general, users of ROS rely on tf (and its successor, tf2) for publishing and utilizing coordinate frame transforms. This is true even to the extent that tf2 contains the packages tf2_geometry_msgs and tf2_sensor_msgs, which allow for easy conversion to and from the message types defined in geometry_msgs and sensor_msgs, respectively. However, AutowareAuto contains some specialized message types which are not transformable between frames using the ROS2 library. The autoware_auto_tf2 package aims to provide developers with tools to transform applicable autoware_auto_msgs types. In addition to this, this package also provides transform tools for message types in geometry_msgs missing in tf2_geometry_msgs.

"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#design","title":"Design","text":"

While writing tf2_some_msgs or contributing to tf2_geometry_msgs, compatibility and design intent were ensured with the following files in the existing tf2 framework:

For example:

void tf2::convert( const A & a,B & b)\n

The method tf2::convert is dependent on the following:

template<typename A, typename B>\nB tf2::toMsg(const A& a);\ntemplate<typename A, typename B>\nvoid tf2::fromMsg(const A&, B& b);\n\n// New way to transform instead of using tf2::doTransform() directly\ntf2_ros::BufferInterface::transform(...)\n

Which, in turn, is dependent on the following:

void tf2::convert( const A & a,B & b)\nconst std::string& tf2::getFrameId(const T& t)\nconst ros::Time& tf2::getTimestamp(const T& t);\n
"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#current-implementation-of-tf2_geometry_msgs","title":"Current Implementation of tf2_geometry_msgs","text":"

In both ROS1 and ROS2, stamped msgs like Vector3Stamped and QuaternionStamped have associated functions like:

In ROS1, to support tf2::convert and the needs of doTransform for stamped data, non-stamped underlying data types like Vector3 and Point have implementations of the following functions:

In ROS2, much of the doTransform method does not use toMsg and fromMsg, as data types from tf2 are not used. Instead, doTransform is done using KDL; thus, functions relating to underlying data types such as Vector3 and Point were not added, or were ported in the commit ros/geometry2/commit/6f2a82. The non-stamped data types with toMsg and fromMsg are Quaternion and Transform. Pose has the modified toMsg and is not used by PoseStamped.

"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#plan-for-autoware_auto_perception_msgsmsgboundingboxarray","title":"Plan for autoware_auto_perception_msgs::msg::BoundingBoxArray","text":"

The initial rough plan was to implement some of the common tf2 functions like toMsg, fromMsg, and doTransform, as needed for all the underlying data types in BoundingBoxArray. Examples of the data types include: BoundingBox, Quaternion32, and Point32. In addition, the implementation should be done such that upstream contributions could also be made to geometry_msgs.

"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Due to conflicts in function signatures, the predefined template of convert.h / transform_functions.h is not followed; compatibility with tf2::convert(..) is broken, and toMsg is written differently.

// Old style\ngeometry_msgs::Vector3 toMsg(const tf2::Vector3& in)\ngeometry_msgs::Point& toMsg(const tf2::Vector3& in)\n\n// New style\ngeometry_msgs::Point& toMsg(const tf2::Vector3& in, geometry_msgs::Point& out)\n
"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"

The library provides the doTransform API for data types that are either not available in tf2_geometry_msgs or are message types that are part of autoware_auto_msgs and are therefore custom and not inherently supported by any of the tf2 libraries. The following APIs are provided for the following data types:

inline void doTransform(\nconst geometry_msgs::msg::Point32 & t_in,\ngeometry_msgs::msg::Point32 & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
inline void doTransform(\nconst autoware_auto_geometry_msgs::msg::Quaternion32 & t_in,\nautoware_auto_geometry_msgs::msg::Quaternion32 & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
inline void doTransform(\nconst BoundingBox & t_in, BoundingBox & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
inline void doTransform(\nconst BoundingBoxArray & t_in,\nBoundingBoxArray & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n

In addition, the following helper methods are also added:

inline tf2::TimePoint getTimestamp(const BoundingBoxArray & t)\n\ninline std::string getFrameId(const BoundingBoxArray & t)\n
"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#challenges","title":"Challenges","text":""},{"location":"common/autoware_testing/design/autoware_testing-design/","title":"autoware_testing","text":"

This is the design document for the autoware_testing package.

"},{"location":"common/autoware_testing/design/autoware_testing-design/#purpose-use-cases","title":"Purpose / Use cases","text":"

The package aims to provide a unified way to add standard testing functionality to a package, currently supporting:

"},{"location":"common/autoware_testing/design/autoware_testing-design/#design","title":"Design","text":"

Uses ros_testing (which is an extension of launch_testing) and provides some parametrized, reusable standard tests to run.

"},{"location":"common/autoware_testing/design/autoware_testing-design/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Parametrization is limited to the package name, executable name, parameters filename, and executable arguments. The test namespace is set to 'test'. The parameters file for the package is expected to be in the param directory inside the package.

"},{"location":"common/autoware_testing/design/autoware_testing-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"

To add a smoke test to your package tests, add a test dependency on autoware_testing to package.xml

<test_depend>autoware_testing</test_depend>\n

and add the following two lines to CMakeLists.txt in the if(BUILD_TESTING) section:

find_package(autoware_testing REQUIRED)\nadd_smoke_test(<package_name> <executable_name> [PARAM_FILENAME <param_filename>] [EXECUTABLE_ARGUMENTS <arguments>])\n

Where

<package_name> - [required] tested node package name.

<executable_name> - [required] tested node executable name.

<param_filename> - [optional] param filename. Default value is test.param.yaml. Required mostly in situations where there are multiple smoke tests in a package and each requires a different parameter set

<arguments> - [optional] arguments passed to executable. By default no arguments are passed.

which adds an <executable_name>_smoke_test test to the suite.

Example test result:

build/<package_name>/test_results/<package_name>/<executable_name>_smoke_test.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\n
"},{"location":"common/autoware_testing/design/autoware_testing-design/#references-external-links","title":"References / External links","text":""},{"location":"common/autoware_testing/design/autoware_testing-design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"common/autoware_testing/design/autoware_testing-design/#related-issues","title":"Related issues","text":""},{"location":"common/bag_time_manager_rviz_plugin/","title":"bag_time_manager_rviz_plugin","text":""},{"location":"common/bag_time_manager_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin allows publishing and controlling the ROS bag time.

"},{"location":"common/bag_time_manager_rviz_plugin/#output","title":"Output","text":"

TBD.

"},{"location":"common/bag_time_manager_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.

  2. Select BagTimeManagerPanel and press OK.

  3. Confirm that bag_time_manager_rviz_plugin/BagTimeManagerPanel has been added.

"},{"location":"common/component_interface_utils/","title":"component_interface_utils","text":""},{"location":"common/component_interface_utils/#features","title":"Features","text":"

This is a utility package that provides the following features:

"},{"location":"common/component_interface_utils/#design","title":"Design","text":"

This package provides the wrappers for the interface classes of rclcpp. The wrappers limit the usage of the original class to enforce the processing recommended by the component interface. The wrappers do not inherit from the rclcpp classes; they forward or wrap only the member functions that are allowed to be used.

"},{"location":"common/component_interface_utils/#instantiation-of-the-wrapper-class","title":"Instantiation of the wrapper class","text":"

The wrapper class requires interface information in this format.

struct SampleService\n{\nusing Service = sample_msgs::srv::ServiceType;\nstatic constexpr char name[] = \"/sample/service\";\n};\n\nstruct SampleMessage\n{\nusing Message = sample_msgs::msg::MessageType;\nstatic constexpr char name[] = \"/sample/message\";\nstatic constexpr size_t depth = 3;\nstatic constexpr auto reliability = RMW_QOS_POLICY_RELIABILITY_RELIABLE;\nstatic constexpr auto durability = RMW_QOS_POLICY_DURABILITY_TRANSIENT_LOCAL;\n};\n

Create the wrapper using the above definition as follows.

// header file\ncomponent_interface_utils::Service<SampleService>::SharedPtr srv_;\ncomponent_interface_utils::Client<SampleService>::SharedPtr cli_;\ncomponent_interface_utils::Publisher<SampleMessage>::SharedPtr pub_;\ncomponent_interface_utils::Subscription<SampleMessage>::SharedPtr sub_;\n\n// source file\nconst auto node = component_interface_utils::NodeAdaptor(this);\nnode.init_srv(srv_, callback);\nnode.init_cli(cli_);\nnode.init_pub(pub_);\nnode.init_sub(sub_, callback);\n
"},{"location":"common/component_interface_utils/#logging-for-service-and-client","title":"Logging for service and client","text":"

If the wrapper class is used, logging is automatically enabled. The log level is RCLCPP_INFO.

"},{"location":"common/component_interface_utils/#service-exception-for-response","title":"Service exception for response","text":"

If the wrapper class is used and the service response has a status, a thrown ServiceException will automatically be caught and set to the status. This is useful when returning an error from a function called from the service callback.

void service_callback(Request req, Response res)\n{\nfunction();\nres->status.success = true;\n}\n\nvoid function()\n{\nthrow ServiceException(ERROR_CODE, \"message\");\n}\n

If the wrapper class is not used or the service response has no status, manually catch the ServiceException as follows.

void service_callback(Request req, Response res)\n{\ntry {\nfunction();\nres->status.success = true;\n} catch (const ServiceException & error) {\nres->status = error.status();\n}\n}\n
"},{"location":"common/component_interface_utils/#relays-for-topic-and-service","title":"Relays for topic and service","text":"

There are utilities for relaying services and messages of the same type.

const auto node = component_interface_utils::NodeAdaptor(this);\nservice_callback_group_ = create_callback_group(rclcpp::CallbackGroupType::MutuallyExclusive);\nnode.relay_message(pub_, sub_);\nnode.relay_service(cli_, srv_, service_callback_group_);  // group is for avoiding deadlocks\n
"},{"location":"common/fake_test_node/design/fake_test_node-design/","title":"Fake Test Node","text":""},{"location":"common/fake_test_node/design/fake_test_node-design/#what-this-package-provides","title":"What this package provides","text":"

When writing an integration test for a node in C++ using GTest, there is quite some boilerplate code that needs to be written to set up a fake node that publishes expected messages on an expected topic and subscribes to messages on some other topic. This is usually implemented as a custom GTest fixture.

This package contains a library that introduces two utility classes that can be used in place of custom fixtures described above to write integration tests for a node:

These fixtures take care of initializing and re-initializing rclcpp as well as of checking that all subscribers and publishers have a match, thus reducing the amount of boilerplate code that the user needs to write.

"},{"location":"common/fake_test_node/design/fake_test_node-design/#how-to-use-this-library","title":"How to use this library","text":"

After including the relevant header, the user can create a typedef to give the fixture a custom name and use the provided classes as fixtures in TEST_F and TEST_P tests directly.

"},{"location":"common/fake_test_node/design/fake_test_node-design/#example-usage","title":"Example usage","text":"

Let's say there is a node NodeUnderTest that requires testing. It just subscribes to std_msgs::msg::Int32 messages and publishes a std_msgs::msg::Bool to indicate that the input is positive. To test such a node, the following code can be used, utilizing the autoware::tools::testing::FakeTestNode:

using FakeNodeFixture = autoware::tools::testing::FakeTestNode;\n\n/// @test Test that we can use a non-parametrized test.\nTEST_F(FakeNodeFixture, Test) {\nInt32 msg{};\nmsg.data = 15;\nconst auto node = std::make_shared<NodeUnderTest>();\n\nBool::SharedPtr last_received_msg{};\nauto fake_odom_publisher = create_publisher<Int32>(\"/input_topic\");\nauto result_odom_subscription = create_subscription<Bool>(\"/output_topic\", *node,\n[&last_received_msg](const Bool::SharedPtr msg) {last_received_msg = msg;});\n\nconst auto dt{std::chrono::milliseconds{100LL}};\nconst auto max_wait_time{std::chrono::seconds{10LL}};\nauto time_passed{std::chrono::milliseconds{0LL}};\nwhile (!last_received_msg) {\nfake_odom_publisher->publish(msg);\nrclcpp::spin_some(node);\nrclcpp::spin_some(get_fake_node());\nstd::this_thread::sleep_for(dt);\ntime_passed += dt;\nif (time_passed > max_wait_time) {\nFAIL() << \"Did not receive a message soon enough.\";\n}\n}\nEXPECT_TRUE(last_received_msg->data);\nSUCCEED();\n}\n

Here only the TEST_F example is shown. A TEST_P usage is very similar, with a little more boilerplate to set up all the parameter values; see test_fake_test_node.cpp for an example usage.

"},{"location":"common/global_parameter_loader/Readme/","title":"Autoware Global Parameter Loader","text":"

This package sets common ROS parameters for each node.

"},{"location":"common/global_parameter_loader/Readme/#usage","title":"Usage","text":"

Add the following lines to the launch file of the node in which you want to get global parameters.

<!-- Global parameters -->\n<include file=\"$(find-pkg-share global_parameter_loader)/launch/global_params.launch.py\">\n<arg name=\"vehicle_model\" value=\"$(var vehicle_model)\"/>\n</include>\n

The vehicle model parameter is read from config/vehicle_info.param.yaml in the vehicle_model_description package.

"},{"location":"common/global_parameter_loader/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Currently only vehicle_info is loaded by this launcher.

"},{"location":"common/goal_distance_calculator/Readme/","title":"goal_distance_calculator","text":""},{"location":"common/goal_distance_calculator/Readme/#purpose","title":"Purpose","text":"

This node publishes the deviation of the self-pose from the goal pose.

"},{"location":"common/goal_distance_calculator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/goal_distance_calculator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/goal_distance_calculator/Readme/#input","title":"Input","text":"Name Type Description /planning/mission_planning/route autoware_auto_planning_msgs::msg::Route Used to get goal pose /tf tf2_msgs/TFMessage TF (self-pose)"},{"location":"common/goal_distance_calculator/Readme/#output","title":"Output","text":"Name Type Description deviation/lateral tier4_debug_msgs::msg::Float64Stamped publish lateral deviation of self-pose from goal pose[m] deviation/longitudinal tier4_debug_msgs::msg::Float64Stamped publish longitudinal deviation of self-pose from goal pose[m] deviation/yaw tier4_debug_msgs::msg::Float64Stamped publish yaw deviation of self-pose from goal pose[rad] deviation/yaw_deg tier4_debug_msgs::msg::Float64Stamped publish yaw deviation of self-pose from goal pose[deg]"},{"location":"common/goal_distance_calculator/Readme/#parameters","title":"Parameters","text":""},{"location":"common/goal_distance_calculator/Readme/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate double 10.0 Timer callback period. [Hz]"},{"location":"common/goal_distance_calculator/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation oneshot bool true publish deviations just once or repeatedly"},{"location":"common/goal_distance_calculator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/grid_map_utils/","title":"Grid Map Utils","text":""},{"location":"common/grid_map_utils/#overview","title":"Overview","text":"

This package contains a re-implementation of the grid_map::PolygonIterator used to iterate over all cells of a grid map contained inside some polygon.

"},{"location":"common/grid_map_utils/#algorithm","title":"Algorithm","text":"

This implementation uses the scan line algorithm, a common algorithm used to draw polygons on a rasterized image. The main idea of the algorithm adapted to a grid map is as follows:

More details on the scan line algorithm can be found in the References.
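
As a rough illustration of this idea (a simplified sketch under assumed grid conventions, not this package's implementation): for each row of cell centers, compute the x-intersections of the polygon edges with the scan line, sort them, and select the cells lying between consecutive pairs of intersections.

#include <algorithm>\n#include <cmath>\n#include <cstddef>\n#include <utility>\n#include <vector>\n\nstruct Pt\n{\ndouble x;\ndouble y;\n};\n\n// Collect the (row, col) indexes of cells whose center lies inside the polygon.\n// Cell (i, j) is assumed to have its center at (j + 0.5, i + 0.5).\nstd::vector<std::pair<int, int>> polygon_cells(\nconst std::vector<Pt> & poly, const int rows, const int cols)\n{\nstd::vector<std::pair<int, int>> cells;\nfor (int i = 0; i < rows; ++i) {\nconst double y = i + 0.5;  // scan line through this row of cell centers\nstd::vector<double> xs;  // x-intersections of polygon edges with the scan line\nfor (std::size_t k = 0; k < poly.size(); ++k) {\nconst Pt & a = poly[k];\nconst Pt & b = poly[(k + 1) % poly.size()];\nif ((a.y <= y) != (b.y <= y)) {  // the edge crosses the scan line\nxs.push_back(a.x + (y - a.y) * (b.x - a.x) / (b.y - a.y));\n}\n}\nstd::sort(xs.begin(), xs.end());\n// fill the cells between each pair of consecutive intersections\nfor (std::size_t k = 0; k + 1 < xs.size(); k += 2) {\nconst int j_begin = std::max(0, static_cast<int>(std::ceil(xs[k] - 0.5)));\nconst int j_end = std::min(cols - 1, static_cast<int>(std::floor(xs[k + 1] - 0.5)));\nfor (int j = j_begin; j <= j_end; ++j) {\ncells.emplace_back(i, j);\n}\n}\n}\nreturn cells;\n}\n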

"},{"location":"common/grid_map_utils/#api","title":"API","text":"

The grid_map_utils::PolygonIterator follows the same API as the original grid_map::PolygonIterator.

"},{"location":"common/grid_map_utils/#assumptions","title":"Assumptions","text":"

The behavior of the grid_map_utils::PolygonIterator is only guaranteed to match the grid_map::PolygonIterator if edges of the polygon do not exactly cross any cell center. In such a case, whether the crossed cell is considered inside or outside of the polygon can vary due to floating-point precision errors.

"},{"location":"common/grid_map_utils/#performances","title":"Performances","text":"

Benchmarking code is implemented in test/benchmarking.cpp and is also used to validate that the grid_map_utils::PolygonIterator behaves exactly like the grid_map::PolygonIterator.

The following figure shows a comparison of the runtime between the implementation of this package (grid_map_utils) and the original implementation (grid_map). The time measured includes the construction of the iterator and the iteration over all indexes and is shown using a logarithmic scale. Results were obtained by varying the side size n of a square grid map with 100 <= n <= 1000 (size = n means a grid of n x n cells), using random polygons with a number of vertices 3 <= m <= 100, and repeating each parameter pair (n, m) 10 times.

"},{"location":"common/grid_map_utils/#future-improvements","title":"Future improvements","text":"

There exist variations of the scan line algorithm for multiple polygons. These can be implemented if we want to iterate over the cells contained in at least one of multiple polygons.

The current implementation imitates the behavior of the original grid_map::PolygonIterator, where a cell is selected if its center position is inside the polygon. This behavior could be changed, for example, to return all cells overlapped by the polygon.

"},{"location":"common/grid_map_utils/#references","title":"References","text":""},{"location":"common/interpolation/","title":"Interpolation package","text":"

This package supplies linear and spline interpolation functions.

"},{"location":"common/interpolation/#linear-interpolation","title":"Linear Interpolation","text":"

lerp(src_val, dst_val, ratio) (for scalar interpolation) interpolates src_val and dst_val with ratio. This will be replaced with std::lerp(src_val, dst_val, ratio) in C++20.

lerp(base_keys, base_values, query_keys) (for vector interpolation) applies linear regression to each pair of consecutive points whose x values are base_keys and whose y values are base_values. Then it calculates the interpolated values on the y-axis for query_keys on the x-axis.
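
A minimal sketch of both overloads, assuming std::vector<double> arguments and sorted base_keys (not the exact signatures of this package):

#include <cstddef>\n#include <vector>\n\n// scalar interpolation: equivalent to std::lerp in C++20\ndouble lerp(const double src_val, const double dst_val, const double ratio)\n{\nreturn src_val + (dst_val - src_val) * ratio;\n}\n\n// vector interpolation: for each query key, find the segment\n// [base_keys[i], base_keys[i + 1]] containing it and interpolate linearly.\n// Assumes base_keys is sorted and each query key is within its range.\nstd::vector<double> lerp(\nconst std::vector<double> & base_keys, const std::vector<double> & base_values,\nconst std::vector<double> & query_keys)\n{\nstd::vector<double> query_values;\nfor (const double query_key : query_keys) {\nstd::size_t i = 0;\nwhile (i + 2 < base_keys.size() && base_keys[i + 1] < query_key) {\n++i;\n}\nconst double ratio = (query_key - base_keys[i]) / (base_keys[i + 1] - base_keys[i]);\nquery_values.push_back(lerp(base_values[i], base_values[i + 1], ratio));\n}\nreturn query_values;\n}\n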

"},{"location":"common/interpolation/#spline-interpolation","title":"Spline Interpolation","text":"

spline(base_keys, base_values, query_keys) (for vector interpolation) applies spline regression to each pair of consecutive points whose x values are base_keys and whose y values are base_values. Then it calculates the interpolated values on the y-axis for query_keys on the x-axis.
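
A hypothetical usage sketch, assuming std::vector<double> arguments matching the signature described above:

#include <vector>\n\n// signature as described above; the implementation solves the tridiagonal\n// linear system derived in the 'Spline Interpolation Algorithm' section below\nstd::vector<double> spline(\nconst std::vector<double> & base_keys, const std::vector<double> & base_values,\nconst std::vector<double> & query_keys);\n\n// interpolate y values at x = 0.5 and x = 1.5 from three base points\nconst std::vector<double> base_keys{0.0, 1.0, 2.0};\nconst std::vector<double> base_values{0.0, 1.0, 0.0};\nconst std::vector<double> query_keys{0.5, 1.5};\nconst auto query_values = spline(base_keys, base_values, query_keys);\n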

"},{"location":"common/interpolation/#evaluation-of-calculation-cost","title":"Evaluation of calculation cost","text":"

We evaluated the calculation cost of spline interpolation for 100 points and adopted the fastest method, the tridiagonal matrix algorithm. The other methods existed in the spline_interpolation package, which has been removed from Autoware.

Method Calculation time Tridiagonal Matrix Algorithm 0.007 [ms] Preconditioned Conjugate Gradient 0.024 [ms] Successive Over-Relaxation 0.074 [ms]"},{"location":"common/interpolation/#spline-interpolation-algorithm","title":"Spline Interpolation Algorithm","text":"

Assuming that the sizes of base_keys (x_i) and base_values (y_i) are N + 1, we aim to calculate spline interpolation with the following equation to interpolate between y_i and y_{i+1}.

Y_i(x) = a_i (x - x_i)^3 + b_i (x - x_i)^2 + c_i (x - x_i) + d_i \\ \\ \\ (i = 0, \\dots, N-1)

Constraints on spline interpolation are as follows. The number of constraints is 4N, which is equal to the number of variables of spline interpolation.

\\begin{align} Y_i (x_i) & = y_i \\ \\ \\ (i = 0, \\dots, N-1) \\\\ Y_i (x_{i+1}) & = y_{i+1} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ Y'_i (x_{i+1}) & = Y'_{i+1} (x_{i+1}) \\ \\ \\ (i = 0, \\dots, N-2) \\\\ Y''_i (x_{i+1}) & = Y''_{i+1} (x_{i+1}) \\ \\ \\ (i = 0, \\dots, N-2) \\\\ Y''_0 (x_0) & = 0 \\\\ Y''_{N-1} (x_N) & = 0 \\end{align}

According to this article, spline interpolation is formulated as the following linear equation.

\\begin{align} \\begin{pmatrix} 2(h_0 + h_1) & h_1 \\\\ h_0 & 2 (h_1 + h_2) & h_2 & & O \\\\ & & & \\ddots \\\\ O & & & & h_{N-2} & 2 (h_{N-2} + h_{N-1}) \\end{pmatrix} \\begin{pmatrix} v_1 \\\\ v_2 \\\\ v_3 \\\\ \\vdots \\\\ v_{N-1} \\end{pmatrix}= \\begin{pmatrix} w_1 \\\\ w_2 \\\\ w_3 \\\\ \\vdots \\\\ w_{N-1} \\end{pmatrix} \\end{align}

where

\begin{align} h_i & = x_{i+1} - x_i \ \ \ (i = 0, \dots, N-1) \\ w_i & = 6 \left(\frac{y_{i+1} - y_i}{h_i} - \frac{y_i - y_{i-1}}{h_{i-1}}\right) \ \ \ (i = 1, \dots, N-1) \end{align}

The coefficient matrix of this linear equation is a tridiagonal matrix. Therefore, it can be solved with the tridiagonal matrix algorithm, which can solve linear equations without gradient descent methods.
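
For reference, the following is a generic textbook sketch of the tridiagonal matrix algorithm (the Thomas algorithm), not this package's actual implementation:

#include <cstddef>\n#include <vector>\n\n// Solve a tridiagonal system: a = subdiagonal (size n - 1), b = diagonal (size n),\n// c = superdiagonal (size n - 1), d = right-hand side (size n).\nstd::vector<double> solve_tridiagonal(\nstd::vector<double> a, std::vector<double> b, std::vector<double> c, std::vector<double> d)\n{\nconst std::size_t n = d.size();\n// forward elimination\nfor (std::size_t i = 1; i < n; ++i) {\nconst double w = a[i - 1] / b[i - 1];\nb[i] -= w * c[i - 1];\nd[i] -= w * d[i - 1];\n}\n// back substitution\nstd::vector<double> x(n);\nx[n - 1] = d[n - 1] / b[n - 1];\nfor (std::size_t i = n - 1; i-- > 0;) {\nx[i] = (d[i] - c[i] * x[i + 1]) / b[i];\n}\nreturn x;\n}\n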

Solving this linear equation with the tridiagonal matrix algorithm, we can calculate the coefficients of the spline interpolation as follows.

\\begin{align} a_i & = \\frac{v_{i+1} - v_i}{6 (x_{i+1} - x_i)} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ b_i & = \\frac{v_i}{2} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ c_i & = \\frac{y_{i+1} - y_i}{x_{i+1} - x_i} - \\frac{1}{6}(x_{i+1} - x_i)(2 v_i + v_{i+1}) \\ \\ \\ (i = 0, \\dots, N-1) \\\\ d_i & = y_i \\ \\ \\ (i = 0, \\dots, N-1) \\end{align}"},{"location":"common/interpolation/#tridiagonal-matrix-algorithm","title":"Tridiagonal Matrix Algorithm","text":"

We solve the tridiagonal linear equation according to this article, where the variables of the linear equation are expressed as follows in the implementation.

\\begin{align} \\begin{pmatrix} b_0 & c_0 & & \\\\ a_0 & b_1 & c_2 & O \\\\ & & \\ddots \\\\ O & & a_{N-2} & b_{N-1} \\end{pmatrix} x = \\begin{pmatrix} d_0 \\\\ d_2 \\\\ d_3 \\\\ \\vdots \\\\ d_{N-1} \\end{pmatrix} \\end{align}"},{"location":"common/kalman_filter/","title":"kalman_filter","text":""},{"location":"common/kalman_filter/#purpose","title":"Purpose","text":"

This common package contains the Kalman filter with time delay and the calculation of the Kalman filter.

"},{"location":"common/kalman_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/motion_utils/","title":"Motion Utils package","text":""},{"location":"common/motion_utils/#definition-of-terms","title":"Definition of terms","text":""},{"location":"common/motion_utils/#segment","title":"Segment","text":"

A segment in Autoware is the line segment between two successive points, as follows.

The nearest segment index and the nearest point index to a certain position are not always the same. Therefore, we prepare two different utility functions to calculate the nearest index for points and for segments.

"},{"location":"common/motion_utils/#nearest-index-search","title":"Nearest index search","text":"

In this section, the nearest index search and the nearest segment index search are explained.

We have the same functions for the nearest index search and the nearest segment index search. Taking the nearest index search as an example, we have two types of functions.

The first function finds the nearest index with distance and yaw thresholds.

template <class T>\nsize_t findFirstNearestIndexWithSoftConstraints(\nconst T & points, const geometry_msgs::msg::Pose & pose,\nconst double dist_threshold = std::numeric_limits<double>::max(),\nconst double yaw_threshold = std::numeric_limits<double>::max());\n

This function finds the first local solution within thresholds. The reason to find the first local one is to deal with some edge cases explained in the next subsection.

The threshold arguments have default parameters so that you can decide which thresholds to pass to the function.

  1. When both the distance and yaw thresholds are given.
    • First, try to find the nearest index with both the distance and yaw thresholds.
    • If not found, try to find again with only the distance threshold.
    • If not found, find without any thresholds.
  2. When only the distance threshold is given.
    • First, try to find the nearest index with the distance threshold.
    • If not found, find without any thresholds.
  3. When no thresholds are given.
    • Find the nearest index.

The second function finds the nearest index in the lane whose id is lane_id.

size_t findNearestIndexFromLaneId(\nconst autoware_auto_planning_msgs::msg::PathWithLaneId & path,\nconst geometry_msgs::msg::Point & pos, const int64_t lane_id);\n
"},{"location":"common/motion_utils/#application-to-various-object","title":"Application to various object","text":"

Many node packages often calculate the nearest index of objects. We will explain the recommended method to calculate it.

"},{"location":"common/motion_utils/#nearest-index-for-the-ego","title":"Nearest index for the ego","text":"

Assuming that the path length before the ego is short enough, we expect to find the correct nearest index in the following edge cases by findFirstNearestIndexWithSoftConstraints with both distance and yaw thresholds. The blue circles describe the distance threshold from the base link position, and the two blue lines describe the yaw threshold against the base link orientation. Among the points in these cases, the correct nearest point, shown in red, can be found.

Therefore, the implementation is as follows.

const size_t ego_nearest_idx = findFirstNearestIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\nconst size_t ego_nearest_seg_idx = findFirstNearestSegmentIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\n
"},{"location":"common/motion_utils/#nearest-index-for-dynamic-objects","title":"Nearest index for dynamic objects","text":"

For the ego nearest index, the orientation is considered in addition to the position since the ego is supposed to follow the points. However, for dynamic objects (e.g., a predicted object), the orientation may sometimes differ from the points order, e.g., a dynamic object driving backward while the ego is driving forward.

Therefore, the yaw threshold should not be considered for the dynamic object. The implementation is as follows.

const size_t dynamic_obj_nearest_idx = findFirstNearestIndexWithSoftConstraints(points, dynamic_obj_pose, dynamic_obj_nearest_dist_threshold);\nconst size_t dynamic_obj_nearest_seg_idx = findFirstNearestSegmentIndexWithSoftConstraints(points, dynamic_obj_pose, dynamic_obj_nearest_dist_threshold);\n
"},{"location":"common/motion_utils/#nearest-index-for-traffic-objects","title":"Nearest index for traffic objects","text":"

In lanelet maps, traffic objects belong to a specific lane. With this specific lane's id, the correct nearest index can be found.

The implementation is as follows.

// first extract `lane_id` which the traffic object belong to.\nconst size_t traffic_obj_nearest_idx = findNearestIndexFromLaneId(path_with_lane_id, traffic_obj_pos, lane_id);\nconst size_t traffic_obj_nearest_seg_idx = findNearestSegmentIndexFromLaneId(path_with_lane_id, traffic_obj_pos, lane_id);\n
"},{"location":"common/motion_utils/#pathtrajectory-length-calculation-between-designated-points","title":"Path/Trajectory length calculation between designated points","text":"

Based on the discussion so far, the nearest index search algorithm differs depending on the object type. Therefore, we recommend using the wrapper utility functions that require a nearest index search (e.g., calculating the path length) together with the appropriate nearest index search for each object.

For example, when we want to calculate the path length between the ego and the dynamic object, the implementation is as follows.

const size_t ego_nearest_seg_idx = findFirstNearestSegmentIndex(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\nconst size_t dyn_obj_nearest_seg_idx = findFirstNearestSegmentIndex(points, dyn_obj_pose, dyn_obj_nearest_dist_threshold);\nconst double length_from_ego_to_obj = calcSignedArcLength(points, ego_pose, ego_nearest_seg_idx, dyn_obj_pose, dyn_obj_nearest_seg_idx);\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/","title":"vehicle utils","text":"

Vehicle utils provides a convenient library used to check vehicle status.

"},{"location":"common/motion_utils/docs/vehicle/vehicle/#feature","title":"Feature","text":"

The library contains the following classes.

"},{"location":"common/motion_utils/docs/vehicle/vehicle/#vehicle_stop_checker","title":"vehicle_stop_checker","text":"

This class checks whether the vehicle is stopped based on the localization result.

"},{"location":"common/motion_utils/docs/vehicle/vehicle/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Description /localization/kinematic_state nav_msgs::msg::Odometry vehicle odometry"},{"location":"common/motion_utils/docs/vehicle/vehicle/#parameters","title":"Parameters","text":"Name Type Default Value Explanation velocity_buffer_time_sec double 10.0 odometry buffering time [s]"},{"location":"common/motion_utils/docs/vehicle/vehicle/#member-functions","title":"Member functions","text":"
bool isVehicleStopped(const double stop_duration)\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#example-usage","title":"Example Usage","text":"

Necessary includes:

#include <tier4_autoware_utils/vehicle/vehicle_state_checker.hpp>\n

1. Create a checker instance.

class SampleNode : public rclcpp::Node\n{\npublic:\nSampleNode() : Node(\"sample_node\")\n{\nvehicle_stop_checker_ = std::make_unique<VehicleStopChecker>(this);\n}\n\nstd::unique_ptr<VehicleStopChecker> vehicle_stop_checker_;\n\nbool sampleFunc();\n\n...\n}\n

2. Check the vehicle state.

bool SampleNode::sampleFunc()\n{\n...\n\nconst auto result_1 = vehicle_stop_checker_->isVehicleStopped();\n\n...\n\nconst auto result_2 = vehicle_stop_checker_->isVehicleStopped(3.0);\n\n...\n}\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#vehicle_arrival_checker","title":"vehicle_arrival_checker","text":"

This class checks whether the vehicle has arrived at a stop point based on the localization and planning results.

"},{"location":"common/motion_utils/docs/vehicle/vehicle/#subscribed-topics_1","title":"Subscribed Topics","text":"Name Type Description /localization/kinematic_state nav_msgs::msg::Odometry vehicle odometry /planning/scenario_planning/trajectory autoware_auto_planning_msgs::msg::Trajectory trajectory"},{"location":"common/motion_utils/docs/vehicle/vehicle/#parameters_1","title":"Parameters","text":"Name Type Default Value Explanation velocity_buffer_time_sec double 10.0 odometry buffering time [s] th_arrived_distance_m double 1.0 threshold distance to check if vehicle has arrived at target point [m]"},{"location":"common/motion_utils/docs/vehicle/vehicle/#member-functions_1","title":"Member functions","text":"
bool isVehicleStopped(const double stop_duration)\n
bool isVehicleStoppedAtStopPoint(const double stop_duration)\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#example-usage_1","title":"Example Usage","text":"

Necessary includes:

#include <tier4_autoware_utils/vehicle/vehicle_state_checker.hpp>\n

1. Create a checker instance.

class SampleNode : public rclcpp::Node\n{\npublic:\nSampleNode() : Node(\"sample_node\")\n{\nvehicle_arrival_checker_ = std::make_unique<VehicleArrivalChecker>(this);\n}\n\nstd::unique_ptr<VehicleArrivalChecker> vehicle_arrival_checker_;\n\nbool sampleFunc();\n\n...\n}\n

2. Check the vehicle state.

bool SampleNode::sampleFunc()\n{\n...\n\nconst auto result_1 = vehicle_arrival_checker_->isVehicleStopped();\n\n...\n\nconst auto result_2 = vehicle_arrival_checker_->isVehicleStopped(3.0);\n\n...\n\nconst auto result_3 = vehicle_arrival_checker_->isVehicleStoppedAtStopPoint();\n\n...\n\nconst auto result_4 = vehicle_arrival_checker_->isVehicleStoppedAtStopPoint(3.0);\n\n...\n}\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

vehicle_stop_checker and vehicle_arrival_checker cannot check whether the vehicle is stopped for more than velocity_buffer_time_sec seconds.

"},{"location":"common/osqp_interface/design/osqp_interface-design/","title":"Interface for the OSQP library","text":"

This is the design document for the osqp_interface package.

"},{"location":"common/osqp_interface/design/osqp_interface-design/#purpose-use-cases","title":"Purpose / Use cases","text":"

This package provides a C++ interface for the OSQP library.

"},{"location":"common/osqp_interface/design/osqp_interface-design/#design","title":"Design","text":"

The class OSQPInterface takes a problem formulation as Eigen matrices and vectors, converts these objects into C-style Compressed-Column-Sparse matrices and dynamic arrays, loads the data into the OSQP workspace data holder, and runs the optimizer.

"},{"location":"common/osqp_interface/design/osqp_interface-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"

The interface can be used in several ways:

  1. Initialize the interface WITHOUT data. Load the problem formulation at the optimization call.

        osqp_interface = OSQPInterface();\nosqp_interface.optimize(P, A, q, l, u);\n
  2. Initialize the interface WITH data.

        osqp_interface = OSQPInterface(P, A, q, l, u);\nosqp_interface.optimize();\n
  3. WARM START OPTIMIZATION by modifying the problem formulation between optimization runs.

osqp_interface = OSQPInterface(P, A, q, l, u);\nosqp_interface.optimize();\nosqp_interface.initializeProblem(P_new, A_new, q_new, l_new, u_new);\nosqp_interface.optimize();\n

    The optimization results are returned as a tuple of two vectors by the optimization function.

    std::tuple<std::vector<double>, std::vector<double>> result = osqp_interface.optimize();\nstd::vector<double> param = std::get<0>(result);\ndouble x_0 = param[0];\ndouble x_1 = param[1];\n
"},{"location":"common/osqp_interface/design/osqp_interface-design/#references-external-links","title":"References / External links","text":""},{"location":"common/osqp_interface/design/osqp_interface-design/#related-issues","title":"Related issues","text":""},{"location":"common/path_distance_calculator/Readme/","title":"path_distance_calculator","text":""},{"location":"common/path_distance_calculator/Readme/#purpose","title":"Purpose","text":"

This node publishes the distance from the path point closest to the self-position to the end point of the path. Note that the distance is the arc length along the path, not the Euclidean distance between the two points.

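As an illustrative sketch (not the node's actual implementation), the arc length can be accumulated from the path point closest to the ego position up to the final path point:

#include <cmath>\n#include <vector>\n\nstruct Point2d\n{\n  double x;\n  double y;\n};\n\n// Hypothetical sketch: arc length from the nearest path point to the path end.\ndouble arcLengthToEnd(const std::vector<Point2d> & points, const size_t nearest_index)\n{\n  double length = 0.0;\n  for (size_t i = nearest_index; i + 1 < points.size(); ++i) {\n    length += std::hypot(points[i + 1].x - points[i].x, points[i + 1].y - points[i].y);\n  }\n  return length;\n}\n
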
"},{"location":"common/path_distance_calculator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/path_distance_calculator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/path_distance_calculator/Readme/#input","title":"Input","text":"Name Type Description /planning/scenario_planning/lane_driving/behavior_planning/path autoware_auto_planning_msgs::msg::Path Reference path /tf tf2_msgs/TFMessage TF (self-pose)"},{"location":"common/path_distance_calculator/Readme/#output","title":"Output","text":"Name Type Description ~/distance tier4_debug_msgs::msg::Float64Stamped Publish a distance from the closest path point from the self-position to the end point of the path[m]"},{"location":"common/path_distance_calculator/Readme/#parameters","title":"Parameters","text":""},{"location":"common/path_distance_calculator/Readme/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"common/path_distance_calculator/Readme/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"common/path_distance_calculator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/polar_grid/Readme/","title":"Polar Grid","text":""},{"location":"common/polar_grid/Readme/#purpose","title":"Purpose","text":"

This plugin displays a polar grid around the ego vehicle in Rviz.

"},{"location":"common/polar_grid/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation Max Range float 200.0f max range for polar grid. [m] Wave Velocity float 100.0f wave ring velocity. [m/s] Delta Range float 10.0f wave ring distance for polar grid. [m]"},{"location":"common/polar_grid/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/rtc_manager_rviz_plugin/","title":"rtc_manager_rviz_plugin","text":""},{"location":"common/rtc_manager_rviz_plugin/#purpose","title":"Purpose","text":"

The purpose of this Rviz plugin is:

  1. To display the content of each RTC status.

  2. To switch the auto mode of each RTC module.

  3. To send RTC cooperate commands via buttons.

"},{"location":"common/rtc_manager_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/rtc_manager_rviz_plugin/#input","title":"Input","text":"Name Type Description /api/external/get/rtc_status tier4_rtc_msgs::msg::CooperateStatusArray The statuses of each Cooperate Commands"},{"location":"common/rtc_manager_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/external/set/rtc_commands tier4_rtc_msgs::src::CooperateCommands The Cooperate Commands for each planning /planning/enable_auto_mode/* tier4_rtc_msgs::src::AutoMode The Cooperate Commands mode for each planning module"},{"location":"common/rtc_manager_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.

  2. Select tier4_state_rviz_plugin/RTCManagerPanel and press OK.

"},{"location":"common/signal_processing/","title":"Signal Processing Methods","text":"

In this package, we present signal-processing methods for Autoware applications. The following functionalities are available in the current version.

The low-pass filter currently supports only 1-D low-pass filtering.

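For illustration, a minimal first-order 1-D low-pass filter could look as follows (a generic sketch; the actual class and parameter names in this package may differ):

// Generic first-order low-pass filter sketch: y[k] = g * y[k-1] + (1 - g) * x[k]\nclass LowpassFilter1d\n{\npublic:\n  explicit LowpassFilter1d(const double gain) : gain_(gain) {}\n\n  double filter(const double x)\n  {\n    y_ = gain_ * y_ + (1.0 - gain_) * x;\n    return y_;\n  }\n\nprivate:\n  double gain_;    // 0 <= gain < 1; larger values give stronger smoothing\n  double y_{0.0};  // previous output\n};\n
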
"},{"location":"common/signal_processing/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/signal_processing/documentation/ButterworthFilter/","title":"ButterworthFilter","text":""},{"location":"common/signal_processing/documentation/ButterworthFilter/#butterworth-low-pass-filter-design-tool-class","title":"Butterworth Low-pass Filter Design Tool Class","text":"

This Butterworth low-pass filter design tool can be used to design a Butterworth filter in continuous and discrete time from the given specifications of the filter performance. The filter is implemented as a class; a default constructor creates the object without any arguments.

The filter can be prepared in three ways. If the filter specifications are known, such as the pass-band and stop-band frequencies (Wp and Ws) together with the pass-band and stop-band ripple magnitudes (Ap and As), one can call the filter's Buttord method with these arguments to obtain the recommended filter order (N) and cut-off frequency (Wc_rad_sec [rad/s]).

Figure 1. Butterworth Low-pass filter specification from [1].

An example call is demonstrated below:

ButterworthFilter bf;\n\ndouble Wp = 2.0; // pass-band frequency [rad/sec]\ndouble Ws = 3.0; // stop-band frequency [rad/sec]\ndouble Ap = 6.0; // pass-band ripple mag or loss [dB]\ndouble As = 20.0; // stop-band ripple attenuation [dB]\n\n// Computing filter coefficients from the specs\nbf.Buttord(Wp, Ws, Ap, As);\n\n// Get the computed order and cut-off frequency\nsOrderCutOff NWc = bf.getOrderCutOff();\n\ncout << \" The computed order is ;\" << NWc.N << endl;\ncout << \" The computed cut-off frequency is ;\" << NWc.Wc_rad_sec << endl;\n

The filter order and cut-off frequency can be obtained in a struct using the bf.getOrderCutOff() method. These specs can be printed on the screen by calling the PrintFilterSpecs() method. If the user would like to define the order and cut-off frequency manually, the setter methods for these variables can be called to set the filter order (N) and the desired cut-off frequency (Wc_rad_sec [rad/sec]) for the filter.

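For example, a minimal sketch of the manual setup (the single-argument setCutOffFrequency overload shown here is an assumption; see also the two-argument form demonstrated later in this document):

bf.setOrder(5);              // filter order N\nbf.setCutOffFrequency(2.0);  // desired cut-off frequency Wc_rad_sec [rad/sec] (assumed overload)\n
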
"},{"location":"common/signal_processing/documentation/ButterworthFilter/#obtaining-filter-transfer-functions","title":"Obtaining Filter Transfer Functions","text":"

The discrete transfer function of the filter requires the roots and gain of the continuous-time transfer function. Therefore, computeContinuousTimeTF() must be called first to create the continuous-time transfer function of the filter:

bf.computeContinuousTimeTF();\n

The computed continuous-time transfer function roots can be printed on the screen using the following methods:

bf.PrintFilter_ContinuousTimeRoots();\nbf.PrintContinuousTimeTF();\n

The resulting screen output for a 5th order filter is demonstrated below.

 Roots of Continuous Time Filter Transfer Function Denominator are :\n-0.585518 + j 1.80204\n-1.53291 + j 1.11372\n-1.89478 + j 2.32043e-16\n-1.53291 + j -1.11372\n-0.585518 + j -1.80204\n\n\nThe Continuous-Time Transfer Function of the Filter is ;\n\n                                   24.422\n-------------------------------------------------------------------------------\n1.000 *s[5] + 6.132 *s[4] + 18.798 *s[3] + 35.619 *s[2] + 41.711 *s[1] + 24.422\n
"},{"location":"common/signal_processing/documentation/ButterworthFilter/#discrete-time-transfer-function-difference-equations","title":"Discrete Time Transfer Function (Difference Equations)","text":"

The digital filter equivalent of the continuous-time definitions is produced by using the bilinear transformation. When creating the discrete-time function of the ButterworthFilter object, its numerator (Bn) and denominator (An) coefficients are stored in vectors of size N+1 for a filter of order N.

The discrete transfer function methods are called as follows:

bf.computeDiscreteTimeTF();\nbf.PrintDiscreteTimeTF();\n

The results are printed on the screen as follows: \"The Discrete-Time Transfer Function of the Filter is;\"

0.191 *z[-5] + 0.956 *z[-4] + 1.913 *z[-3] + 1.913 *z[-2] + 0.956 *z[-1] + 0.191\n--------------------------------------------------------------------------------\n1.000 *z[-5] + 1.885 *z[-4] + 1.888 *z[-3] + 1.014 *z[-2] + 0.298 *z[-1] + 0.037\n

and the associated difference coefficients An and Bn can be obtained within a struct:

sDifferenceAnBn AnBn = bf.getAnBn();\n

The difference coefficients appear in the filtering equation in the form:

An * Y_filtered = Bn * Y_unfiltered\n

To filter a signal given in vector form, these coefficients are applied as in the sketch below.

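A minimal sketch of applying these coefficients as a direct-form difference equation (assuming the coefficients are ordered from lag 0 and that An[0] == 1 after normalization; this is not the package's actual filtering routine):

#include <vector>\n\n// y[n] = sum_i Bn[i] * x[n - i] - sum_{j >= 1} An[j] * y[n - j]\nstd::vector<double> filterSignal(\n  const std::vector<double> & x, const std::vector<double> & Bn, const std::vector<double> & An)\n{\n  std::vector<double> y(x.size(), 0.0);\n  for (size_t n = 0; n < x.size(); ++n) {\n    for (size_t i = 0; i < Bn.size() && i <= n; ++i) {\n      y[n] += Bn[i] * x[n - i];\n    }\n    for (size_t j = 1; j < An.size() && j <= n; ++j) {\n      y[n] -= An[j] * y[n - j];\n    }\n  }\n  return y;\n}\n
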
"},{"location":"common/signal_processing/documentation/ButterworthFilter/#calling-filter-by-a-specified-cut-off-and-sampling-frequencies-in-hz","title":"Calling Filter by a specified cut-off and sampling frequencies [in Hz]","text":"

The Butterworth filter can also be created by defining the desired order (N), a cut-off frequency (fc in [Hz]), and a sampling frequency (fs in [Hz]). In this method, the cut-off frequency is pre-warped with respect to the sampling frequency [1, 2] to match the continuous and digital filter frequencies.

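For reference, the textbook pre-warping relation of the bilinear transform [1, 2] maps the desired digital cut-off frequency fc [Hz] at sampling frequency fs [Hz] to the analog design frequency (this formula is quoted from the literature, not from this package's code):

// wc_analog = 2 * fs * tan(pi * fc / fs)  [rad/sec]\nconst double wc_analog = 2.0 * fs * std::tan(M_PI * fc / fs);\n
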
The filter is prepared by the following calls:

 // 3rd METHOD defining a sampling frequency together with the cut-off fc, fs\n bf.setOrder(2);\n bf.setCutOffFrequency(10, 100);\n

At this step, we define a boolean variable that selects whether to use the pre-warping option.

// Compute Continuous Time TF\nbool use_sampling_frequency = true;\nbf.computeContinuousTimeTF(use_sampling_frequency);\nbf.PrintFilter_ContinuousTimeRoots();\nbf.PrintContinuousTimeTF();\n\n// Compute Discrete Time TF\nbf.computeDiscreteTimeTF(use_sampling_frequency);\nbf.PrintDiscreteTimeTF();\n

References:

  1. Manolakis, Dimitris G., and Vinay K. Ingle. Applied digital signal processing: theory and practice. Cambridge University Press, 2011.

  2. https://en.wikibooks.org/wiki/Digital_Signal_Processing/Bilinear_Transform

"},{"location":"common/tier4_automatic_goal_rviz_plugin/","title":"tier4_automatic_goal_rviz_plugin","text":""},{"location":"common/tier4_automatic_goal_rviz_plugin/#purpose","title":"Purpose","text":"
  1. Defining a GoalsList by adding goals using RvizTool (Pose on the map).

  2. Automatic execution of the created GoalsList from the selected goal - it can be stopped and restarted.

  3. Looping the current GoalsList.

  4. Saving achieved goals to a file.

  5. Planning the route to one (single) selected goal and starting that route - it can be stopped and restarted.

  6. Removing any goal from the list or clearing the current route.

  7. Saving the current GoalsList to a file and loading the list from the file.

  8. The application enables/disables access to options depending on the current state.

  9. The saved GoalsList can also be executed without using the plugin - using the automatic_goal_sender node.

"},{"location":"common/tier4_automatic_goal_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_automatic_goal_rviz_plugin/#input","title":"Input","text":"Name Type Description /api/operation_mode/state autoware_adapi_v1_msgs::msg::OperationModeState The topic represents the state of operation mode /api/routing/state autoware_adapi_v1_msgs::msg::RouteState The topic represents the state of route /rviz2/automatic_goal/goal geometry_msgs::msgs::PoseStamped The topic for adding goals to GoalsList"},{"location":"common/tier4_automatic_goal_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/operation_mode/change_to_autonomous autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to autonomous /api/operation_mode/change_to_stop autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to stop /api/routing/set_route_points autoware_adapi_v1_msgs::srv::SetRoutePoints The service to set route /api/routing/clear_route autoware_adapi_v1_msgs::srv::ClearRoute The service to clear route state /rviz2/automatic_goal/markers visualization_msgs::msg::MarkerArray The topic to visualize goals as rviz markers"},{"location":"common/tier4_automatic_goal_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.

  2. Select tier4_automatic_goal_rviz_plugin/AutowareAutomaticGoalPanel and press OK.

  3. Select Add a new tool.

  4. Select tier4_automatic_goal_rviz_plugin/AutowareAutomaticGoalTool and press OK.

  5. Add goals visualization as markers to Displays.

  6. Append goals to the GoalsList to be achieved using 2D Append Goal - in such a way that routes can be planned.

  7. Start sequential planning and goal achievement by clicking Send goals automatically.

  8. You can save GoalsList by clicking Save to file.

  9. After saving, you can also run the GoalsList without using the plugin:

    • example: ros2 launch tier4_automatic_goal_rviz_plugin automatic_goal_sender.launch.xml goals_list_file_path:=\"/tmp/goals_list.yaml\" goals_achieved_dir_path:=\"/tmp/\"
      • goals_list_file_path - is the path to the saved GoalsList file to be loaded
      • goals_achieved_dir_path - is the path to the directory where the file goals_achieved.log will be created and the achieved goals will be written to it
"},{"location":"common/tier4_automatic_goal_rviz_plugin/#hints","title":"Hints","text":"

If the application (Engagement) goes into ERROR mode (usually returns to EDITING later), it means that one of the services returned a calling error (code!=0). In this situation, check the terminal output for more information.

"},{"location":"common/tier4_autoware_utils/","title":"tier4_autoware_utils","text":""},{"location":"common/tier4_autoware_utils/#purpose","title":"Purpose","text":"

This package contains many common functions used by other packages, so please refer to them as needed.

"},{"location":"common/tier4_control_rviz_plugin/","title":"tier4_control_rviz_plugin","text":"

This package mimics external control for simulation.

"},{"location":"common/tier4_control_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_control_rviz_plugin/#input","title":"Input","text":"Name Type Description /control/current_gate_mode tier4_control_msgs::msg::GateMode Current GATE mode /vehicle/status/velocity_status autoware_auto_vehicle_msgs::msg::VelocityReport Current velocity status /api/autoware/get/engage tier4_external_api_msgs::srv::Engage Getting Engage /vehicle/status/gear_status autoware_auto_vehicle_msgs::msg::GearReport The state of GEAR"},{"location":"common/tier4_control_rviz_plugin/#output","title":"Output","text":"Name Type Description /control/gate_mode_cmd tier4_control_msgs::msg::GateMode GATE mode /external/selected/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand AckermannControlCommand /external/selected/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand GEAR"},{"location":"common/tier4_control_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Panels.

  2. Select tier4_control_rviz_plugin/ManualController and press OK.

  3. Enter a velocity in \"Set Cruise Velocity\" and press the button to confirm. You can see that GEAR shows D (DRIVE).

  4. Press \"Enable Manual Control\" and you can notice that \"GATE\" and \"Engage\" turn \"Ready\" and the vehicle starts!

"},{"location":"common/tier4_datetime_rviz_plugin/","title":"tier4_datetime_rviz_plugin","text":""},{"location":"common/tier4_datetime_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the ROS Time and Wall Time in rviz.

"},{"location":"common/tier4_datetime_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/tier4_datetime_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select panels/Add new panel.
  2. Select tier4_datetime_rviz_plugin/AutowareDateTimePanel and press OK.
"},{"location":"common/tier4_debug_rviz_plugin/","title":"tier4_debug_rviz_plugin","text":"

This package includes jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are under the BSD license.

"},{"location":"common/tier4_debug_rviz_plugin/#plugins","title":"Plugins","text":""},{"location":"common/tier4_debug_rviz_plugin/#float32multiarraystampedpiechart","title":"Float32MultiArrayStampedPieChart","text":"

Pie chart from tier4_debug_msgs::msg::Float32MultiArrayStamped.

"},{"location":"common/tier4_debug_tools/","title":"tier4_debug_tools","text":"

This package provides useful features for debugging Autoware.

"},{"location":"common/tier4_debug_tools/#usage","title":"Usage","text":""},{"location":"common/tier4_debug_tools/#tf2pose","title":"tf2pose","text":"

This tool converts any tf to a pose topic. With this tool, for example, you can plot the x values of a tf in rqt_multiplot.

ros2 run tier4_debug_tools tf2pose {tf_from} {tf_to} {hz}\n

Example:

$ ros2 run tier4_debug_tools tf2pose base_link ndt_base_link 100\n\n$ ros2 topic echo /tf2pose/pose -n1\nheader:\n  seq: 13\nstamp:\n    secs: 1605168366\nnsecs: 549174070\nframe_id: \"base_link\"\npose:\n  position:\n    x: 0.0387684271191\n    y: -0.00320360406477\n    z: 0.000276674520819\n  orientation:\n    x: 0.000335221893885\n    y: 0.000122020672186\n    z: -0.00539673212896\n    w: 0.999985368502\n---\n
"},{"location":"common/tier4_debug_tools/#pose2tf","title":"pose2tf","text":"

This tool converts any pose topic to tf.

ros2 run tier4_debug_tools pose2tf {pose_topic_name} {tf_name}\n

Example:

$ ros2 run tier4_debug_tools pose2tf /localization/pose_estimator/pose ndt_pose\n\n$ ros2 run tf tf_echo ndt_pose ndt_base_link 100\nAt time 1605168365.449\n- Translation: [0.000, 0.000, 0.000]\n- Rotation: in Quaternion [0.000, 0.000, 0.000, 1.000]\nin RPY (radian) [0.000, -0.000, 0.000]\nin RPY (degree) [0.000, -0.000, 0.000]\n
"},{"location":"common/tier4_debug_tools/#stop_reason2pose","title":"stop_reason2pose","text":"

This tool extracts pose from stop_reasons. Topics without numbers such as /stop_reason2pose/pose/detection_area are the nearest stop_reasons, and topics with numbers are individual stop_reasons that are roughly matched with previous ones.

ros2 run tier4_debug_tools stop_reason2pose {stop_reason_topic_name}\n

Example:

$ ros2 run tier4_debug_tools stop_reason2pose /planning/scenario_planning/status/stop_reasons\n\n$ ros2 topic list | ag stop_reason2pose\n/stop_reason2pose/pose/detection_area\n/stop_reason2pose/pose/detection_area_1\n/stop_reason2pose/pose/obstacle_stop\n/stop_reason2pose/pose/obstacle_stop_1\n\n$ ros2 topic echo /stop_reason2pose/pose/detection_area -n1\nheader:\n  seq: 1\nstamp:\n    secs: 1605168355\nnsecs:    821713\nframe_id: \"map\"\npose:\n  position:\n    x: 60608.8433457\n    y: 43886.2410876\n    z: 44.9078212441\n  orientation:\n    x: 0.0\n    y: 0.0\n    z: -0.190261378408\n    w: 0.981733470901\n---\n
"},{"location":"common/tier4_debug_tools/#stop_reason2tf","title":"stop_reason2tf","text":"

This is an all-in-one script that uses tf2pose, pose2tf, and stop_reason2pose. With this tool, you can view the relative position from base_link to the nearest stop_reason.

ros2 run tier4_debug_tools stop_reason2tf {stop_reason_name}\n

Example:

$ ros2 run tier4_debug_tools stop_reason2tf obstacle_stop\nAt time 1605168359.501\n- Translation: [0.291, -0.095, 0.266]\n- Rotation: in Quaternion [0.007, 0.011, -0.005, 1.000]\nin RPY (radian) [0.014, 0.023, -0.010]\nin RPY (degree) [0.825, 1.305, -0.573]\n
"},{"location":"common/tier4_debug_tools/#lateral_error_publisher","title":"lateral_error_publisher","text":"

This node calculates the control error and the localization error in the trajectory normal direction, as shown in the figure below.

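A minimal sketch of an error in the trajectory normal direction (a generic cross-product projection onto the normal of the nearest trajectory pose; the function below is hypothetical, not the node's code):

#include <cmath>\n\n// Signed lateral error: the pose error vector projected on the trajectory normal.\ndouble lateralError(\n  const double traj_x, const double traj_y, const double traj_yaw,\n  const double pose_x, const double pose_y)\n{\n  const double dx = pose_x - traj_x;\n  const double dy = pose_y - traj_y;\n  return std::cos(traj_yaw) * dy - std::sin(traj_yaw) * dx;\n}\n
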
Set the reference trajectory, vehicle pose and ground truth pose in the launch file.

ros2 launch tier4_debug_tools lateral_error_publisher.launch.xml\n
"},{"location":"common/tier4_localization_rviz_plugin/","title":"tier4_localization_rviz_plugin","text":""},{"location":"common/tier4_localization_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin can display the history of the localization obtained by ekf_localizer or ndt_scan_matching.

"},{"location":"common/tier4_localization_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_localization_rviz_plugin/#input","title":"Input","text":"Name Type Description input/pose geometry_msgs::msg::PoseStamped In input/pose, put the result of localization calculated by ekf_localizer or ndt_scan_matching"},{"location":"common/tier4_localization_rviz_plugin/#parameters","title":"Parameters","text":""},{"location":"common/tier4_localization_rviz_plugin/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description property_buffer_size_ int 100 Buffer size of topic property_line_view_ bool true Use Line property or not property_line_width_ float 0.1 Width of Line property [m] property_line_alpha_ float 1.0 Alpha of Line property property_line_color_ QColor Qt::white Color of Line property"},{"location":"common/tier4_localization_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/tier4_localization_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Add under the Displays panel.
  2. Select tier4_localization_rviz_plugin/PoseHistory and press OK.
  3. Enter the name of the topic where you want to view the trajectory.
"},{"location":"common/tier4_perception_rviz_plugin/","title":"tier4_perception_rviz_plugin","text":""},{"location":"common/tier4_perception_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin is used to generate dummy pedestrians, cars, and obstacles in planning simulator.

"},{"location":"common/tier4_perception_rviz_plugin/#overview","title":"Overview","text":"

The CarInitialPoseTool sends a topic for generating a dummy car. The PedestrianInitialPoseTool sends a topic for generating a dummy pedestrian. The UnknownInitialPoseTool sends a topic for generating a dummy obstacle. The DeleteAllObjectsTool deletes the dummy cars, pedestrians, and obstacles displayed by the above three tools.

"},{"location":"common/tier4_perception_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_perception_rviz_plugin/#output","title":"Output","text":"Name Type Description /simulation/dummy_perception_publisher/object_info dummy_perception_publisher::msg::Object The topic on which to publish dummy object info"},{"location":"common/tier4_perception_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"common/tier4_perception_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"common/tier4_perception_rviz_plugin/#carpose","title":"CarPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] length_ float 4.0 X standard deviation for initial pose [m] width_ float 1.8 Y standard deviation for initial pose [m] height_ float 2.0 Z standard deviation for initial pose [m] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#buspose","title":"BusPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] length_ float 10.5 X standard deviation for initial pose [m] width_ float 2.5 Y standard deviation for initial pose [m] height_ float 3.5 Z standard deviation for initial pose [m] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#pedestrianpose","title":"PedestrianPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#unknownpose","title":"UnknownPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#deleteallobjects","title":"DeleteAllObjects","text":"Name Type Default Value Description topic_property_ string 
/simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info"},{"location":"common/tier4_perception_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This plugin assumes the use of a planning simulator.

"},{"location":"common/tier4_perception_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select + on the tool tab.
  2. Select one of the following: tier4_perception_rviz_plugin and press OK.
  3. Select the new item in the tool tab (2D Dummy Car in the example) and click on it in rviz.
"},{"location":"common/tier4_perception_rviz_plugin/#interactive-manipulation","title":"Interactive manipulation","text":"

You can interactively manipulate the object.

  1. Select \"Tool Properties\" in rviz.
  2. Select the corresponding object tab in the Tool Properties.
  3. Turn the \"Interactive\" checkbox on.
  4. Select the item in the tool tab that you haven't chosen yet.
  5. Key commands are as follows.
action key command ADD Shift + Click Right Button MOVE Hold down Right Button + Drag and Drop DELETE Alt + Click Right Button"},{"location":"common/tier4_planning_rviz_plugin/","title":"tier4_planning_rviz_plugin","text":"

This package includes jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are under the BSD license.

"},{"location":"common/tier4_planning_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the path, trajectory, and maximum speed.

"},{"location":"common/tier4_planning_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_planning_rviz_plugin/#input","title":"Input","text":"Name Type Description /input/path autoware_auto_planning_msgs::msg::Path The topic on which to subscribe path /input/trajectory autoware_auto_planning_msgs::msg::Trajectory The topic on which to subscribe trajectory /planning/scenario_planning/current_max_velocity tier4_planning_msgs/msg/VelocityLimit The topic on which to publish max velocity"},{"location":"common/tier4_planning_rviz_plugin/#output","title":"Output","text":"Name Type Description /planning/mission_planning/checkpoint geometry_msgs/msg/PoseStamped The topic on which to publish checkpoint"},{"location":"common/tier4_planning_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"common/tier4_planning_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"common/tier4_planning_rviz_plugin/#missioncheckpoint","title":"MissionCheckpoint","text":"Name Type Default Value Description pose_topic_property_ string mission_checkpoint The topic on which to publish checkpoint std_dev_x_ float 0.5 X standard deviation for checkpoint pose [m] std_dev_y_ float 0.5 Y standard deviation for checkpoint pose [m] std_dev_theta_ float M_PI / 12.0 Theta standard deviation for checkpoint pose [rad] position_z_ float 0.0 Z position for checkpoint pose [m]"},{"location":"common/tier4_planning_rviz_plugin/#path","title":"Path","text":"Name Type Default Value Description property_path_view_ bool true Use Path property or not property_path_width_ float 2.0 Width of Path property [m] property_path_alpha_ float 1.0 Alpha of Path property property_path_color_view_ bool false Use Constant Color or not property_path_color_ QColor Qt::black Color of Path property property_velocity_view_ bool true Use Velocity property or not property_velocity_alpha_ float 1.0 Alpha of Velocity property property_velocity_scale_ float 0.3 Scale of Velocity property property_velocity_color_view_ bool false Use Constant Color or not property_velocity_color_ QColor Qt::black Color of Velocity property property_vel_max_ float 3.0 Max velocity [m/s]"},{"location":"common/tier4_planning_rviz_plugin/#drivablearea","title":"DrivableArea","text":"Name Type Default Value Description color_scheme_property_ int 0 Color scheme of DrivableArea property alpha_property_ float 0.2 Alpha of DrivableArea property draw_under_property_ bool false Draw as background or not"},{"location":"common/tier4_planning_rviz_plugin/#pathfootprint","title":"PathFootprint","text":"Name Type Default Value Description property_path_footprint_view_ bool true Use Path Footprint property or not property_path_footprint_alpha_ float 1.0 Alpha of Path Footprint property property_path_footprint_color_ QColor Qt::black Color of Path Footprint property property_vehicle_length_ float 4.77 Vehicle length [m] property_vehicle_width_ float 1.83 Vehicle width [m] property_rear_overhang_ float 1.03 Rear overhang [m]"},{"location":"common/tier4_planning_rviz_plugin/#trajectory","title":"Trajectory","text":"Name Type Default Value Description property_path_view_ bool true Use Path property or not property_path_width_ float 2.0 Width of Path property [m] property_path_alpha_ float 1.0 Alpha of Path property property_path_color_view_ bool false Use Constant Color or not property_path_color_ QColor Qt::black Color of Path property property_velocity_view_ bool true Use Velocity property or not 
property_velocity_alpha_ float 1.0 Alpha of Velocity property property_velocity_scale_ float 0.3 Scale of Velocity property property_velocity_color_view_ bool false Use Constant Color or not property_velocity_color_ QColor Qt::black Color of Velocity property property_velocity_text_view_ bool false View text Velocity property_velocity_text_scale_ float 0.3 Scale of Velocity property property_vel_max_ float 3.0 Max velocity [m/s]"},{"location":"common/tier4_planning_rviz_plugin/#trajectoryfootprint","title":"TrajectoryFootprint","text":"Name Type Default Value Description property_trajectory_footprint_view_ bool true Use Trajectory Footprint property or not property_trajectory_footprint_alpha_ float 1.0 Alpha of Trajectory Footprint property property_trajectory_footprint_color_ QColor QColor(230, 230, 50) Color of Trajectory Footprint property property_vehicle_length_ float 4.77 Vehicle length [m] property_vehicle_width_ float 1.83 Vehicle width [m] property_rear_overhang_ float 1.03 Rear overhang [m] property_trajectory_point_view_ bool false Use Trajectory Point property or not property_trajectory_point_alpha_ float 1.0 Alpha of Trajectory Point property property_trajectory_point_color_ QColor QColor(0, 60, 255) Color of Trajectory Point property property_trajectory_point_radius_ float 0.1 Radius of Trajectory Point property"},{"location":"common/tier4_planning_rviz_plugin/#maxvelocity","title":"MaxVelocity","text":"Name Type Default Value Description property_topic_name_ string /planning/scenario_planning/current_max_velocity The topic on which to subscribe max velocity property_text_color_ QColor QColor(255, 255, 255) Text color property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 96 Length of the plotter window [px] property_value_scale_ float 1.0 / 4.0 Value scale"},{"location":"common/tier4_planning_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Add under the Displays panel.
  2. Select any one of the tier4_planning_rviz_plugin and press OK.
  3. Enter the name of the topic where you want to view the path or trajectory.
"},{"location":"common/tier4_screen_capture_rviz_plugin/","title":"tier4_screen_capture_rviz_plugin","text":""},{"location":"common/tier4_screen_capture_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin captures the screen of rviz.

"},{"location":"common/tier4_screen_capture_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This plugin is intended only for debugging and analysis. The screen capture button is still in beta and can slow the frame rate; set a lower frame rate according to your PC spec.

"},{"location":"common/tier4_screen_capture_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select panels/Add new panel.
"},{"location":"common/tier4_simulated_clock_rviz_plugin/","title":"tier4_simulated_clock_rviz_plugin","text":""},{"location":"common/tier4_simulated_clock_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin allows publishing and controlling the simulated ROS time.

"},{"location":"common/tier4_simulated_clock_rviz_plugin/#output","title":"Output","text":"Name Type Description /clock rosgraph_msgs::msg::Clock the current simulated time"},{"location":"common/tier4_simulated_clock_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.
  2. Select tier4_clock_rviz_plugin/SimulatedClock and press OK.
  3. Use the added panel to control how the simulated clock is published.

    • Pause button: pause/resume the clock.
    • Speed: speed of the clock relative to the system clock.
    • Rate: publishing rate of the clock.
    • Step button: advance the clock by the specified time step.
    • Time step: the value used to advance the clock when pressing the Step button.
    • Time unit: the time unit associated with the Time step value.
"},{"location":"common/tier4_state_rviz_plugin/","title":"tier4_state_rviz_plugin","text":""},{"location":"common/tier4_state_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the current status of Autoware. It can also engage Autoware from the panel.

"},{"location":"common/tier4_state_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_state_rviz_plugin/#input","title":"Input","text":"Name Type Description /api/operation_mode/state autoware_adapi_v1_msgs::msg::OperationModeState The topic represents the state of operation mode /api/routing/state autoware_adapi_v1_msgs::msg::RouteState The topic represents the state of route /api/localization/initialization_state autoware_adapi_v1_msgs::msg::LocalizationInitializationState The topic represents the state of localization initialization /api/motion/state autoware_adapi_v1_msgs::msg::MotionState The topic represents the state of motion /api/autoware/get/emergency tier4_external_api_msgs::msg::Emergency The topic represents the state of external emergency /vehicle/status/gear_status autoware_auto_vehicle_msgs::msg::GearReport The topic represents the state of gear"},{"location":"common/tier4_state_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/operation_mode/change_to_autonomous autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to autonomous /api/operation_mode/change_to_stop autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to stop /api/operation_mode/change_to_local autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to local /api/operation_mode/change_to_remote autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to remote /api/operation_mode/enable_autoware_control autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to enable vehicle control by Autoware /api/operation_mode/disable_autoware_control autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to disable vehicle control by Autoware /api/routing/clear_route autoware_adapi_v1_msgs::srv::ClearRoute The service to clear route state /api/motion/accept_start autoware_adapi_v1_msgs::srv::AcceptStart The service to accept the vehicle to start /api/autoware/set/emergency tier4_external_api_msgs::srv::SetEmergency The service to set external emergency /planning/scenario_planning/max_velocity_default tier4_planning_msgs::msg::VelocityLimit The topic to set maximum speed of the vehicle"},{"location":"common/tier4_state_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.

  2. Select tier4_state_rviz_plugin/AutowareStatePanel and press OK.

  3. If the AUTO button is activated, you can engage by clicking it.

"},{"location":"common/tier4_traffic_light_rviz_plugin/","title":"tier4_traffic_light_rviz_plugin","text":""},{"location":"common/tier4_traffic_light_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin panel publishes dummy traffic light signals.

"},{"location":"common/tier4_traffic_light_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_traffic_light_rviz_plugin/#output","title":"Output","text":"Name Type Description /perception/traffic_light_recognition/traffic_signals autoware_auto_perception_msgs::msg::TrafficSignalArray Publish traffic light signals"},{"location":"common/tier4_traffic_light_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.
  2. Select TrafficLightPublishPanel and press OK.
  3. Set Traffic Light ID & Traffic Light Status and press SET button.
  4. Traffic light signals are published while the PUBLISH button is pressed.
"},{"location":"common/tier4_vehicle_rviz_plugin/","title":"tier4_vehicle_rviz_plugin","text":"

This package includes jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are under the BSD license.

"},{"location":"common/tier4_vehicle_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin provides a visual and easy-to-understand display of vehicle speed, turn signal and steering status.

"},{"location":"common/tier4_vehicle_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_vehicle_rviz_plugin/#input","title":"Input","text":"Name Type Description /vehicle/status/velocity_status autoware_auto_vehicle_msgs::msg::VelocityReport The topic is vehicle twist /control/turn_signal_cmd autoware_auto_vehicle_msgs::msg::TurnIndicatorsReport The topic is status of turn signal /vehicle/status/steering_status autoware_auto_vehicle_msgs::msg::SteeringReport The topic is status of steering"},{"location":"common/tier4_vehicle_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"common/tier4_vehicle_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"common/tier4_vehicle_rviz_plugin/#consolemeter","title":"ConsoleMeter","text":"Name Type Default Value Description property_text_color_ QColor QColor(25, 255, 240) Text color property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 256 Height of the plotter window [px] property_value_height_offset_ int 0 Height offset of the plotter window [px] property_value_scale_ float 1.0 / 6.667 Value scale"},{"location":"common/tier4_vehicle_rviz_plugin/#steeringangle","title":"SteeringAngle","text":"Name Type Default Value Description property_text_color_ QColor QColor(25, 255, 240) Text color property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 256 Height of the plotter window [px] property_value_height_offset_ int 0 Height offset of the plotter window [px] property_value_scale_ float 1.0 / 6.667 Value scale property_handle_angle_scale_ float 3.0 Scale is steering angle to handle angle"},{"location":"common/tier4_vehicle_rviz_plugin/#turnsignal","title":"TurnSignal","text":"Name Type Default Value Description property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_width_ int 256 Left of the plotter window [px] property_height_ int 256 Width of the plotter window [px]"},{"location":"common/tier4_vehicle_rviz_plugin/#velocityhistory","title":"VelocityHistory","text":"Name Type Default Value Description property_velocity_timeout_ float 10.0 Timeout of velocity [s] property_velocity_alpha_ float 1.0 Alpha of velocity property_velocity_scale_ float 0.3 Scale of velocity property_velocity_color_view_ bool false Use Constant Color or not property_velocity_color_ QColor Qt::black Color of velocity history property_vel_max_ float 3.0 Color Border Vel Max [m/s]"},{"location":"common/tier4_vehicle_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/tier4_vehicle_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Add under the Displays panel.
  2. Select any one of the tier4_vehicle_rviz_plugin and press OK.
  3. Enter the name of the topic where you want to view the status.
"},{"location":"common/tvm_utility/artifacts/","title":"Macro Syntax Error","text":"

File: common/tvm_utility/artifacts/README.md

Line 1 in Markdown file: Missing end of comment tag

# TVM Utility Artifacts {#tvm-utility-artifacts-readme}\n

"},{"location":"common/tvm_utility/design/tvm-utility-design/","title":"Macro Syntax Error","text":"

File: common/tvm_utility/design/tvm-utility-design.md

Line 1 in Markdown file: Missing end of comment tag

# TVM Utility {#tvm-utility-design}\n

"},{"location":"common/tvm_utility/design/tvm-utility-yolo-v2-tiny-tests/","title":"Macro Syntax Error","text":"

File: common/tvm_utility/design/tvm-utility-yolo-v2-tiny-tests.md

Line 1 in Markdown file: Missing end of comment tag

# YOLOv2 Tiny Example Pipeline {#tvm-utility-yolo-v2-tiny-tests}\n

"},{"location":"control/control_performance_analysis/","title":"control_performance_analysis","text":""},{"location":"control/control_performance_analysis/#purpose","title":"Purpose","text":"

control_performance_analysis is the package to analyze the tracking performance of a control module and monitor the driving status of the vehicle.

This package is used as a tool to quantify the results of the control module. That's why it doesn't interfere with the core logic of autonomous driving.

Based on the various input from planning, control, and vehicle, it publishes the result of analysis as control_performance_analysis::msg::ErrorStamped defined in this package.

All results in the ErrorStamped message are calculated in the Frenet frame of the curve. The errors and error velocities are calculated using the paper below.

Werling, Moritz & Groell, Lutz & Bretthauer, Georg. (2010). Invariant Trajectory Tracking With a Full-Size Autonomous Road Vehicle. IEEE Transactions on Robotics. 26. 758 - 765. 10.1109/TRO.2010.2052325.

If you are interested in the details, the error and error velocity calculations are described in Section C, Asymptotical Trajectory Tracking With Orientation Control.

Error acceleration calculations are based on the velocity calculations above. The calculation of the error acceleration is illustrated below.

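A minimal sketch of that idea, finite-differencing the error velocity (an assumed form for illustration, not the package's exact formula):

// Error acceleration approximated as the time derivative of the error velocity.\ndouble errorAcceleration(\n  const double error_velocity, const double prev_error_velocity, const double dt)\n{\n  return (error_velocity - prev_error_velocity) / dt;\n}\n
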
"},{"location":"control/control_performance_analysis/#input-output","title":"Input / Output","text":""},{"location":"control/control_performance_analysis/#input-topics","title":"Input topics","text":"Name Type Description /planning/scenario_planning/trajectory autoware_auto_planning_msgs::msg::Trajectory Output trajectory from planning module. /control/command/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand Output control command from control module. /vehicle/status/steering_status autoware_auto_vehicle_msgs::msg::SteeringReport Steering information from vehicle. /localization/kinematic_state nav_msgs::msg::Odometry Use twist from odometry. /tf tf2_msgs::msg::TFMessage Extract ego pose from tf."},{"location":"control/control_performance_analysis/#output-topics","title":"Output topics","text":"Name Type Description /control_performance/performance_vars control_performance_analysis::msg::ErrorStamped The result of the performance analysis. /control_performance/driving_status control_performance_analysis::msg::DrivingMonitorStamped Driving status (acceleration, jerk etc.) monitoring"},{"location":"control/control_performance_analysis/#outputs","title":"Outputs","text":""},{"location":"control/control_performance_analysis/#control_performance_analysismsgdrivingmonitorstamped","title":"control_performance_analysis::msg::DrivingMonitorStamped","text":"Name Type Description longitudinal_acceleration float [m / s^2] longitudinal_jerk float [m / s^3] lateral_acceleration float [m / s^2] lateral_jerk float [m / s^3] desired_steering_angle float [rad] controller_processing_time float Timestamp between last two control command messages [ms]"},{"location":"control/control_performance_analysis/#control_performance_analysismsgerrorstamped","title":"control_performance_analysis::msg::ErrorStamped","text":"Name Type Description lateral_error float [m] lateral_error_velocity float [m / s] lateral_error_acceleration float [m / s^2] longitudinal_error float [m] longitudinal_error_velocity float [m / s] longitudinal_error_acceleration float [m / s^2] heading_error float [rad] heading_error_velocity float [rad / s] control_effort_energy float [u * R * u^T] error_energy float lateral_error^2 + heading_error^2 value_approximation float V = xPx' ; Value function from DARE Lyap matrix P curvature_estimate float [1 / m] curvature_estimate_pp float [1 / m] vehicle_velocity_error float [m / s] tracking_curvature_discontinuity_ability float Measures the ability to tracking the curvature changes [abs(delta(curvature)) / (1 + abs(delta(lateral_error))]"},{"location":"control/control_performance_analysis/#parameters","title":"Parameters","text":"Name Type Description curvature_interval_length double Used for estimating current curvature prevent_zero_division_value double Value to avoid zero division. Default is 0.001 odom_interval unsigned integer Interval between odom messages, increase it for smoother curve. 
acceptable_max_distance_to_waypoint double Maximum distance between trajectory point and vehicle [m] acceptable_max_yaw_difference_rad double Maximum yaw difference between trajectory point and vehicle [rad] low_pass_filter_gain double Low pass filter gain"},{"location":"control/control_performance_analysis/#usage","title":"Usage","text":" "},{"location":"control/control_performance_analysis/#future-improvements","title":"Future Improvements","text":""},{"location":"control/external_cmd_selector/","title":"external_cmd_selector","text":""},{"location":"control/external_cmd_selector/#purpose","title":"Purpose","text":"

external_cmd_selector is the package to publish external_control_cmd, gear_cmd, hazard_lights_cmd, heartbeat and turn_indicators_cmd, according to the current mode, which is remote or local.

The current mode is set via a service; in remote mode the vehicle is remotely operated, while in local mode the values calculated by Autoware are used.

"},{"location":"control/external_cmd_selector/#input-output","title":"Input / Output","text":""},{"location":"control/external_cmd_selector/#input-topics","title":"Input topics","text":"Name Type Description /api/external/set/command/local/control TBD Local. Calculated control value. /api/external/set/command/local/heartbeat TBD Local. Heartbeat. /api/external/set/command/local/shift TBD Local. Gear shift like drive, rear and etc. /api/external/set/command/local/turn_signal TBD Local. Turn signal like left turn, right turn and etc. /api/external/set/command/remote/control TBD Remote. Calculated control value. /api/external/set/command/remote/heartbeat TBD Remote. Heartbeat. /api/external/set/command/remote/shift TBD Remote. Gear shift like drive, rear and etc. /api/external/set/command/remote/turn_signal TBD Remote. Turn signal like left turn, right turn and etc."},{"location":"control/external_cmd_selector/#output-topics","title":"Output topics","text":"Name Type Description /control/external_cmd_selector/current_selector_mode TBD Current selected mode, remote or local. /diagnostics diagnostic_msgs::msg::DiagnosticArray Check if node is active or not. /external/selected/external_control_cmd TBD Pass through control command with current mode. /external/selected/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand Pass through gear command with current mode. /external/selected/hazard_lights_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand Pass through hazard light with current mode. /external/selected/heartbeat TBD Pass through heartbeat with current mode. /external/selected/turn_indicators_cmd autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand Pass through turn indicator with current mode."},{"location":"control/joy_controller/","title":"joy_controller","text":""},{"location":"control/joy_controller/#role","title":"Role","text":"

joy_controller is the package to convert a joy msg to Autoware commands (e.g. steering wheel, shift, turn signal, engage) for a vehicle.

"},{"location":"control/joy_controller/#input-output","title":"Input / Output","text":""},{"location":"control/joy_controller/#input-topics","title":"Input topics","text":"Name Type Description ~/input/joy sensor_msgs::msg::Joy joy controller command ~/input/odometry nav_msgs::msg::Odometry ego vehicle odometry to get twist"},{"location":"control/joy_controller/#output-topics","title":"Output topics","text":"Name Type Description ~/output/control_command autoware_auto_control_msgs::msg::AckermannControlCommand lateral and longitudinal control command ~/output/external_control_command tier4_external_api_msgs::msg::ControlCommandStamped lateral and longitudinal control command ~/output/shift tier4_external_api_msgs::msg::GearShiftStamped gear command ~/output/turn_signal tier4_external_api_msgs::msg::TurnSignalStamped turn signal command ~/output/gate_mode tier4_control_msgs::msg::GateMode gate mode (Auto or External) ~/output/heartbeat tier4_external_api_msgs::msg::Heartbeat heartbeat ~/output/vehicle_engage autoware_auto_vehicle_msgs::msg::Engage vehicle engage"},{"location":"control/joy_controller/#parameters","title":"Parameters","text":"Parameter Type Description joy_type string joy controller type (default: DS4) update_rate double update rate to publish control commands accel_ratio double ratio to calculate acceleration (commanded acceleration is ratio * operation) brake_ratio double ratio to calculate deceleration (commanded acceleration is -ratio * operation) steer_ratio double ratio to calculate deceleration (commanded steer is ratio * operation) steering_angle_velocity double steering angle velocity for operation accel_sensitivity double sensitivity to calculate acceleration for external API (commanded acceleration is pow(operation, 1 / sensitivity)) brake_sensitivity double sensitivity to calculate deceleration for external API (commanded acceleration is pow(operation, 1 / sensitivity)) velocity_gain double ratio to calculate velocity by acceleration max_forward_velocity double absolute max velocity to go forward max_backward_velocity double absolute max velocity to go backward backward_accel_ratio double ratio to calculate deceleration (commanded acceleration is -ratio * operation)"},{"location":"control/joy_controller/#p65-joystick-key-map","title":"P65 Joystick Key Map","text":"Action Button Acceleration R2 Brake L2 Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left L1 Turn Signal Right R1 Clear Turn Signal A Gate Mode B Emergency Stop Select Clear Emergency Stop Start Autoware Engage X Autoware Disengage Y Vehicle Engage PS Vehicle Disengage Right Trigger"},{"location":"control/joy_controller/#ds4-joystick-key-map","title":"DS4 Joystick Key Map","text":"Action Button Acceleration R2, \u00d7, or Right Stick Up Brake L2, \u25a1, or Right Stick Down Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left L1 Turn Signal Right R1 Clear Turn Signal SHARE Gate Mode OPTIONS Emergency Stop PS Clear Emergency Stop PS Autoware Engage \u25cb Autoware Disengage \u25cb Vehicle Engage \u25b3 Vehicle Disengage \u25b3"},{"location":"control/lane_departure_checker/","title":"Lane Departure Checker","text":"

The Lane Departure Checker checks if the vehicle follows the trajectory. If it does not, it reports its status via diagnostic_updater.

"},{"location":"control/lane_departure_checker/#features","title":"Features","text":"

This package includes the following features:

"},{"location":"control/lane_departure_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/lane_departure_checker/#how-to-extend-footprint-by-covariance","title":"How to extend footprint by covariance","text":"
  1. Calculate the standard deviation of the error ellipse (covariance) in the vehicle coordinate frame.

    1. Transform the covariance into the vehicle coordinate frame.

    Calculate the covariance in the vehicle coordinate frame.

    2. The longitudinal length we want to expand corresponds to the marginal distribution of x_{vehicle}, which is represented by Cov_{vehicle}(0,0). In the same way, the lateral length is represented by Cov_{vehicle}(1,1). Wikipedia reference here.

  2. Expand the footprint based on the standard deviation multiplied by footprint_margin_scale (see the sketch below).

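A minimal sketch of steps 1 and 2 (a generic 2-D covariance rotation; the helper name and exact form are assumptions, not the node's code):

#include <cmath>\n\nstruct FootprintMargins\n{\n  double longitudinal;\n  double lateral;\n};\n\n// Rotate the map-frame covariance into the vehicle frame and scale the\n// standard deviations by footprint_margin_scale.\nFootprintMargins footprintMargins(\n  const double cov_xx, const double cov_xy, const double cov_yy, const double yaw,\n  const double footprint_margin_scale)\n{\n  const double c = std::cos(yaw);\n  const double s = std::sin(yaw);\n  // Cov_vehicle = R^T * Cov_map * R, with R the rotation by the vehicle yaw\n  const double cov_v00 = c * c * cov_xx + 2.0 * c * s * cov_xy + s * s * cov_yy;\n  const double cov_v11 = s * s * cov_xx - 2.0 * c * s * cov_xy + c * c * cov_yy;\n  return {\n    footprint_margin_scale * std::sqrt(cov_v00), footprint_margin_scale * std::sqrt(cov_v11)};\n}\n
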
"},{"location":"control/lane_departure_checker/#interface","title":"Interface","text":""},{"location":"control/lane_departure_checker/#input","title":"Input","text":""},{"location":"control/lane_departure_checker/#output","title":"Output","text":""},{"location":"control/lane_departure_checker/#parameters","title":"Parameters","text":""},{"location":"control/lane_departure_checker/#node-parameters","title":"Node Parameters","text":"Name Type Description Default value update_rate double Frequency for publishing [Hz] 10.0 visualize_lanelet bool Flag for visualizing lanelet False"},{"location":"control/lane_departure_checker/#core-parameters","title":"Core Parameters","text":"Name Type Description Default value footprint_margin_scale double Coefficient for expanding footprint margin. Multiplied by 1 standard deviation. 1.0 resample_interval double Minimum Euclidean distance between points when resample trajectory.[m] 0.3 max_deceleration double Maximum deceleration when calculating braking distance. 2.8 delay_time double Delay time which took to actuate brake when calculating braking distance. [second] 1.3 max_lateral_deviation double Maximum lateral deviation in vehicle coordinate. [m] 2.0 max_longitudinal_deviation double Maximum longitudinal deviation in vehicle coordinate. [m] 2.0 max_yaw_deviation_deg double Maximum ego yaw deviation from trajectory. [deg] 60.0"},{"location":"control/mpc_lateral_controller/","title":"MPC Lateral Controller","text":"

This is the design document for the lateral controller node in the trajectory_follower_node package.

"},{"location":"control/mpc_lateral_controller/#purpose-use-cases","title":"Purpose / Use cases","text":"

This node is used to generate lateral control commands (steering angle and steering rate) when following a path.

"},{"location":"control/mpc_lateral_controller/#design","title":"Design","text":"

The node uses an implementation of linear model predictive control (MPC) for accurate path tracking. The MPC uses a model of the vehicle to simulate the trajectory resulting from the control command. The optimization of the control command is formulated as a Quadratic Program (QP).
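
Concretely, the optimized input sequence (the steering commands over the prediction horizon) is the solution of a QP of the standard form shown below. The matrices H and g are built internally from the linearized vehicle model, the reference trajectory, and the weights listed in the parameter section; this is only the generic shape of the problem, not its exact construction.

$$ \min_{u} \; \frac{1}{2} u^{T} H u + g^{T} u \quad \text{subject to} \quad u_{min} \le u \le u_{max} $$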

Different vehicle models are implemented:

For the optimization, a Quadratic Programming (QP) solver is used and two options are currently implemented:

"},{"location":"control/mpc_lateral_controller/#filtering","title":"Filtering","text":"

Filtering is required for good noise reduction. A Butterworth filter is applied to the yaw and lateral errors used as input of the MPC, as well as to the output steering angle. Other filtering methods can be considered as long as the noise reduction performance is good enough. The moving average filter, for example, is not suited and can yield worse results than no filtering at all.
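
As an illustration only (the package ships its own filter implementation), a discrete second-order Butterworth low-pass filter obtained with the bilinear transform can be written as follows; cutoff_hz corresponds to parameters such as steering_lpf_cutoff_hz.

#include <cmath>\n\n// Minimal 2nd-order Butterworth low-pass filter (bilinear transform),\n// shown only to illustrate the filtering applied to the MPC inputs/outputs.\nclass ButterworthLpf\n{\npublic:\nButterworthLpf(const double cutoff_hz, const double sample_hz)\n{\nconst double k = std::tan(M_PI * cutoff_hz / sample_hz);\nconst double norm = 1.0 / (1.0 + std::sqrt(2.0) * k + k * k);\nb0_ = k * k * norm;\nb1_ = 2.0 * b0_;\nb2_ = b0_;\na1_ = 2.0 * (k * k - 1.0) * norm;\na2_ = (1.0 - std::sqrt(2.0) * k + k * k) * norm;\n}\ndouble filter(const double x)\n{\nconst double y = b0_ * x + b1_ * x1_ + b2_ * x2_ - a1_ * y1_ - a2_ * y2_;\nx2_ = x1_; x1_ = x; y2_ = y1_; y1_ = y;\nreturn y;\n}\nprivate:\ndouble b0_{}, b1_{}, b2_{}, a1_{}, a2_{};\ndouble x1_{}, x2_{}, y1_{}, y2_{};\n};\n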

"},{"location":"control/mpc_lateral_controller/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The tracking is not accurate if the first point of the reference trajectory is at or in front of the current ego pose.

"},{"location":"control/mpc_lateral_controller/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/mpc_lateral_controller/#inputs","title":"Inputs","text":"

Set the following from the controller_node

"},{"location":"control/mpc_lateral_controller/#outputs","title":"Outputs","text":"

Return a LateralOutput, which contains the following, to the controller node.

"},{"location":"control/mpc_lateral_controller/#mpc-class","title":"MPC class","text":"

The MPC class (defined in mpc.hpp) provides the interface with the MPC algorithm. Once a vehicle model, a QP solver, and the reference trajectory to follow have been set (using setVehicleModel(), setQPSolver(), setReferenceTrajectory()), a lateral control command can be calculated by providing the current steer, velocity, and pose to the calculateMPC() function.
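
A rough usage sketch is shown below. The argument lists are simplified and do not reflect the exact signatures declared in mpc.hpp.

// Simplified usage sketch of the MPC class; argument lists are illustrative.\nMPC mpc;\nmpc.setVehicleModel(vehicle_model_ptr); // e.g. a kinematics model\nmpc.setQPSolver(qp_solver_ptr); // e.g. the unconstraint_fast solver\nmpc.setReferenceTrajectory(reference_trajectory);\n\n// every control cycle: current steer, velocity and pose in, command out\nconst auto lateral_cmd = mpc.calculateMPC(current_steer, current_velocity, current_pose);\n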

"},{"location":"control/mpc_lateral_controller/#parameter-description","title":"Parameter description","text":"

The default parameters defined in param/lateral_controller_defaults.param.yaml are adjusted to the AutonomouStuff Lexus RX 450h for under 40 km/h driving.

Name Type Description Default value show_debug_info bool display debug info false traj_resample_dist double distance of waypoints in resampling [m] 0.1 enable_path_smoothing bool path smoothing flag. This should be true when path resampling is used, to reduce resampling noise. true path_filter_moving_ave_num int number of data points of the moving average filter for path smoothing 35 path_smoothing_times int number of times the path smoothing filter is applied 1 curvature_smoothing_num_ref_steer double index distance of points used in curvature calculation for the reference steer command: p(i-num), p(i), p(i+num). A larger num yields less noisy values. 35 curvature_smoothing_num_traj double index distance of points used in curvature calculation for the trajectory: p(i-num), p(i), p(i+num). A larger num yields less noisy values. 1 extend_trajectory_for_end_yaw_control bool trajectory extending flag for end yaw control. true steering_lpf_cutoff_hz double cutoff frequency of the lowpass filter for the steering output command [Hz] 3.0 admissible_position_error double stop vehicle when the following position error is larger than this value [m]. 5.0 admissible_yaw_error_rad double stop vehicle when the following yaw angle error is larger than this value [rad]. 1.57 stop_state_entry_ego_speed *1 double threshold value of the ego vehicle speed used for the stop state entry condition 0.0 stop_state_entry_target_speed *1 double threshold value of the target speed used for the stop state entry condition 0.0 converged_steer_rad double threshold value of the steer convergence 0.1 keep_steer_control_until_converged bool keep steer control until the steer is converged true new_traj_duration_time double threshold value of the time to be considered a new trajectory 1.0 new_traj_end_dist double threshold value of the distance between trajectory ends to be considered a new trajectory 0.3

(*1) To prevent unnecessary steering movement, the steering command is fixed to the previous value in the stop state.

"},{"location":"control/mpc_lateral_controller/#mpc-algorithm","title":"MPC algorithm","text":"Name Type Description Default value qp_solver_type string QP solver option. described below in detail. unconstraint_fast vehicle_model_type string vehicle model option. described below in detail. kinematics prediction_horizon int total prediction step for MPC 70 prediction_sampling_time double prediction period for one step [s] 0.1 weight_lat_error double weight for lateral error 0.1 weight_heading_error double weight for heading error 0.0 weight_heading_error_squared_vel_coeff double weight for heading error * velocity 5.0 weight_steering_input double weight for steering error (steer command - reference steer) 1.0 weight_steering_input_squared_vel_coeff double weight for steering error (steer command - reference steer) * velocity 0.25 weight_lat_jerk double weight for lateral jerk (steer(i) - steer(i-1)) * velocity 0.0 weight_terminal_lat_error double terminal cost weight for lateral error 1.0 weight_terminal_heading_error double terminal cost weight for heading error 0.1 zero_ff_steer_deg double threshold of feedforward angle [deg]. feedforward angle smaller than this value is set to zero. 2.0"},{"location":"control/mpc_lateral_controller/#vehicle","title":"Vehicle","text":"Name Type Description Default value cg_to_front_m double distance from baselink to the front axle[m] 1.228 cg_to_rear_m double distance from baselink to the rear axle [m] 1.5618 mass_fl double mass applied to front left tire [kg] 600 mass_fr double mass applied to front right tire [kg] 600 mass_rl double mass applied to rear left tire [kg] 600 mass_rr double mass applied to rear right tire [kg] 600 cf double front cornering power [N/rad] 155494.663 cr double rear cornering power [N/rad] 155494.663 steering_tau double steering dynamics time constant (1d approximation) for vehicle model [s] 0.3"},{"location":"control/mpc_lateral_controller/#how-to-tune-mpc-parameters","title":"How to tune MPC parameters","text":"
  1. Set appropriate vehicle kinematics parameters for the distance to the front and rear axles and the max steer angle. Also check that the input VehicleKinematicState has appropriate values (speed: vehicle rear-wheels-center velocity [km/h], angle: steering (tire) angle [rad]). These values give the vehicle information to the controller for path following. Errors in these values cause fundamental tracking errors.

  2. Set an appropriate vehicle dynamics parameter steering_tau, which is the approximated delay from the steering angle command to the actual steering angle.

  3. Set weight_steering_input = 1.0, weight_lat_error = 0.1, and the other weights to 0. If the vehicle oscillates when driving at low speed, set weight_lat_error smaller.

  4. Adjust the other weights. One simple way of tuning is to increase weight_lat_error until oscillation occurs. If the vehicle is unstable with a very small weight_lat_error, increase the terminal weights weight_terminal_lat_error and weight_terminal_heading_error to improve tracking stability. A larger prediction_horizon and a smaller prediction_sampling_time are effective for tracking performance, but there is a trade-off with computational cost. Other parameters can be adjusted as below.

"},{"location":"control/mpc_lateral_controller/#references-external-links","title":"References / External links","text":""},{"location":"control/mpc_lateral_controller/#related-issues","title":"Related issues","text":""},{"location":"control/obstacle_collision_checker/","title":"obstacle_collision_checker","text":""},{"location":"control/obstacle_collision_checker/#purpose","title":"Purpose","text":"

obstacle_collision_checker is a module that checks the predicted trajectory for collisions with obstacles and publishes diagnostic errors when a collision is found.

"},{"location":"control/obstacle_collision_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/obstacle_collision_checker/#flow-chart","title":"Flow chart","text":""},{"location":"control/obstacle_collision_checker/#algorithms","title":"Algorithms","text":""},{"location":"control/obstacle_collision_checker/#check-data","title":"Check data","text":"

Check that obstacle_collision_checker receives the no-ground pointcloud, the predicted trajectory, the reference trajectory, and the current velocity data.

"},{"location":"control/obstacle_collision_checker/#diagnostic-update","title":"Diagnostic update","text":"

If any collision is found on the predicted path, this module sets the diagnostic status to ERROR; otherwise it sets it to OK.
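
A much simplified sketch of such a check is shown below; it approximates the expanded vehicle footprint by a circle of radius footprint_radius around each resampled trajectory point, whereas the actual implementation uses the real footprint polygon.

#include <cmath>\n#include <vector>\n\nstruct Point2d { double x; double y; };\n\n// Illustrative only: report a collision if any obstacle point is closer to a\n// predicted trajectory point than the (circle-approximated) footprint radius.\nbool hasCollision(\nconst std::vector<Point2d> & predicted_traj, const std::vector<Point2d> & obstacle_points,\nconst double footprint_radius)\n{\nfor (const auto & p : predicted_traj) {\nfor (const auto & o : obstacle_points) {\nif (std::hypot(p.x - o.x, p.y - o.y) < footprint_radius) {\nreturn true;\n}\n}\n}\nreturn false;\n}\n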

"},{"location":"control/obstacle_collision_checker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/obstacle_collision_checker/#input","title":"Input","text":"Name Type Description ~/input/trajectory autoware_auto_planning_msgs::msg::Trajectory Reference trajectory ~/input/trajectory autoware_auto_planning_msgs::msg::Trajectory Predicted trajectory /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"control/obstacle_collision_checker/#output","title":"Output","text":"Name Type Description ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization"},{"location":"control/obstacle_collision_checker/#parameters","title":"Parameters","text":"Name Type Description Default value delay_time double Delay time of vehicle [s] 0.3 footprint_margin double Foot print margin [m] 0.0 max_deceleration double Max deceleration for ego vehicle to stop [m/s^2] 2.0 resample_interval double Interval for resampling trajectory [m] 0.3 search_radius double Search distance from trajectory to point cloud [m] 5.0"},{"location":"control/obstacle_collision_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

To perform a proper collision check, it is necessary to obtain a properly predicted trajectory and obstacle pointclouds without noise.

"},{"location":"control/operation_mode_transition_manager/","title":"operation_mode_transition_manager","text":""},{"location":"control/operation_mode_transition_manager/#purpose-use-cases","title":"Purpose / Use cases","text":"

This module is responsible for managing the different modes of operation for the Autoware system. The possible modes are:

There is also an In Transition state that occurs during each mode transition. During this state, the transition to the new operator is not yet complete, and the previous operator is still responsible for controlling the system until the transition is complete. Some actions may be restricted during the In Transition state, such as sudden braking or steering. (This is restricted by the vehicle_cmd_gate).

"},{"location":"control/operation_mode_transition_manager/#features","title":"Features","text":" "},{"location":"control/operation_mode_transition_manager/#design","title":"Design","text":""},{"location":"control/operation_mode_transition_manager/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/operation_mode_transition_manager/#inputs","title":"Inputs","text":"

For the mode transition:

For the transition availability/completion check:

For the backward compatibility (to be removed):

"},{"location":"control/operation_mode_transition_manager/#outputs","title":"Outputs","text":" "},{"location":"control/operation_mode_transition_manager/#parameters","title":"Parameters","text":"Name Type Description Default value transition_timeout double If the state transition is not completed within this time, it is considered a transition failure. 10.0 frequency_hz double running hz 10.0 check_engage_condition double If false, autonomous transition is always available 0.1 nearest_dist_deviation_threshold double distance threshold used to find nearest trajectory point 3.0 nearest_yaw_deviation_threshold double angle threshold used to find nearest trajectory point 1.57

For engage_acceptable_limits related parameters:

Name Type Description Default value allow_autonomous_in_stopped bool If true, autonomous transition is available when the vehicle is stopped even if other checks fail. true dist_threshold double the distance between the trajectory and ego vehicle must be within this distance for Autonomous transition. 1.5 yaw_threshold double the yaw angle between trajectory and ego vehicle must be within this threshold for Autonomous transition. 0.524 speed_upper_threshold double the velocity deviation between control command and ego vehicle must be within this threshold for Autonomous transition. 10.0 speed_lower_threshold double the velocity deviation between the control command and ego vehicle must be within this threshold for Autonomous transition. -10.0 acc_threshold double the control command acceleration must be less than this threshold for Autonomous transition. 1.5 lateral_acc_threshold double the control command lateral acceleration must be less than this threshold for Autonomous transition. 1.0 lateral_acc_diff_threshold double the lateral acceleration deviation between the control command must be less than this threshold for Autonomous transition. 0.5

For stable_check related parameters:

Name Type Description Default value duration double the stable condition must be satisfied for this duration to complete the transition. 0.1 dist_threshold double the distance between the trajectory and ego vehicle must be within this distance to complete Autonomous transition. 1.5 yaw_threshold double the yaw angle between trajectory and ego vehicle must be within this threshold to complete Autonomous transition. 0.262 speed_upper_threshold double the velocity deviation between control command and ego vehicle must be within this threshold to complete Autonomous transition. 2.0 speed_lower_threshold double the velocity deviation between control command and ego vehicle must be within this threshold to complete Autonomous transition. 2.0"},{"location":"control/operation_mode_transition_manager/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"control/pid_longitudinal_controller/","title":"PID Longitudinal Controller","text":""},{"location":"control/pid_longitudinal_controller/#purpose-use-cases","title":"Purpose / Use cases","text":"

The longitudinal_controller computes the target acceleration to achieve the target velocity set at each point of the target trajectory using feedforward/feedback control.

It also contains a slope force correction that takes into account road slope information, and a delay compensation function. It is assumed that the target acceleration calculated here will be properly realized by the vehicle interface.

Note that the use of this module is not mandatory for Autoware if the vehicle supports the \"target speed\" interface.

"},{"location":"control/pid_longitudinal_controller/#design-inner-workings-algorithms","title":"Design / Inner-workings / Algorithms","text":""},{"location":"control/pid_longitudinal_controller/#states","title":"States","text":"

This module has four states, as shown below, in order to handle special processing in specific situations.

The state transition diagram is shown below.

"},{"location":"control/pid_longitudinal_controller/#logics","title":"Logics","text":""},{"location":"control/pid_longitudinal_controller/#control-block-diagram","title":"Control Block Diagram","text":""},{"location":"control/pid_longitudinal_controller/#feedforward-ff","title":"FeedForward (FF)","text":"

The reference acceleration set in the trajectory and slope compensation terms are output as a feedforward. Under ideal conditions with no modeling error, this FF term alone should be sufficient for velocity tracking.

Tracking errors caused by modeling or discretization errors are corrected by the feedback control (currently a PID controller).

"},{"location":"control/pid_longitudinal_controller/#brake-keeping","title":"Brake keeping","text":"

From the viewpoint of ride comfort, stopping with 0 acceleration is important because it reduces the impact of braking. However, if the target acceleration when stopping is 0, the vehicle may cross over the stop line or accelerate a little in front of the stop line due to vehicle model error or gradient estimation error.

For reliable stopping, the target acceleration calculated by the FeedForward system is limited to a negative acceleration when stopping.

"},{"location":"control/pid_longitudinal_controller/#slope-compensation","title":"Slope compensation","text":"

Based on the slope information, a compensation term is added to the target acceleration.

There are two sources of the slope information, which can be switched by a parameter.

"},{"location":"control/pid_longitudinal_controller/#pid-control","title":"PID control","text":"

For deviations that cannot be handled by FeedForward control, such as model errors, PID control is used to construct a feedback system.

This PID control calculates the target acceleration from the deviation between the current ego-velocity and the target velocity.

This PID logic has a maximum value for the output of each term. This is to prevent the following:

Also, the integral term is not accumulated when the vehicle is stopped. This prevents unintended accumulation of the integral term in cases where, for example, Autoware assumes the vehicle is engaged but an external system has locked the vehicle so that it cannot start. On the other hand, if the vehicle gets stuck in a depression in the road surface when starting, it will never start; this is currently being addressed by the developers.

At present, PID control is implemented from the viewpoint of trade-off between development/maintenance cost and performance. This may be replaced by a higher performance controller (adaptive control or robust control) in future development.

"},{"location":"control/pid_longitudinal_controller/#time-delay-compensation","title":"Time delay compensation","text":"

At high speeds, the delay of actuator systems such as gas pedals and brakes has a significant impact on driving accuracy. Depending on the actuating principle of the vehicle, the mechanism that physically controls the gas pedal and brake typically has a delay of about one hundred milliseconds.

In this controller, the predicted ego-velocity and the target velocity after the delay time are calculated and used for the feedback to address the time delay problem.
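
A minimal sketch of this prediction is shown below, assuming the acceleration commands issued during the last delay_compensation_time seconds are stored; the function name and data layout are illustrative only.

#include <vector>\n\n// Illustrative only: predict the ego velocity after the actuation delay by\n// forward-integrating the recently issued acceleration commands.\ndouble predictVelocityAfterDelay(\nconst double current_vel, const std::vector<double> & past_acc_cmds, const double dt)\n{\ndouble vel = current_vel;\nfor (const double acc : past_acc_cmds) {\nvel += acc * dt; // forward integration over the delay horizon\n}\nreturn vel;\n}\n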

"},{"location":"control/pid_longitudinal_controller/#slope-compensation_1","title":"Slope compensation","text":"

Based on the slope information, a compensation term is added to the target acceleration.

There are two sources of the slope information, which can be switched by a parameter.

"},{"location":"control/pid_longitudinal_controller/#assumptions-known-limits","title":"Assumptions / Known limits","text":"
  1. Smoothed target velocity and its acceleration shall be set in the trajectory
    1. The velocity command is not smoothed inside the controller (only noise may be removed).
    2. For a step-like target signal, tracking is performed as fast as possible.
  2. The vehicle velocity must be an appropriate value
    1. The ego-velocity must be a signed-value corresponding to the forward/backward direction
    2. The ego-velocity should be given with appropriate noise processing.
    3. If there is a large amount of noise in the ego-velocity, the tracking performance will be significantly reduced.
  3. The output of this controller must be achieved by later modules (e.g. vehicle interface).
    1. If the vehicle interface does not have the target velocity or acceleration interface (e.g., the vehicle only has a gas pedal and brake interface), an appropriate conversion must be done after this controller.
"},{"location":"control/pid_longitudinal_controller/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/pid_longitudinal_controller/#input","title":"Input","text":"

Set the following from the controller_node

"},{"location":"control/pid_longitudinal_controller/#output","title":"Output","text":"

Return a LongitudinalOutput, which contains the following, to the controller node.

"},{"location":"control/pid_longitudinal_controller/#pidcontroller-class","title":"PIDController class","text":"

The PIDController class is straightforward to use. First, gains and limits must be set (using setGains() and setLimits()) for the proportional (P), integral (I), and derivative (D) components. Then, the feedback value can be calculated by providing the current error and the time step duration to the calculate() function.
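
The sketch below illustrates the structure of such a PID with per-term saturation and a freezable integral term, as described in the PID control section. It is not the package's actual PIDController; the default values mirror the DRIVE parameters listed below but are otherwise assumptions.

#include <algorithm>\n\n// Illustrative only: PID with per-term output limits and an integral term\n// that can be frozen (e.g. while the vehicle is stopped) to avoid wind-up.\nstruct SimplePid\n{\ndouble kp{1.0}, ki{0.1}, kd{0.0};\ndouble max_p{1.0}, min_p{-1.0}, max_i{0.3}, min_i{-0.3}, max_d{0.0}, min_d{0.0};\ndouble integral{0.0}, prev_error{0.0};\n\ndouble calculate(const double error, const double dt, const bool enable_integration)\n{\nif (enable_integration) { integral += error * dt; }\nconst double p = std::clamp(kp * error, min_p, max_p);\nconst double i = std::clamp(ki * integral, min_i, max_i);\nconst double d = std::clamp(kd * (error - prev_error) / dt, min_d, max_d); // dt > 0 assumed\nprev_error = error;\nreturn p + i + d;\n}\n};\n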

"},{"location":"control/pid_longitudinal_controller/#parameter-description","title":"Parameter description","text":"

The default parameters defined in param/longitudinal_controller_defaults.param.yaml are adjusted to the AutonomouStuff Lexus RX 450h for under 40 km/h driving.

Name Type Description Default value delay_compensation_time double delay for longitudinal control [s] 0.17 enable_smooth_stop bool flag to enable transition to STOPPING true enable_overshoot_emergency bool flag to enable transition to EMERGENCY when the ego is over the stop line with a certain distance. See emergency_state_overshoot_stop_dist. true enable_large_tracking_error_emergency bool flag to enable transition to EMERGENCY when the closest trajectory point search fails due to a large deviation between trajectory and ego pose. true enable_slope_compensation bool flag to modify output acceleration for slope compensation. The source of the slope angle can be selected from ego-pose or trajectory angle. See use_trajectory_for_pitch_calculation. true enable_brake_keeping_before_stop bool flag to keep a certain acceleration during DRIVE state before the ego stops. See Brake keeping. false enable_keep_stopped_until_steer_convergence bool flag to keep stopped condition until the steer converges. true max_acc double max value of output acceleration [m/s^2] 3.0 min_acc double min value of output acceleration [m/s^2] -5.0 max_jerk double max value of jerk of output acceleration [m/s^3] 2.0 min_jerk double min value of jerk of output acceleration [m/s^3] -5.0 use_trajectory_for_pitch_calculation bool If true, the slope is estimated from trajectory z-level. Otherwise the pitch angle of the ego pose is used. false lpf_pitch_gain double gain of low-pass filter for pitch estimation 0.95 max_pitch_rad double max value of estimated pitch [rad] 0.1 min_pitch_rad double min value of estimated pitch [rad] -0.1"},{"location":"control/pid_longitudinal_controller/#state-transition","title":"State transition","text":"Name Type Description Default value drive_state_stop_dist double The state will transit to DRIVE when the distance to the stop point is longer than drive_state_stop_dist + drive_state_offset_stop_dist [m] 0.5 drive_state_offset_stop_dist double The state will transit to DRIVE when the distance to the stop point is longer than drive_state_stop_dist + drive_state_offset_stop_dist [m] 1.0 stopping_state_stop_dist double The state will transit to STOPPING when the distance to the stop point is shorter than stopping_state_stop_dist [m] 0.5 stopped_state_entry_vel double threshold of the ego velocity in transition to the STOPPED state [m/s] 0.01 stopped_state_entry_acc double threshold of the ego acceleration in transition to the STOPPED state [m/s^2] 0.1 emergency_state_overshoot_stop_dist double If enable_overshoot_emergency is true and the ego is emergency_state_overshoot_stop_dist-meter ahead of the stop point, the state will transit to EMERGENCY. [m] 1.5 emergency_state_traj_trans_dev double If the ego's position is emergency_state_traj_trans_dev meters away from the nearest trajectory point, the state will transit to EMERGENCY. [m] 3.0 emergency_state_traj_rot_dev double If the ego's orientation is emergency_state_traj_rot_dev rad away from the nearest trajectory point orientation, the state will transit to EMERGENCY. [rad] 0.784"},{"location":"control/pid_longitudinal_controller/#drive-parameter","title":"DRIVE Parameter","text":"Name Type Description Default value kp double p gain for longitudinal control 1.0 ki double i gain for longitudinal control 0.1 kd double d gain for longitudinal control 0.0 max_out double max value of PID's output acceleration during DRIVE state [m/s^2] 1.0 min_out double min value of PID's output acceleration during DRIVE state [m/s^2] -1.0 max_p_effort double max value of acceleration with p gain 1.0 min_p_effort double min value of acceleration with p gain -1.0 max_i_effort double max value of acceleration with i gain 0.3 min_i_effort double min value of acceleration with i gain -0.3 max_d_effort double max value of acceleration with d gain 0.0 min_d_effort double min value of acceleration with d gain 0.0 lpf_vel_error_gain double gain of low-pass filter for velocity error 0.9 current_vel_threshold_pid_integration double Velocity error is integrated for I-term only when the absolute value of current velocity is larger than this parameter. [m/s] 0.5 brake_keeping_acc double If enable_brake_keeping_before_stop is true, a certain acceleration is kept during DRIVE state before the ego stops [m/s^2] See Brake keeping. 0.2"},{"location":"control/pid_longitudinal_controller/#stopping-parameter-smooth-stop","title":"STOPPING Parameter (smooth stop)","text":"

Smooth stop is enabled if enable_smooth_stop is true. In smooth stop, a strong acceleration (strong_acc) is output first to decrease the ego velocity. Then, a weak acceleration (weak_acc) is output to stop smoothly by decreasing the ego jerk. If the ego does not stop within a certain time, or runs several meters past the stop point, a weak acceleration to stop right now (weak_stop_acc) is output. If the ego is still running, a strong acceleration to stop right now (strong_stop_acc) is output.
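
The staging can be sketched as follows. This is a hypothetical simplification: the field names mirror the smooth_stop_* parameters below, dist_to_stop_point is assumed to be negative once the ego has passed the stop point, and the actual implementation contains additional conditions.

struct SmoothStopParams\n{\ndouble strong_acc{-0.8}, weak_acc{-0.3}, weak_stop_acc{-0.8}, strong_stop_acc{-3.4};\ndouble max_fast_vel{0.5}, weak_stop_time{0.8}, weak_stop_dist{-0.3}, strong_stop_dist{-0.5};\n};\n\n// Illustrative only: pick the smooth-stop acceleration stage.\ndouble selectSmoothStopAcc(\nconst SmoothStopParams & p, const double ego_vel, const double dist_to_stop_point,\nconst double elapsed_weak_acc_time_s)\n{\nif (dist_to_stop_point < p.strong_stop_dist) {\nreturn p.strong_stop_acc; // far over the stop point: stop right now, strongly\n}\nif (dist_to_stop_point < p.weak_stop_dist || elapsed_weak_acc_time_s > p.weak_stop_time) {\nreturn p.weak_stop_acc; // slightly over the stop point or taking too long\n}\nif (ego_vel > p.max_fast_vel) {\nreturn p.strong_acc; // the ego is still running fast: decelerate strongly first\n}\nreturn p.weak_acc; // default: decrease the jerk and stop smoothly\n}\n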

Name Type Description Default value smooth_stop_max_strong_acc double max strong acceleration [m/s^2] -0.5 smooth_stop_min_strong_acc double min strong acceleration [m/s^2] -0.8 smooth_stop_weak_acc double weak acceleration [m/s^2] -0.3 smooth_stop_weak_stop_acc double weak acceleration to stop right now [m/s^2] -0.8 smooth_stop_strong_stop_acc double strong acceleration to be output when the ego is smooth_stop_strong_stop_dist-meter over the stop point. [m/s^2] -3.4 smooth_stop_max_fast_vel double max fast vel to judge the ego is running fast [m/s]. If the ego is running fast, strong acceleration will be output. 0.5 smooth_stop_min_running_vel double min ego velocity to judge if the ego is running or not [m/s] 0.01 smooth_stop_min_running_acc double min ego acceleration to judge if the ego is running or not [m/s^2] 0.01 smooth_stop_weak_stop_time double max time to output weak acceleration [s]. After this, strong acceleration will be output. 0.8 smooth_stop_weak_stop_dist double Weak acceleration will be output when the ego is smooth_stop_weak_stop_dist-meter before the stop point. [m] -0.3 smooth_stop_strong_stop_dist double Strong acceleration will be output when the ego is smooth_stop_strong_stop_dist-meter over the stop point. [m] -0.5"},{"location":"control/pid_longitudinal_controller/#stopped-parameter","title":"STOPPED Parameter","text":"Name Type Description Default value stopped_vel double target velocity in STOPPED state [m/s] 0.0 stopped_acc double target acceleration in STOPPED state [m/s^2] -3.4 stopped_jerk double target jerk in STOPPED state [m/s^3] -5.0"},{"location":"control/pid_longitudinal_controller/#emergency-parameter","title":"EMERGENCY Parameter","text":"Name Type Description Default value emergency_vel double target velocity in EMERGENCY state [m/s] 0.0 emergency_acc double target acceleration in an EMERGENCY state [m/s^2] -5.0 emergency_jerk double target jerk in an EMERGENCY state [m/s^3] -3.0"},{"location":"control/pid_longitudinal_controller/#references-external-links","title":"References / External links","text":""},{"location":"control/pid_longitudinal_controller/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"control/pid_longitudinal_controller/#related-issues","title":"Related issues","text":""},{"location":"control/shift_decider/","title":"Shift Decider","text":""},{"location":"control/shift_decider/#purpose","title":"Purpose","text":"

shift_decider is a module that decides the gear shift command from the Ackermann control command.
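
The decision can be sketched as follows; the gear constants mirror autoware_auto_vehicle_msgs::msg::GearCommand, and the threshold and function name are assumptions for illustration.

#include <cstdint>\n\nconstexpr uint8_t DRIVE = 2; // mirrors GearCommand::DRIVE (assumption)\nconstexpr uint8_t REVERSE = 20; // mirrors GearCommand::REVERSE (assumption)\n\n// Illustrative only: decide the gear from the sign of the commanded speed.\nuint8_t decideGear(const double target_speed, const uint8_t prev_gear)\n{\nconstexpr double eps = 1e-3;\nif (target_speed > eps) { return DRIVE; } // forward motion commanded\nif (target_speed < -eps) { return REVERSE; } // backward motion commanded\nreturn prev_gear; // keep the current gear while stopped\n}\n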

"},{"location":"control/shift_decider/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/shift_decider/#flow-chart","title":"Flow chart","text":""},{"location":"control/shift_decider/#algorithms","title":"Algorithms","text":""},{"location":"control/shift_decider/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/shift_decider/#input","title":"Input","text":"Name Type Description ~/input/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand Control command for vehicle."},{"location":"control/shift_decider/#output","title":"Output","text":"Name Type Description ~output/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand Gear for drive forward / backward."},{"location":"control/shift_decider/#parameters","title":"Parameters","text":"

none.

"},{"location":"control/shift_decider/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"control/trajectory_follower_base/","title":"Trajectory Follower","text":"

This is the design document for the trajectory_follower package.

"},{"location":"control/trajectory_follower_base/#purpose-use-cases","title":"Purpose / Use cases","text":"

This package provides the interface of the longitudinal and lateral controllers used by the node of the trajectory_follower_node package. A concrete controller can be implemented by deriving from the longitudinal and lateral base interfaces.

"},{"location":"control/trajectory_follower_base/#design","title":"Design","text":"

There are lateral and longitudinal base interface classes, and each algorithm inherits from the corresponding class for its implementation. The interface classes have the following base functions.

See the Design of Trajectory Follower Nodes for how these functions work in the node.

"},{"location":"control/trajectory_follower_base/#separated-lateral-steering-and-longitudinal-velocity-controls","title":"Separated lateral (steering) and longitudinal (velocity) controls","text":"

This longitudinal controller assumes that the roles of lateral and longitudinal control are separated as follows.

Ideally, dealing with the lateral and longitudinal control as a single mixed problem can achieve high performance. In contrast, there are two reasons to provide the velocity controller as a stand-alone function, described below.

"},{"location":"control/trajectory_follower_base/#complex-requirements-for-longitudinal-motion","title":"Complex requirements for longitudinal motion","text":"

The longitudinal vehicle behavior that humans expect is difficult to express in a single logic. For example, to achieve a human-like movement, the expected behavior just before stopping differs depending on whether the ego-position is ahead of or behind the stop line, or whether the current speed is higher or lower than the target speed.

In addition, some vehicles have difficulty measuring the ego-speed at extremely low speeds. In such cases, a configuration that can improve the functionality of the longitudinal control without affecting the lateral control is important.

There are many characteristics and needs that are unique to longitudinal control. Designing them separately from the lateral control keeps the modules less coupled and improves maintainability.

"},{"location":"control/trajectory_follower_base/#nonlinear-coupling-of-lateral-and-longitudinal-motion","title":"Nonlinear coupling of lateral and longitudinal motion","text":"

The lat-lon mixed control problem is very complex and uses nonlinear optimization to achieve high performance. Since it is difficult to guarantee the convergence of the nonlinear optimization, a simple control logic is also necessary for development.

Also, the benefits of simultaneous longitudinal and lateral control are small if the vehicle doesn't move at high speed.

"},{"location":"control/trajectory_follower_base/#related-issues","title":"Related issues","text":""},{"location":"control/trajectory_follower_node/","title":"Trajectory Follower Nodes","text":""},{"location":"control/trajectory_follower_node/#purpose","title":"Purpose","text":"

Generate control commands to follow a given Trajectory.

"},{"location":"control/trajectory_follower_node/#design","title":"Design","text":"

This node executes the functionalities implemented in the controller classes derived from the trajectory_follower_base package. It holds instances of those functionalities, gives them input data for their calculations, and publishes the control commands.

By default, a controller instance of the Controller class, described below, is used.

The process flow of the Controller class is as follows.

// 1. create input data\nconst auto input_data = createInputData(*get_clock());\nif (!input_data) {\nreturn;\n}\n\n// 2. check if controllers are ready\nconst bool is_lat_ready = lateral_controller_->isReady(*input_data);\nconst bool is_lon_ready = longitudinal_controller_->isReady(*input_data);\nif (!is_lat_ready || !is_lon_ready) {\nreturn;\n}\n\n// 3. run controllers\nconst auto lat_out = lateral_controller_->run(*input_data);\nconst auto lon_out = longitudinal_controller_->run(*input_data);\n\n// 4. sync with each other controllers\nlongitudinal_controller_->sync(lat_out.sync_data);\nlateral_controller_->sync(lon_out.sync_data);\n\n// 5. publish control command\ncontrol_cmd_pub_->publish(out);\n

Giving the longitudinal controller information about steering convergence allows it to keep controlling the steering while stopped, if the following parameters are true.

"},{"location":"control/trajectory_follower_node/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/trajectory_follower_node/#inputs","title":"Inputs","text":""},{"location":"control/trajectory_follower_node/#outputs","title":"Outputs","text":""},{"location":"control/trajectory_follower_node/#parameter","title":"Parameter","text":""},{"location":"control/trajectory_follower_node/#debugging","title":"Debugging","text":"

Debug information is published by the lateral and longitudinal controllers using tier4_debug_msgs/Float32MultiArrayStamped messages.

A configuration file for PlotJuggler is provided in the config folder which, when loaded, automatically subscribes to and visualizes information useful for debugging.

In addition, the predicted MPC trajectory is published on the topic output/lateral/predicted_trajectory and can be visualized in RViz.

"},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/","title":"Simple Trajectory Follower","text":""},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/#purpose","title":"Purpose","text":"

Provide a base trajectory follower code that is simple and flexible to use. This node calculates the control command based on a reference trajectory and the ego vehicle kinematics.

"},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/#design","title":"Design","text":""},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/#inputs-outputs","title":"Inputs / Outputs","text":"

Inputs

"},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/#parameters","title":"Parameters","text":"Name Type Description Default value use_external_target_vel bool use external target velocity defined by parameter when true, else follow the velocity on target trajectory points. false external_target_vel float target velocity used when use_external_target_vel is true. 0.0 lateral_deviation float target lateral deviation when following. 0.0"},{"location":"control/vehicle_cmd_gate/","title":"vehicle_cmd_gate","text":""},{"location":"control/vehicle_cmd_gate/#purpose","title":"Purpose","text":"

vehicle_cmd_gate is the package that receives information from the emergency handler, the planning module, and external controllers, and sends messages to the vehicle.

"},{"location":"control/vehicle_cmd_gate/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/vehicle_cmd_gate/#input","title":"Input","text":"Name Type Description ~/input/steering autoware_auto_vehicle_msgs::msg::SteeringReport steering status ~/input/auto/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand command for lateral and longitudinal velocity from planning module ~/input/auto/turn_indicators_cmd autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand turn indicators command from planning module ~/input/auto/hazard_lights_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand hazard lights command from planning module ~/input/auto/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand gear command from planning module ~/input/external/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand command for lateral and longitudinal velocity from external ~/input/external/turn_indicators_cmd autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand turn indicators command from external ~/input/external/hazard_lights_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand hazard lights command from external ~/input/external/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand gear command from external ~/input/external_emergency_stop_heartbeat tier4_external_api_msgs::msg::Heartbeat heartbeat ~/input/gate_mode tier4_control_msgs::msg::GateMode gate mode (AUTO or EXTERNAL) ~/input/emergency/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand command for lateral and longitudinal velocity from emergency handler ~/input/emergency/hazard_lights_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand hazard lights command from emergency handler ~/input/emergency/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand gear command from emergency handler ~/input/engage autoware_auto_vehicle_msgs::msg::Engage engage signal ~/input/operation_mode autoware_adapi_v1_msgs::msg::OperationModeState operation mode of Autoware"},{"location":"control/vehicle_cmd_gate/#output","title":"Output","text":"Name Type Description ~/output/vehicle_cmd_emergency autoware_auto_system_msgs::msg::EmergencyState emergency state which was originally in vehicle command ~/output/command/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand command for lateral and longitudinal velocity to vehicle ~/output/command/turn_indicators_cmd autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand turn indicators command to vehicle ~/output/command/hazard_lights_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand hazard lights command to vehicle ~/output/command/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand gear command to vehicle ~/output/gate_mode tier4_control_msgs::msg::GateMode gate mode (AUTO or EXTERNAL) ~/output/engage autoware_auto_vehicle_msgs::msg::Engage engage signal ~/output/external_emergency tier4_external_api_msgs::msg::Emergency external emergency signal ~/output/operation_mode tier4_system_msgs::msg::OperationMode current operation mode of the vehicle_cmd_gate"},{"location":"control/vehicle_cmd_gate/#parameters","title":"Parameters","text":"Parameter Type Description update_period double update period use_emergency_handling bool true when emergency handler is used check_external_emergency_heartbeat bool true when checking heartbeat for emergency stop system_emergency_heartbeat_timeout double timeout for system emergency external_emergency_stop_heartbeat_timeout double timeout for external emergency 
stop_hold_acceleration double longitudinal acceleration cmd when the vehicle should stop emergency_acceleration double longitudinal acceleration cmd when the vehicle stops due to an emergency nominal.vel_lim double limit of longitudinal velocity (activated in AUTONOMOUS operation mode) nominal.lon_acc_lim double limit of longitudinal acceleration (activated in AUTONOMOUS operation mode) nominal.lon_jerk_lim double limit of longitudinal jerk (activated in AUTONOMOUS operation mode) nominal.lat_acc_lim double limit of lateral acceleration (activated in AUTONOMOUS operation mode) nominal.lat_jerk_lim double limit of lateral jerk (activated in AUTONOMOUS operation mode) on_transition.vel_lim double limit of longitudinal velocity (activated in TRANSITION operation mode) on_transition.lon_acc_lim double limit of longitudinal acceleration (activated in TRANSITION operation mode) on_transition.lon_jerk_lim double limit of longitudinal jerk (activated in TRANSITION operation mode) on_transition.lat_acc_lim double limit of lateral acceleration (activated in TRANSITION operation mode) on_transition.lat_jerk_lim double limit of lateral jerk (activated in TRANSITION operation mode)"},{"location":"control/vehicle_cmd_gate/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The parameter check_external_emergency_heartbeat (true by default) enables an emergency stop request from external modules. This feature requires a ~/input/external_emergency_stop_heartbeat topic for health monitoring of the external module, and the vehicle_cmd_gate module will not start without the topic. The check_external_emergency_heartbeat parameter must be false when the \"external emergency stop\" function is not used.

"},{"location":"evaluator/diagnostic_converter/","title":"Planning Evaluator","text":""},{"location":"evaluator/diagnostic_converter/#purpose","title":"Purpose","text":"

This package provides a node to convert diagnostic_msgs::msg::DiagnosticArray messages into tier4_simulation_msgs::msg::UserDefinedValue messages.

"},{"location":"evaluator/diagnostic_converter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The node subscribes to all topics listed in the parameters and assumes they publish DiagnosticArray messages. Each time such a message is received, it is converted into as many UserDefinedValue messages as there are KeyValue objects. The format of the output topics is detailed in the output section.
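
The conversion loop can be sketched as follows. The getPublisher() helper and the UserDefinedValue field accesses are assumptions for illustration, not the node's actual implementation.

// Illustrative only: one UserDefinedValue per KeyValue of each DiagnosticStatus.\nvoid onDiagnostic(const diagnostic_msgs::msg::DiagnosticArray & msg)\n{\nfor (const auto & status : msg.status) {\nfor (const auto & key_value : status.values) {\ntier4_simulation_msgs::msg::UserDefinedValue out;\nout.value = key_value.value; // values are assumed to represent doubles\n// publish on a topic derived from the input topic, status.name and key_value.key\ngetPublisher(status.name, key_value.key)->publish(out);\n}\n}\n}\n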

"},{"location":"evaluator/diagnostic_converter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"evaluator/diagnostic_converter/#inputs","title":"Inputs","text":"

The node listens to DiagnosticArray messages on the topics specified in the parameters.

"},{"location":"evaluator/diagnostic_converter/#outputs","title":"Outputs","text":"

The node outputs UserDefinedValue messages that are converted from the received DiagnosticArray.

The names of the output topics are generated from the corresponding input topic, the name of the diagnostic status, and the key of the diagnostic. For example, we might listen to the topic /diagnostic_topic and receive a DiagnosticArray with 2 statuses:

The resulting topics to publish the UserDefinedValue are as follows:

"},{"location":"evaluator/diagnostic_converter/#parameters","title":"Parameters","text":"Name Type Description diagnostic_topics list of string list of DiagnosticArray topics to convert to UserDefinedValue"},{"location":"evaluator/diagnostic_converter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Values in the KeyValue objects of a DiagnosticStatus are assumed to be of type double.

"},{"location":"evaluator/diagnostic_converter/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"evaluator/kinematic_evaluator/","title":"Kinematic Evaluator","text":"

TBD

"},{"location":"evaluator/localization_evaluator/","title":"Localization Evaluator","text":"

TBD

"},{"location":"evaluator/planning_evaluator/","title":"Planning Evaluator","text":""},{"location":"evaluator/planning_evaluator/#purpose","title":"Purpose","text":"

This package provides nodes that generate metrics to evaluate the quality of planning and control.

"},{"location":"evaluator/planning_evaluator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The evaluation node calculates metrics each time it receives a trajectory T(0). Metrics are calculated using the following information:

This information is maintained by an instance of the MetricsCalculator class, which is also responsible for calculating the metrics.

"},{"location":"evaluator/planning_evaluator/#stat","title":"Stat","text":"

Each metric is calculated using a Stat instance which contains the minimum, maximum, and mean values calculated for the metric as well as the number of values measured.
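
A minimal sketch of such an accumulator is shown below; it is illustrative only and not the package's actual Stat implementation.

#include <algorithm>\n#include <limits>\n\n// Illustrative only: accumulate min / max / mean and the number of values.\nclass Stat\n{\npublic:\nvoid add(const double value)\n{\nmin_ = std::min(min_, value);\nmax_ = std::max(max_, value);\nmean_ = (mean_ * count_ + value) / (count_ + 1); // running mean\n++count_;\n}\ndouble min() const { return min_; }\ndouble max() const { return max_; }\ndouble mean() const { return mean_; }\nlong count() const { return count_; }\nprivate:\ndouble min_{std::numeric_limits<double>::max()};\ndouble max_{std::numeric_limits<double>::lowest()};\ndouble mean_{0.0};\nlong count_{0};\n};\n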

"},{"location":"evaluator/planning_evaluator/#metric-calculation-and-adding-more-metrics","title":"Metric calculation and adding more metrics","text":"

All possible metrics are defined in the Metric enumeration defined in include/planning_evaluator/metrics/metric.hpp. This file also defines conversions from/to string as well as human-readable descriptions to be used as the header of the output file.

The MetricsCalculator is responsible for calculating metric statistics through calls to the following function:

Stat<double> MetricsCalculator::calculate(const Metric metric, const Trajectory & traj) const;\n

Adding a new metric M requires the following steps:

"},{"location":"evaluator/planning_evaluator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"evaluator/planning_evaluator/#inputs","title":"Inputs","text":"Name Type Description ~/input/trajectory autoware_auto_planning_msgs::msg::Trajectory Main trajectory to evaluate ~/input/reference_trajectory autoware_auto_planning_msgs::msg::Trajectory Reference trajectory to use for deviation metrics ~/input/objects autoware_auto_perception_msgs::msg::PredictedObjects Obstacles"},{"location":"evaluator/planning_evaluator/#outputs","title":"Outputs","text":"

Each metric is published on a topic named after the metric name.

Name Type Description ~/metrics diagnostic_msgs::msg::DiagnosticArray DiagnosticArray with a DiagnosticStatus for each metric

When shut down, the evaluation node writes the values of the metrics measured during its lifetime to a file as specified by the output_file parameter.

"},{"location":"evaluator/planning_evaluator/#parameters","title":"Parameters","text":"Name Type Description output_file string file used to write metrics ego_frame string frame used for the ego pose selected_metrics List metrics to measure and publish trajectory.min_point_dist_m double minimum distance between two successive points to use for angle calculation trajectory.lookahead.max_dist_m double maximum distance from ego along the trajectory to use for calculation trajectory.lookahead.max_time_m double maximum time ahead of ego along the trajectory to use for calculation obstacle.dist_thr_m double distance between ego and the obstacle below which a collision is considered"},{"location":"evaluator/planning_evaluator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

There is a strong assumption that when receiving a trajectory T(0), it has been generated using the last received reference trajectory and objects. This can be wrong if a new reference trajectory or objects are published while T(0) is being calculated.

Precision is currently limited by the resolution of the trajectories. It is possible to interpolate the trajectory and reference trajectory to increase precision, but doing so would make the computation significantly more expensive.

"},{"location":"evaluator/planning_evaluator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"launch/tier4_autoware_api_launch/","title":"tier4_autoware_api_launch","text":""},{"location":"launch/tier4_autoware_api_launch/#description","title":"Description","text":"

This package contains launch files that run nodes to convert Autoware internal topics into the consistent API used by external software (e.g., fleet management systems, simulators).

"},{"location":"launch/tier4_autoware_api_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_autoware_api_launch/#usage","title":"Usage","text":"

You can include autoware_api.launch.xml in your *.launch.xml as follows.

  <include file=\"$(find-pkg-share tier4_autoware_api_launch)/launch/autoware_api.launch.xml\"/>\n
"},{"location":"launch/tier4_autoware_api_launch/#notes","title":"Notes","text":"

To reduce the processing load, we use the Component feature of ROS 2 (similar to Nodelet in ROS 1).

"},{"location":"launch/tier4_control_launch/","title":"tier4_control_launch","text":""},{"location":"launch/tier4_control_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_control_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_control_launch/#usage","title":"Usage","text":"

You can include control.launch.py in your *.launch.xml as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of control.launch.py.

<include file=\"$(find-pkg-share tier4_control_launch)/launch/control.launch.py\">\n<!-- options for lateral_controller_mode: mpc_follower, pure_pursuit -->\n<!-- Parameter files -->\n<arg name=\"FOO_NODE_param_path\" value=\"...\"/>\n<arg name=\"BAR_NODE_param_path\" value=\"...\"/>\n...\n  <arg name=\"lateral_controller_mode\" value=\"mpc_follower\" />\n</include>\n
"},{"location":"launch/tier4_control_launch/#notes","title":"Notes","text":"

To reduce the processing load, we use the Component feature of ROS 2 (similar to Nodelet in ROS 1).

"},{"location":"launch/tier4_localization_launch/","title":"tier4_localization_launch","text":""},{"location":"launch/tier4_localization_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_localization_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_localization_launch/#usage","title":"Usage","text":"

Include localization.launch.xml in other launch files as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of localization.launch.xml.

  <include file=\"$(find-pkg-share tier4_localization_launch)/launch/localization.launch.xml\">\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n  </include>\n
"},{"location":"launch/tier4_map_launch/","title":"tier4_map_launch","text":""},{"location":"launch/tier4_map_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_map_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_map_launch/#usage","title":"Usage","text":"

You can include map.launch.py in your *.launch.xml as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of map.launch.py.

<arg name=\"map_path\" description=\"point cloud and lanelet2 map directory path\"/>\n<arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n<arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n\n<include file=\"$(find-pkg-share tier4_map_launch)/launch/map.launch.py\">\n<arg name=\"lanelet2_map_path\" value=\"$(var map_path)/$(var lanelet2_map_file)\" />\n<arg name=\"pointcloud_map_path\" value=\"$(var map_path)/$(var pointcloud_map_file)\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n</include>\n
"},{"location":"launch/tier4_map_launch/#notes","title":"Notes","text":"

To reduce the processing load, we use the Component feature of ROS 2 (similar to Nodelet in ROS 1).

"},{"location":"launch/tier4_perception_launch/","title":"tier4_perception_launch","text":""},{"location":"launch/tier4_perception_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_perception_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_perception_launch/#usage","title":"Usage","text":"

You can include perception.launch.xml in your *.launch.xml as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of perception.launch.xml.

  <include file=\"$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml\">\n<!-- options for mode: camera_lidar_fusion, lidar, camera -->\n<arg name=\"mode\" value=\"lidar\" />\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n  </include>\n
"},{"location":"launch/tier4_planning_launch/","title":"tier4_planning_launch","text":""},{"location":"launch/tier4_planning_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_planning_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_planning_launch/#usage","title":"Usage","text":"

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of planning.launch.xml.

<include file=\"$(find-pkg-share tier4_planning_launch)/launch/planning.launch.xml\">\n<!-- Parameter files -->\n<arg name=\"FOO_NODE_param_path\" value=\"...\"/>\n<arg name=\"BAR_NODE_param_path\" value=\"...\"/>\n...\n</include>\n
"},{"location":"launch/tier4_sensing_launch/","title":"tier4_sensing_launch","text":""},{"location":"launch/tier4_sensing_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_sensing_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_sensing_launch/#usage","title":"Usage","text":"

You can include sensing.launch.xml in your *.launch.xml as follows.

  <include file=\"$(find-pkg-share tier4_sensing_launch)/launch/sensing.launch.xml\">\n<arg name=\"launch_driver\" value=\"true\"/>\n<arg name=\"sensor_model\" value=\"$(var sensor_model)\"/>\n<arg name=\"vehicle_param_file\" value=\"$(find-pkg-share $(var vehicle_model)_description)/config/vehicle_info.param.yaml\"/>\n<arg name=\"vehicle_mirror_param_file\" value=\"$(find-pkg-share $(var vehicle_model)_description)/config/mirror.param.yaml\"/>\n</include>\n
"},{"location":"launch/tier4_sensing_launch/#launch-directory-structure","title":"Launch Directory Structure","text":"

This package finds the sensor settings of the specified sensor model in its launch directory.

launch/\n\u251c\u2500\u2500 aip_x1 # Sensor model name\n\u2502   \u251c\u2500\u2500 camera.launch.xml # Camera\n\u2502   \u251c\u2500\u2500 gnss.launch.xml # GNSS\n\u2502   \u251c\u2500\u2500 imu.launch.xml # IMU\n\u2502   \u251c\u2500\u2500 lidar.launch.xml # LiDAR\n\u2502   \u2514\u2500\u2500 pointcloud_preprocessor.launch.py # for preprocessing pointcloud\n...\n
"},{"location":"launch/tier4_sensing_launch/#notes","title":"Notes","text":"

This package finds the setting files using variables.

ex.)

<include file=\"$(find-pkg-share tier4_sensing_launch)/launch/$(var sensor_model)/lidar.launch.xml\">\n
"},{"location":"launch/tier4_simulator_launch/","title":"tier4_simulator_launch","text":""},{"location":"launch/tier4_simulator_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_simulator_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_simulator_launch/#usage","title":"Usage","text":"
  <include file=\"$(find-pkg-share tier4_simulator_launch)/launch/simulator.launch.xml\">\n<arg name=\"vehicle_info_param_file\" value=\"VEHICLE_INFO_PARAM_FILE\" />\n<arg name=\"vehicle_model\" value=\"VEHICLE_MODEL\"/>\n</include>\n

The simulator model used in simple_planning_simulator is loaded from \"config/simulator_model.param.yaml\" in the \"VEHICLE_MODEL_description\" package.

"},{"location":"launch/tier4_system_launch/","title":"tier4_system_launch","text":""},{"location":"launch/tier4_system_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_system_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_system_launch/#usage","title":"Usage","text":"

Note that you should provide parameter paths as PACKAGE_param_path arguments. The list of required parameter paths is written at the top of system.launch.xml.

  <include file=\"$(find-pkg-share tier4_system_launch)/launch/system.launch.xml\">\n<arg name=\"run_mode\" value=\"online\"/>\n<arg name=\"sensor_model\" value=\"SENSOR_MODEL\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n  </include>\n

The sensing configuration parameters used in system_error_monitor are loaded from \"config/diagnostic_aggregator/sensor_kit.param.yaml\" in the \"SENSOR_MODEL_description\" package.

"},{"location":"launch/tier4_vehicle_launch/","title":"tier4_vehicle_launch","text":""},{"location":"launch/tier4_vehicle_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_vehicle_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_vehicle_launch/#usage","title":"Usage","text":"

You can use vehicle.launch.xml by including it in your *.launch.xml as follows.

  <arg name=\"vehicle_model\" default=\"sample_vehicle\" description=\"vehicle model name\"/>\n<arg name=\"sensor_model\" default=\"sample_sensor_kit\" description=\"sensor model name\"/>\n\n<include file=\"$(find-pkg-share tier4_vehicle_launch)/launch/vehicle.launch.xml\">\n<arg name=\"vehicle_model\" value=\"$(var vehicle_model)\"/>\n<arg name=\"sensor_model\" value=\"$(var sensor_model)\"/>\n</include>\n
"},{"location":"launch/tier4_vehicle_launch/#notes","title":"Notes","text":"

This package locates external packages and setting files by resolving launch variables into package names.

ex.)

<let name=\"vehicle_model_pkg\" value=\"$(find-pkg-share $(var vehicle_model)_description)\"/>\n
<arg name=\"config_dir\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)\"/>\n
"},{"location":"launch/tier4_vehicle_launch/#vehiclexacro","title":"vehicle.xacro","text":""},{"location":"launch/tier4_vehicle_launch/#arguments","title":"Arguments","text":"Name Type Description Default sensor_model String sensor model name \"\" vehicle_model String vehicle model name \"\""},{"location":"launch/tier4_vehicle_launch/#usage_1","title":"Usage","text":"

You can write as follows in *.launch.xml.

  <arg name=\"vehicle_model\" default=\"sample_vehicle\" description=\"vehicle model name\"/>\n<arg name=\"sensor_model\" default=\"sample_sensor_kit\" description=\"sensor model name\"/>\n<arg name=\"model\" default=\"$(find-pkg-share tier4_vehicle_launch)/urdf/vehicle.xacro\"/>\n\n<node name=\"robot_state_publisher\" pkg=\"robot_state_publisher\" exec=\"robot_state_publisher\">\n<param name=\"robot_description\" value=\"$(command 'xacro $(var model) vehicle_model:=$(var vehicle_model) sensor_model:=$(var sensor_model)')\"/>\n</node>\n
"},{"location":"localization/ekf_localizer/","title":"Overview","text":"

The Extended Kalman Filter Localizer estimates a robust and less noisy robot pose and twist by integrating the 2D vehicle dynamics model with incoming ego-pose and ego-twist messages. The algorithm is designed especially for fast-moving robots such as autonomous vehicles.

"},{"location":"localization/ekf_localizer/#flowchart","title":"Flowchart","text":"

The overall flowchart of the ekf_localizer is described below.

"},{"location":"localization/ekf_localizer/#features","title":"Features","text":"

This package includes the following features:

"},{"location":"localization/ekf_localizer/#launch","title":"Launch","text":"

The ekf_localizer starts with the default parameters when launched with the following command.

ros2 launch ekf_localizer ekf_localizer.launch.xml\n

The parameters and input topic names can be set in the ekf_localizer.launch.xml file.

"},{"location":"localization/ekf_localizer/#node","title":"Node","text":""},{"location":"localization/ekf_localizer/#subscribed-topics","title":"Subscribed Topics","text":" "},{"location":"localization/ekf_localizer/#published-topics","title":"Published Topics","text":" "},{"location":"localization/ekf_localizer/#published-tf","title":"Published TF","text":""},{"location":"localization/ekf_localizer/#functions","title":"Functions","text":""},{"location":"localization/ekf_localizer/#predict","title":"Predict","text":"

The current robot state is predicted from previously estimated data using a given prediction model. This calculation is called at a constant interval (predict_frequency [Hz]). The prediction equation is described at the end of this page.
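
As a minimal sketch (assuming the state vector [x, y, yaw, yaw_bias, vx, wz] from the kinematics model described later on this page; names are illustrative), the prediction step looks like this:

import math\nimport numpy as np\n\ndef predict_state(x, dt):\n    # x = [px, py, yaw, yaw_bias, vx, wz]\n    px, py, yaw, b, vx, wz = x\n    return np.array([\n        px + vx * math.cos(yaw + b) * dt,  # advance along the bias-corrected heading\n        py + vx * math.sin(yaw + b) * dt,\n        yaw + wz * dt,  # integrate the yaw rate\n        b,   # yaw bias is modeled as constant\n        vx,  # velocity and yaw rate are held constant between steps\n        wz,\n    ])\n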

"},{"location":"localization/ekf_localizer/#measurement-update","title":"Measurement Update","text":"

Before the update, the Mahalanobis distance between the measured input and the predicted state is calculated; the measurement update is not performed for inputs whose Mahalanobis distance exceeds the given threshold.

The predicted state is updated with the latest measured inputs, measured_pose, and measured_twist. The updates are performed with the same frequency as prediction, usually at a high frequency, in order to enable smooth state estimation.
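
A minimal sketch of the gating check, assuming a measurement vector, its predicted counterpart, and the associated covariance (the threshold corresponds to pose_gate_dist / twist_gate_dist in the parameter tables below):

import numpy as np\n\ndef passes_mahalanobis_gate(gate_dist, measured, predicted, cov):\n    # Mahalanobis distance between the measured input and the predicted state\n    d = measured - predicted\n    mahalanobis = float(np.sqrt(d.T @ np.linalg.inv(cov) @ d))\n    # The measurement update is skipped when the distance exceeds the threshold\n    return mahalanobis <= gate_dist\n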

"},{"location":"localization/ekf_localizer/#parameter-description","title":"Parameter description","text":"

The parameters are set in launch/ekf_localizer.launch.xml.

"},{"location":"localization/ekf_localizer/#for-node","title":"For Node","text":"Name Type Description Default value show_debug_info bool Flag to display debug info false predict_frequency double Frequency for filtering and publishing [Hz] 50.0 tf_rate double Frequency for tf broadcasting [Hz] 10.0 extend_state_step int Max delay step which can be dealt with in EKF. Large number increases computational cost. 50 enable_yaw_bias_estimation bool Flag to enable yaw bias estimation true"},{"location":"localization/ekf_localizer/#for-pose-measurement","title":"For pose measurement","text":"Name Type Description Default value pose_additional_delay double Additional delay time for pose measurement [s] 0.0 pose_measure_uncertainty_time double Measured time uncertainty used for covariance calculation [s] 0.01 pose_smoothing_steps int A value for smoothing steps 5 pose_gate_dist double Limit of Mahalanobis distance used for outliers detection 10000.0"},{"location":"localization/ekf_localizer/#for-twist-measurement","title":"For twist measurement","text":"Name Type Description Default value twist_additional_delay double Additional delay time for twist [s] 0.0 twist_smoothing_steps int A value for smoothing steps 2 twist_gate_dist double Limit of Mahalanobis distance used for outliers detection 10000.0"},{"location":"localization/ekf_localizer/#for-process-noise","title":"For process noise","text":"Name Type Description Default value proc_stddev_vx_c double Standard deviation of process noise in time differentiation expression of linear velocity x, noise for d_vx = 0 2.0 proc_stddev_wz_c double Standard deviation of process noise in time differentiation expression of angular velocity z, noise for d_wz = 0 0.2 proc_stddev_yaw_c double Standard deviation of process noise in time differentiation expression of yaw, noise for d_yaw = omega 0.005 proc_stddev_yaw_bias_c double Standard deviation of process noise in time differentiation expression of yaw_bias, noise for d_yaw_bias = 0 0.001

Note: the process noise for positions x and y is calculated automatically from the nonlinear dynamics.

"},{"location":"localization/ekf_localizer/#how-to-tune-ekf-parameters","title":"How to tune EKF parameters","text":""},{"location":"localization/ekf_localizer/#0-preliminaries","title":"0. Preliminaries","text":""},{"location":"localization/ekf_localizer/#1-tune-sensor-parameters","title":"1. Tune sensor parameters","text":"

Set the standard deviation for each sensor. The pose_measure_uncertainty_time accounts for the uncertainty of the header timestamp data. You can also tune the number of smoothing steps for each observed sensor input via the *_smoothing_steps parameters. Increasing the number improves the smoothness of the estimation, but may have an adverse effect on the estimation performance.

"},{"location":"localization/ekf_localizer/#2-tune-process-model-parameters","title":"2. Tune process model parameters","text":""},{"location":"localization/ekf_localizer/#kalman-filter-model","title":"Kalman Filter Model","text":""},{"location":"localization/ekf_localizer/#kinematics-model-in-update-function","title":"kinematics model in update function","text":"

where b_k is the yaw bias.

"},{"location":"localization/ekf_localizer/#time-delay-model","title":"time delay model","text":"

The measurement time delay is handled by an augmented state [1] (see Section 7.3, Fixed-Lag Smoothing).

Note that although the state dimension gets larger, the computational complexity does not change significantly, since an analytical expansion can be applied based on the specific structure of the augmented state.
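
A minimal sketch of how a delayed measurement is matched to the state history, assuming one stored state snapshot per prediction step (extend_state_step in the node parameter table above bounds the history length; names are illustrative):

def delay_to_step(measured_stamp, now, additional_delay, dt, extend_state_step):\n    # Convert the measurement delay into a number of prediction steps and\n    # update against the state snapshot from that many steps ago.\n    delay = (now - measured_stamp) + additional_delay\n    step = int(round(delay / dt))\n    if step >= extend_state_step:\n        return None  # too old for the augmented state history\n    return step\n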

"},{"location":"localization/ekf_localizer/#test-result-with-autoware-ndt","title":"Test Result with Autoware NDT","text":""},{"location":"localization/ekf_localizer/#known-issues","title":"Known issues","text":""},{"location":"localization/ekf_localizer/#reference","title":"reference","text":"

[1] Anderson, B. D. O., & Moore, J. B. (1979). Optimal filtering. Englewood Cliffs, NJ: Prentice-Hall.

"},{"location":"localization/gyro_odometer/","title":"gyro_odometer","text":""},{"location":"localization/gyro_odometer/#purpose","title":"Purpose","text":"

gyro_odometer is a package that estimates twist by combining the angular velocity from an IMU with the vehicle speed.
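
A minimal sketch of the combination, assuming the linear velocity comes from the vehicle speed source and the angular velocity from the IMU gyroscope (already expressed in the base frame):

def combine_twist(vehicle_vx, gyro_wx, gyro_wy, gyro_wz):\n    # Linear velocity from the vehicle, angular velocity from the IMU\n    linear = (vehicle_vx, 0.0, 0.0)\n    angular = (gyro_wx, gyro_wy, gyro_wz)\n    return linear, angular\n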

"},{"location":"localization/gyro_odometer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/gyro_odometer/#input","title":"Input","text":"Name Type Description vehicle/twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped twist with covariance from vehicle imu sensor_msgs::msg::Imu imu from sensor"},{"location":"localization/gyro_odometer/#output","title":"Output","text":"Name Type Description twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped estimated twist with covariance"},{"location":"localization/gyro_odometer/#parameters","title":"Parameters","text":"Parameter Type Description output_frame String output's frame id message_timeout_sec Double delay tolerance time for message"},{"location":"localization/gyro_odometer/#assumptions-known-limits","title":"Assumptions / Known limits","text":" "},{"location":"localization/initial_pose_button_panel/","title":"initial_pose_button_panel","text":""},{"location":"localization/initial_pose_button_panel/#role","title":"Role","text":"

initial_pose_button_panel is a package that sends a request to the localization module to calculate the current ego pose.

Implemented as an RViz plugin, it starts the calculation of the current ego pose when the button is pushed. The button appears at the bottom right of RViz.

"},{"location":"localization/initial_pose_button_panel/#input-output","title":"Input / Output","text":""},{"location":"localization/initial_pose_button_panel/#input-topics","title":"Input topics","text":"Name Type Description /sensing/gnss/pose_with_covariance (default) geometry_msgs::msg::PoseWithCovarianceStamped initial pose with covariance to calculate the current ego pose"},{"location":"localization/localization_error_monitor/","title":"localization_error_monitor","text":""},{"location":"localization/localization_error_monitor/#purpose","title":"Purpose","text":"

localization_error_monitor is a package for diagnosing localization errors by monitoring the uncertainty of the localization results. The package monitors the following two values:

"},{"location":"localization/localization_error_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/localization_error_monitor/#input","title":"Input","text":"Name Type Description input/pose_with_cov geometry_msgs::msg::PoseWithCovarianceStamped localization result"},{"location":"localization/localization_error_monitor/#output","title":"Output","text":"Name Type Description debug/ellipse_marker visualization_msgs::msg::Marker ellipse marker diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics outputs"},{"location":"localization/localization_error_monitor/#parameters","title":"Parameters","text":"Name Type Description scale double scale factor for monitored values (default: 3.0) error_ellipse_size double error threshold for long radius of confidence ellipse [m] (default: 1.0) warn_ellipse_size double warning threshold for long radius of confidence ellipse [m] (default: 0.8) error_ellipse_size_lateral_direction double error threshold for size of confidence ellipse along lateral direction [m] (default: 0.3) warn_ellipse_size_lateral_direction double warning threshold for size of confidence ellipse along lateral direction [m] (default: 0.2)"},{"location":"localization/ndt_scan_matcher/","title":"ndt_scan_matcher","text":""},{"location":"localization/ndt_scan_matcher/#purpose","title":"Purpose","text":"

ndt_scan_matcher is a package for position estimation using the NDT scan matching method.

There are two main functions in this package:

One optional function is regularization; it is disabled by default. Please see the Regularization section below for details.

"},{"location":"localization/ndt_scan_matcher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/ndt_scan_matcher/#input","title":"Input","text":"Name Type Description ekf_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped initial pose pointcloud_map sensor_msgs::msg::PointCloud2 map pointcloud points_raw sensor_msgs::msg::PointCloud2 sensor pointcloud sensing/gnss/pose_with_covariance sensor_msgs::msg::PoseWithCovarianceStamped base position for regularization term

sensing/gnss/pose_with_covariance is required only when regularization is enabled.

"},{"location":"localization/ndt_scan_matcher/#output","title":"Output","text":"Name Type Description ndt_pose geometry_msgs::msg::PoseStamped estimated pose ndt_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped estimated pose with covariance /diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics points_aligned sensor_msgs::msg::PointCloud2 [debug topic] pointcloud aligned by scan matching points_aligned_no_ground sensor_msgs::msg::PointCloud2 [debug topic] de-grounded pointcloud aligned by scan matching initial_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped [debug topic] initial pose used in scan matching exe_time_ms tier4_debug_msgs::msg::Float32Stamped [debug topic] execution time for scan matching [ms] transform_probability tier4_debug_msgs::msg::Float32Stamped [debug topic] score of scan matching no_ground_transform_probability tier4_debug_msgs::msg::Float32Stamped [debug topic] score of scan matching based on de-grounded LiDAR scan iteration_num tier4_debug_msgs::msg::Int32Stamped [debug topic] number of scan matching iterations initial_to_result_distance tier4_debug_msgs::msg::Float32Stamped [debug topic] distance difference between the initial point and the convergence point [m] initial_to_result_distance_old tier4_debug_msgs::msg::Float32Stamped [debug topic] distance difference between the older of the two initial points used in linear interpolation and the convergence point [m] initial_to_result_distance_new tier4_debug_msgs::msg::Float32Stamped [debug topic] distance difference between the newer of the two initial points used in linear interpolation and the convergence point [m] ndt_marker visualization_msgs::msg::MarkerArray [debug topic] markers for debugging monte_carlo_initial_pose_marker visualization_msgs::msg::MarkerArray [debug topic] particles used in initial position estimation"},{"location":"localization/ndt_scan_matcher/#service","title":"Service","text":"Name Type Description ndt_align_srv autoware_localization_srvs::srv::PoseWithCovarianceStamped service to estimate initial pose"},{"location":"localization/ndt_scan_matcher/#parameters","title":"Parameters","text":""},{"location":"localization/ndt_scan_matcher/#core-parameters","title":"Core Parameters","text":"Name Type Description base_frame string Vehicle reference frame input_sensor_points_queue_size int Subscriber queue size trans_epsilon double The maximum difference between two consecutive transformations in order to consider convergence step_size double The newton line search maximum step length resolution double The ND voxel grid resolution [m] max_iterations int The number of iterations required to calculate alignment converged_param_type int The type of indicators for scan matching score (0: TP, 1: NVTL) converged_param_transform_probability double Threshold for deciding whether to trust the estimation result num_threads int Number of threads used for parallel computing

(TP: Transform Probability, NVTL: Nearest Voxel Transform Probability)

"},{"location":"localization/ndt_scan_matcher/#regularization","title":"Regularization","text":""},{"location":"localization/ndt_scan_matcher/#abstract","title":"Abstract","text":"

This is a function that adds the regularization term to the NDT optimization problem as follows.

\\begin{align} \\min_{\\mathbf{R},\\mathbf{t}} \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + \\mathrm{scale\\ factor}\\cdot \\left| \\mathbf{R}^\\top (\\mathbf{t}_{\\mathrm{base}}-\\mathbf{t}) \\cdot \\begin{pmatrix} 1\\\\ 0\\\\ 0 \\end{pmatrix} \\right|^2 \\end{align}

, where t_base is the base position measured by GNSS or other means, and NDT(R,t) stands for the pure NDT cost function. The regularization term pulls the optimal solution toward the base position along the longitudinal direction of the vehicle. Only errors in the longitudinal direction with respect to the base position are considered; errors along the Z-axis and the lateral axis are ignored.

Although the regularization term has the rotation as a parameter, the gradient and Hessian associated with it are not computed, in order to stabilize the optimization. Specifically, the gradients are computed as follows.

\\begin{align} &g_x=\\nabla_x \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + 2 \\mathrm{scale\\ factor} \\cos\\theta_z\\cdot e_{\\mathrm{longitudinal}} \\\\ &g_y=\\nabla_y \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + 2 \\mathrm{scale\\ factor} \\sin\\theta_z\\cdot e_{\\mathrm{longitudinal}} \\\\ &g_z=\\nabla_z \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) \\\\ &g_\\mathbf{R}=\\nabla_\\mathbf{R} \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) \\end{align}
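
A sketch of these gradient terms in code, mirroring the formulas above (theta_z is the yaw angle and e_longitudinal the longitudinal error between the base position and the current estimate; names are illustrative):

import math\n\ndef regularized_gradient(grad_ndt_x, grad_ndt_y, grad_ndt_z, theta_z, e_longitudinal, scale_factor):\n    # Only the translational gradient receives the regularization term;\n    # the rotational gradient is left untouched to stabilize the optimization.\n    g_x = grad_ndt_x + 2.0 * scale_factor * math.cos(theta_z) * e_longitudinal\n    g_y = grad_ndt_y + 2.0 * scale_factor * math.sin(theta_z) * e_longitudinal\n    g_z = grad_ndt_z\n    return g_x, g_y, g_z\n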

Regularization is disabled by default. If you wish to use it, please edit the following parameters to enable it.

"},{"location":"localization/ndt_scan_matcher/#where-is-regularization-available","title":"Where is regularization available","text":"

This feature is effective on feature-less roads where GNSS is available, such as

By remapping the base position topic to a source other than GNSS, as described below, the feature can also be used in other environments.

"},{"location":"localization/ndt_scan_matcher/#using-other-base-position","title":"Using other base position","text":"

Other than GNSS, you can provide global position topics obtained from magnetic markers, visual markers, etc., if they are available in your environment. (Currently, Autoware does not provide a node that publishes such a pose.) To use your topic for regularization, remap input_regularization_pose_topic to your topic in ndt_scan_matcher.launch.xml. By default, it is remapped to /sensing/gnss/pose_with_covariance.

"},{"location":"localization/ndt_scan_matcher/#limitations","title":"Limitations","text":"

Since this function determines the base position by linear interpolation from recently subscribed poses, topics published at a low frequency relative to the driving speed cannot be used; inappropriate linear interpolation may lead to poor optimization results.

When using GNSS for the base position, regularization can have negative effects in tunnels, indoors, and near skyscrapers, because if the base position is far off from the true value, NDT scan matching may converge to an inappropriate optimum.

"},{"location":"localization/ndt_scan_matcher/#parameters_1","title":"Parameters","text":"Name Type Description regularization_enabled bool Flag to add regularization term to NDT optimization (FALSE by default) regularization_scale_factor double Coefficient of the regularization term.

Regularization is disabled by default because GNSS is not always accurate enough to provide an appropriate base position in every scene.

If the scale_factor is too large, the NDT will be drawn to the base position and scan matching may fail. Conversely, if it is too small, the regularization benefit will be lost.

Note that setting scale_factor to 0 is equivalent to disabling regularization.

"},{"location":"localization/ndt_scan_matcher/#example","title":"Example","text":"

The following figures show tested maps.

The following figures show the trajectories estimated on the feature-less map with standard NDT and regularization-enabled NDT, respectively. The color of the trajectory indicates the error (meter) from the reference trajectory, which is computed with the feature-rich map.

"},{"location":"localization/ndt_scan_matcher/#dynamic-map-loading","title":"Dynamic map loading","text":"

Autoware supports a dynamic map loading feature for ndt_scan_matcher. Using this feature, NDT dynamically requests the surrounding pointcloud map from pointcloud_map_loader, then receives and preprocesses the map online.

Using this feature, ndt_scan_matcher can theoretically handle maps of any size in terms of memory usage. (Note that limitations may still arise from other factors, e.g., floating-point error.)

"},{"location":"localization/ndt_scan_matcher/#additional-interfaces","title":"Additional interfaces","text":""},{"location":"localization/ndt_scan_matcher/#additional-inputs","title":"Additional inputs","text":"Name Type Description input_ekf_odom nav_msgs::msg::Odometry Vehicle localization results (used for map update decision)"},{"location":"localization/ndt_scan_matcher/#additional-outputs","title":"Additional outputs","text":"Name Type Description debug/loaded_pointcloud_map sensor_msgs::msg::PointCloud2 pointcloud maps used for localization (for debug)"},{"location":"localization/ndt_scan_matcher/#additional-client","title":"Additional client","text":"Name Type Description client_map_loader autoware_map_msgs::srv::GetDifferentialPointCloudMap map loading client"},{"location":"localization/ndt_scan_matcher/#parameters_2","title":"Parameters","text":"Name Type Description use_dynamic_map_loading bool Flag to enable dynamic map loading feature for NDT (TRUE by default) dynamic_map_loading_update_distance double Distance traveled to load new map(s) dynamic_map_loading_map_radius double Map loading radius for every update lidar_radius double LiDAR radius used for localization (only used for diagnosis)"},{"location":"localization/ndt_scan_matcher/#enabling-the-dynamic-map-loading-feature","title":"Enabling the dynamic map loading feature","text":"

To use the dynamic map loading feature for ndt_scan_matcher, you also need to configure some settings outside of this node appropriately. Follow the two instructions below.

  1. Enable the dynamic map loading interface in pointcloud_map_loader (by setting enable_differential_load to true in the package).
  2. Split the PCD files into grids (recommended size: 20 [m] x 20 [m]).

Note that dynamic map loading may FAIL if the map is split into two or more large maps (e.g., 1000 [m] x 1000 [m]). Please provide either of

Here is a split PCD map for sample-map-rosbag from Autoware tutorial: sample-map-rosbag_split.zip

PCD files use_dynamic_map_loading enable_differential_load How NDT loads map(s) single file true true at once (standard) single file true false does NOT work single file false true/false at once (standard) split true true dynamically split true false does NOT work split false true/false at once (standard)"},{"location":"localization/ndt_scan_matcher/#scan-matching-score-based-on-de-grounded-lidar-scan","title":"Scan matching score based on de-grounded LiDAR scan","text":""},{"location":"localization/ndt_scan_matcher/#abstract_1","title":"Abstract","text":"

This function estimates the scan matching score using a de-grounded LiDAR scan (i.e., a scan with the ground points removed). This score can reflect the current localization performance more accurately. See the related issue.

"},{"location":"localization/ndt_scan_matcher/#parameters_3","title":"Parameters","text":"Name Type Description estimate_scores_for_degrounded_scan bool Flag for using scan matching score based on de-grounded LiDAR scan (FALSE by default) z_margin_for_ground_removal double Z-value margin for removal ground points"},{"location":"localization/pose2twist/","title":"pose2twist","text":""},{"location":"localization/pose2twist/#purpose","title":"Purpose","text":"

The pose2twist node calculates velocity from the input pose history. In addition to the computed twist, it outputs the linear-x and angular-z components as float messages to simplify debugging.

The twist.linear.x is calculated as sqrt(dx * dx + dy * dy + dz * dz) / dt, and the values in the y and z fields are zero. The twist.angular is calculated as d_roll / dt, d_pitch / dt and d_yaw / dt for each field.
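
A minimal sketch of this calculation from two consecutive poses (the pose and angle differences are assumed to be precomputed and the angles normalized):

import math\n\ndef pose_to_twist(dx, dy, dz, d_roll, d_pitch, d_yaw, dt):\n    # linear.x is the norm of the translation over dt; linear.y and z stay zero\n    linear = (math.sqrt(dx * dx + dy * dy + dz * dz) / dt, 0.0, 0.0)\n    # each angular component is its angle difference over dt\n    angular = (d_roll / dt, d_pitch / dt, d_yaw / dt)\n    return linear, angular\n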

"},{"location":"localization/pose2twist/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/pose2twist/#input","title":"Input","text":"Name Type Description pose geometry_msgs::msg::PoseStamped pose source to used for the velocity calculation."},{"location":"localization/pose2twist/#output","title":"Output","text":"Name Type Description twist geometry_msgs::msg::TwistStamped twist calculated from the input pose history. linear_x tier4_debug_msgs::msg::Float32Stamped linear-x field of the output twist. angular_z tier4_debug_msgs::msg::Float32Stamped angular-z field of the output twist."},{"location":"localization/pose2twist/#parameters","title":"Parameters","text":"

None.

"},{"location":"localization/pose2twist/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

None.

"},{"location":"localization/pose_initializer/","title":"pose_initializer","text":""},{"location":"localization/pose_initializer/#purpose","title":"Purpose","text":"

The pose_initializer is the package that sends an initial pose to ekf_localizer. It receives a roughly estimated initial pose from GNSS or the user, passes the pose to ndt_scan_matcher, and obtains a refined ego pose from ndt_scan_matcher via a service call. Finally, it publishes the initial pose to ekf_localizer. This node depends on the map height fitter library; see here for more details.

"},{"location":"localization/pose_initializer/#interfaces","title":"Interfaces","text":""},{"location":"localization/pose_initializer/#parameters","title":"Parameters","text":"Name Type Description ekf_enabled bool If true, EKF localizar is activated. ndt_enabled bool If true, the pose will be estimated by NDT scan matcher, otherwise it is passed through. stop_check_enabled bool If true, initialization is accepted only when the vehicle is stopped. stop_check_duration bool The duration used for the stop check above. gnss_enabled bool If true, use the GNSS pose when no pose is specified. gnss_pose_timeout bool The duration that the GNSS pose is valid."},{"location":"localization/pose_initializer/#services","title":"Services","text":"Name Type Description /localization/initialize autoware_adapi_v1_msgs::srv::InitializeLocalization initial pose from api"},{"location":"localization/pose_initializer/#clients","title":"Clients","text":"Name Type Description /localization/pose_estimator/ndt_align_srv tier4_localization_msgs::srv::PoseWithCovarianceStamped pose estimation service"},{"location":"localization/pose_initializer/#subscriptions","title":"Subscriptions","text":"Name Type Description /sensing/gnss/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped pose from gnss /sensing/vehicle_velocity_converter/twist_with_covariance geometry_msgs::msg::TwistStamped twist for stop check"},{"location":"localization/pose_initializer/#publications","title":"Publications","text":"Name Type Description /localization/initialization_state autoware_adapi_v1_msgs::msg::LocalizationInitializationState pose initialization state /initialpose3d geometry_msgs::msg::PoseWithCovarianceStamped calculated initial ego pose"},{"location":"localization/pose_initializer/#connection-with-default-ad-api","title":"Connection with Default AD API","text":"

The pose_initializer is used via the default AD API. For a detailed description of the API, please refer to the description of default_ad_api.

"},{"location":"localization/stop_filter/","title":"stop_filter","text":""},{"location":"localization/stop_filter/#purpose","title":"Purpose","text":"

Before this function existed, each node used a different criterion to determine whether the vehicle was stopped, with the result that some nodes behaved as if the vehicle had stopped while others kept running in drive mode. This node aims to provide a unified stop decision by suppressing small longitudinal and yaw twist values and publishing an explicit stop flag.
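
A minimal sketch of the filtering, using the vx_threshold and wz_threshold parameters from the table below:

def stop_filter(vx, wz, vx_threshold=0.01, wz_threshold=0.01):\n    # Declare the vehicle stopped when both twist components are small,\n    # and suppress the residual noise by zeroing them out.\n    stopped = abs(vx) < vx_threshold and abs(wz) < wz_threshold\n    if stopped:\n        vx, wz = 0.0, 0.0\n    return vx, wz, stopped\n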

"},{"location":"localization/stop_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/stop_filter/#input","title":"Input","text":"Name Type Description input/odom nav_msgs::msg::Odometry localization odometry"},{"location":"localization/stop_filter/#output","title":"Output","text":"Name Type Description output/odom nav_msgs::msg::Odometry odometry with suppressed longitudinal and yaw twist debug/stop_flag tier4_debug_msgs::msg::BoolStamped flag to represent whether the vehicle is stopping or not"},{"location":"localization/stop_filter/#parameters","title":"Parameters","text":"Name Type Description vx_threshold double longitudinal velocity threshold to determine if the vehicle is stopping [m/s] (default: 0.01) wz_threshold double yaw velocity threshold to determine if the vehicle is stopping [rad/s] (default: 0.01)"},{"location":"localization/twist2accel/","title":"twist2accel","text":""},{"location":"localization/twist2accel/#purpose","title":"Purpose","text":"

This package estimates acceleration using the output of ekf_localizer. It uses a lowpass filter to mitigate the noise.
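
A minimal sketch of the estimation: differentiate the velocity, then smooth it with the accel_lowpass_gain parameter (assuming a first-order lowpass of this form):

def update_accel(v_now, v_prev, dt, accel_prev, gain=0.9):\n    # Raw acceleration from finite differences of the velocity\n    a_raw = (v_now - v_prev) / dt\n    # First-order lowpass filter to mitigate the noise\n    return gain * accel_prev + (1.0 - gain) * a_raw\n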

"},{"location":"localization/twist2accel/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/twist2accel/#input","title":"Input","text":"Name Type Description input/odom nav_msgs::msg::Odometry localization odometry input/twist geometry_msgs::msg::TwistWithCovarianceStamped twist"},{"location":"localization/twist2accel/#output","title":"Output","text":"Name Type Description output/accel geometry_msgs::msg::AccelWithCovarianceStamped estimated acceleration"},{"location":"localization/twist2accel/#parameters","title":"Parameters","text":"Name Type Description use_odom bool use odometry if true, else use twist input (default: true) accel_lowpass_gain double lowpass gain for lowpass filter in estimating acceleration (default: 0.9)"},{"location":"localization/twist2accel/#future-work","title":"Future work","text":"

Future work includes integrating acceleration into the EKF state.

"},{"location":"map/map_height_fitter/","title":"map_height_fitter","text":"

This library fits the given point to the ground of the point cloud map. The map loading operation is switched by the enable_partial_load parameter of the node specified by map_loader_name. A node using this library must use a multi-threaded executor.

Interface Local Name Description Parameter map_loader_name The point cloud map loader name. Subscription ~/pointcloud_map The topic name to load the whole map Client ~/partial_map_load The service name to load the partial map"},{"location":"map/map_loader/","title":"map_loader package","text":"

This package provides the features of loading various maps.

"},{"location":"map/map_loader/#pointcloud_map_loader","title":"pointcloud_map_loader","text":""},{"location":"map/map_loader/#feature","title":"Feature","text":"

pointcloud_map_loader provides pointcloud maps to the other Autoware nodes in various configurations. Currently, it supports the following types:

"},{"location":"map/map_loader/#publish-raw-pointcloud-map-ros-2-topic","title":"Publish raw pointcloud map (ROS 2 topic)","text":"

The node publishes the raw pointcloud map loaded from the .pcd file(s).

"},{"location":"map/map_loader/#publish-downsampled-pointcloud-map-ros-2-topic","title":"Publish downsampled pointcloud map (ROS 2 topic)","text":"

The node publishes the downsampled pointcloud map loaded from the .pcd file(s). You can specify the downsample resolution by changing the leaf_size parameter.

"},{"location":"map/map_loader/#send-partial-pointcloud-map-ros-2-service","title":"Send partial pointcloud map (ROS 2 service)","text":"

Here, we assume that the pointcloud maps are divided into grids.

Given a query from a client node, the node sends a set of pointcloud maps that overlaps with the queried area. Please see the description of GetPartialPointCloudMap.srv for details.

"},{"location":"map/map_loader/#send-differential-pointcloud-map-ros-2-service","title":"Send differential pointcloud map (ROS 2 service)","text":"

Here, we assume that the pointcloud maps are divided into grids.

Given a query and set of map IDs, the node sends a set of pointcloud maps that overlap with the queried area and are not included in the set of map IDs. Please see the description of GetDifferentialPointCloudMap.srv for details.
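
A minimal sketch of this selection logic (hypothetical names; each grid is assumed to carry an ID and an axis-aligned bound with an overlaps() helper):

def select_differential_maps(grids, query_area, known_ids):\n    # Return grids that overlap the queried area and are not already\n    # held by the client (i.e. whose IDs are not in known_ids).\n    return [g for g in grids\n            if g.bound.overlaps(query_area) and g.id not in known_ids]\n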

"},{"location":"map/map_loader/#parameters","title":"Parameters","text":"Name Type Description Default value enable_whole_load bool A flag to enable raw pointcloud map publishing true enable_downsampled_whole_load bool A flag to enable downsampled pointcloud map publishing false enable_partial_load bool A flag to enable partial pointcloud map server false enable_differential_load bool A flag to enable differential pointcloud map server false leaf_size float Downsampling leaf size (only used when enable_downsampled_whole_load is set true) 3.0"},{"location":"map/map_loader/#interfaces","title":"Interfaces","text":""},{"location":"map/map_loader/#lanelet2_map_loader","title":"lanelet2_map_loader","text":""},{"location":"map/map_loader/#feature_1","title":"Feature","text":"

lanelet2_map_loader loads a Lanelet2 file and publishes the map data as an autoware_auto_mapping_msgs/HADMapBin message. The node projects lat/lon coordinates into MGRS coordinates.

"},{"location":"map/map_loader/#how-to-run","title":"How to run","text":"

ros2 run map_loader lanelet2_map_loader --ros-args -p lanelet2_map_path:=path/to/map.osm

"},{"location":"map/map_loader/#published-topics","title":"Published Topics","text":""},{"location":"map/map_loader/#lanelet2_map_visualization","title":"lanelet2_map_visualization","text":""},{"location":"map/map_loader/#feature_2","title":"Feature","text":"

lanelet2_map_visualization converts autoware_auto_mapping_msgs/HADMapBin messages into visualization_msgs/MarkerArray messages for visualization.

"},{"location":"map/map_loader/#how-to-run_1","title":"How to Run","text":"

ros2 run map_loader lanelet2_map_visualization

"},{"location":"map/map_loader/#subscribed-topics","title":"Subscribed Topics","text":""},{"location":"map/map_loader/#published-topics_1","title":"Published Topics","text":""},{"location":"map/map_tf_generator/Readme/","title":"map_tf_generator","text":""},{"location":"map/map_tf_generator/Readme/#purpose","title":"Purpose","text":"

The nodes in this package broadcast the viewer frame for visualization of the map in RViz.

Note that no module requires the viewer frame; it is used only for visualization.

The following are the supported methods to calculate the position of the viewer frame:

"},{"location":"map/map_tf_generator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"map/map_tf_generator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"map/map_tf_generator/Readme/#input","title":"Input","text":""},{"location":"map/map_tf_generator/Readme/#pcd_map_tf_generator","title":"pcd_map_tf_generator","text":"Name Type Description /map/pointcloud_map sensor_msgs::msg::PointCloud2 Subscribe pointcloud map to calculate position of viewer frames"},{"location":"map/map_tf_generator/Readme/#vector_map_tf_generator","title":"vector_map_tf_generator","text":"Name Type Description /map/vector_map autoware_auto_mapping_msgs::msg::HADMapBin Subscribe vector map to calculate position of viewer frames"},{"location":"map/map_tf_generator/Readme/#output","title":"Output","text":"Name Type Description /tf_static tf2_msgs/msg/TFMessage Broadcast viewer frames"},{"location":"map/map_tf_generator/Readme/#parameters","title":"Parameters","text":""},{"location":"map/map_tf_generator/Readme/#node-parameters","title":"Node Parameters","text":"

None

"},{"location":"map/map_tf_generator/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation viewer_frame string viewer Name of viewer frame map_frame string map The parent frame name of viewer frame"},{"location":"map/map_tf_generator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"perception/compare_map_segmentation/","title":"compare_map_segmentation","text":""},{"location":"perception/compare_map_segmentation/#purpose","title":"Purpose","text":"

The compare_map_segmentation is a node that filters ground points from the input pointcloud using map information (e.g., PCD, elevation map).

"},{"location":"perception/compare_map_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/compare_map_segmentation/#compare-elevation-map-filter","title":"Compare Elevation Map Filter","text":"

The filter compares the z of the input points with the value of the elevation_map. The height difference is calculated by the binary integration of neighboring cells. Points whose height difference is below height_diff_thresh are removed.
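
A minimal sketch of the point-wise check (elevation_at is a hypothetical lookup into the elevation map; height_diff_thresh is the parameter from the table below):

def filter_ground_points(points, elevation_at, height_diff_thresh=0.15):\n    # Keep points that stand at least height_diff_thresh above the elevation\n    # map; the rest are treated as ground and removed.\n    return [p for p in points\n            if p.z - elevation_at(p.x, p.y) >= height_diff_thresh]\n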

"},{"location":"perception/compare_map_segmentation/#distance-based-compare-map-filter","title":"Distance Based Compare Map Filter","text":"

WIP

"},{"location":"perception/compare_map_segmentation/#voxel-based-approximate-compare-map-filter","title":"Voxel Based Approximate Compare Map Filter","text":"

WIP

"},{"location":"perception/compare_map_segmentation/#voxel-based-compare-map-filter","title":"Voxel Based Compare Map Filter","text":"

WIP

"},{"location":"perception/compare_map_segmentation/#voxel-distance-based-compare-map-filter","title":"Voxel Distance based Compare Map Filter","text":"

WIP

"},{"location":"perception/compare_map_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/compare_map_segmentation/#compare-elevation-map-filter_1","title":"Compare Elevation Map Filter","text":""},{"location":"perception/compare_map_segmentation/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/elevation_map grid_map::msg::GridMap elevation map"},{"location":"perception/compare_map_segmentation/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"perception/compare_map_segmentation/#other-filters","title":"Other Filters","text":""},{"location":"perception/compare_map_segmentation/#input_1","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/map grid_map::msg::GridMap map"},{"location":"perception/compare_map_segmentation/#output_1","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"perception/compare_map_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/compare_map_segmentation/#core-parameters","title":"Core Parameters","text":"Name Type Description Default value map_layer_name string elevation map layer name elevation map_frame float frame_id of the map that is temporarily used before elevation_map is subscribed map height_diff_thresh float Remove points whose height difference is below this value [m] 0.15"},{"location":"perception/compare_map_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/compare_map_segmentation/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/compare_map_segmentation/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/compare_map_segmentation/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/compare_map_segmentation/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/crosswalk_traffic_light_estimator/","title":"crosswalk_traffic_light_estimator","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#purpose","title":"Purpose","text":"

crosswalk_traffic_light_estimator is a module that estimates pedestrian traffic signals from HDMap and detected vehicle traffic signals.

"},{"location":"perception/crosswalk_traffic_light_estimator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#input","title":"Input","text":"Name Type Description ~/input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin vector map ~/input/route autoware_planning_msgs::msg::LaneletRoute route ~/input/classified/traffic_signals autoware_auto_perception_msgs::msg::TrafficSignalArray classified signals"},{"location":"perception/crosswalk_traffic_light_estimator/#output","title":"Output","text":"Name Type Description ~/output/traffic_signals autoware_auto_perception_msgs::msg::TrafficSignalArray output that contains estimated pedestrian traffic signals"},{"location":"perception/crosswalk_traffic_light_estimator/#parameters","title":"Parameters","text":"Name Type Description Default value use_last_detect_color bool If this parameter is true, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is false, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) true"},{"location":"perception/crosswalk_traffic_light_estimator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

If traffic between pedestrians and vehicles is controlled by traffic signals, the crosswalk traffic signal may be estimated as RED in order to prevent pedestrians from crossing when the following conditions are satisfied.
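
A minimal sketch of the resulting core rule, which the situations below and the use_last_detect_color parameter above refine (signal states are illustrative constants):

def estimate_pedestrian_signal(vehicle_signal):\n    # If the vehicle traffic light crossing the crosswalk is GREEN or AMBER,\n    # the pedestrian signal is estimated as RED; otherwise it stays UNKNOWN.\n    if vehicle_signal in (\"GREEN\", \"AMBER\"):\n        return \"RED\"\n    return \"UNKNOWN\"\n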

"},{"location":"perception/crosswalk_traffic_light_estimator/#situation1","title":"Situation1","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#situation2","title":"Situation2","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"perception/detected_object_feature_remover/","title":"detected_object_feature_remover","text":""},{"location":"perception/detected_object_feature_remover/#purpose","title":"Purpose","text":"

The detected_object_feature_remover is a package that converts the topic type from DetectedObjectWithFeatureArray to DetectedObjects.

"},{"location":"perception/detected_object_feature_remover/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/detected_object_feature_remover/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_feature_remover/#input","title":"Input","text":"Name Type Description ~/input tier4_perception_msgs::msg::DetectedObjectWithFeatureArray detected objects with feature field"},{"location":"perception/detected_object_feature_remover/#output","title":"Output","text":"Name Type Description ~/output autoware_auto_perception_msgs::msg::DetectedObjects detected objects"},{"location":"perception/detected_object_feature_remover/#parameters","title":"Parameters","text":"

None

"},{"location":"perception/detected_object_feature_remover/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/detected_object_validation/","title":"detected_object_validation","text":""},{"location":"perception/detected_object_validation/#purpose","title":"Purpose","text":"

The purpose of this package is to eliminate obvious false positives of DetectedObjects.

"},{"location":"perception/detected_object_validation/#referencesexternal-links","title":"References/External links","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/","title":"object_lanelet_filter","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#purpose","title":"Purpose","text":"

The object_lanelet_filter is a node that filters detected objects using the vector map. Only objects inside the vector map will be published.

"},{"location":"perception/detected_object_validation/object-lanelet-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#input","title":"Input","text":"Name Type Description input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin vector map input/object autoware_auto_perception_msgs::msg::DetectedObjects input detected objects"},{"location":"perception/detected_object_validation/object-lanelet-filter/#output","title":"Output","text":"Name Type Description output/object autoware_auto_perception_msgs::msg::DetectedObjects filtered detected objects"},{"location":"perception/detected_object_validation/object-lanelet-filter/#parameters","title":"Parameters","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN bool false If true, unknown objects are filtered. filter_target_label.CAR bool false If true, car objects are filtered. filter_target_label.TRUCK bool false If true, truck objects are filtered. filter_target_label.BUS bool false If true, bus objects are filtered. filter_target_label.TRAILER bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN bool false If true, pedestrian objects are filtered."},{"location":"perception/detected_object_validation/object-lanelet-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Filtering is performed based on the shape polygon of the object.

"},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/detected_object_validation/object-position-filter/","title":"object_position_filter","text":""},{"location":"perception/detected_object_validation/object-position-filter/#purpose","title":"Purpose","text":"

The object_position_filter is a node that filters detected objects based on their x and y positions. Only objects inside the configured x, y bounds will be published.

"},{"location":"perception/detected_object_validation/object-position-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/detected_object_validation/object-position-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_validation/object-position-filter/#input","title":"Input","text":"Name Type Description input/object autoware_auto_perception_msgs::msg::DetectedObjects input detected objects"},{"location":"perception/detected_object_validation/object-position-filter/#output","title":"Output","text":"Name Type Description output/object autoware_auto_perception_msgs::msg::DetectedObjects filtered detected objects"},{"location":"perception/detected_object_validation/object-position-filter/#parameters","title":"Parameters","text":""},{"location":"perception/detected_object_validation/object-position-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN bool false If true, unknown objects are filtered. filter_target_label.CAR bool false If true, car objects are filtered. filter_target_label.TRUCK bool false If true, truck objects are filtered. filter_target_label.BUS bool false If true, bus objects are filtered. filter_target_label.TRAILER bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN bool false If true, pedestrian objects are filtered. upper_bound_x float 100.00 Bound for filtering. Only used if filter_by_xy_position is true lower_bound_x float 0.00 Bound for filtering. Only used if filter_by_xy_position is true upper_bound_y float 50.00 Bound for filtering. Only used if filter_by_xy_position is true lower_bound_y float -50.00 Bound for filtering. Only used if filter_by_xy_position is true"},{"location":"perception/detected_object_validation/object-position-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Filtering is performed based on the center position of the object.

"},{"location":"perception/detected_object_validation/object-position-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/detected_object_validation/object-position-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/detected_object_validation/object-position-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/detected_object_validation/object-position-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/","title":"obstacle pointcloud based validator","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

If the number of obstacle points belonging to a DetectedObject is small, the object is considered a false positive and removed. The obstacle point cloud can be a point cloud after compare-map filtering or a ground-filtered point cloud.
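
A minimal sketch of the validation rule, assuming the required point count scales inversely with the distance from base_link and is clamped by min_points_num and max_points_num (parameters from the table below):

def is_valid_object(points_in_object, distance_from_baselink, min_points_num, max_points_num, min_points_and_distance_ratio):\n    # Fewer points are required for distant objects; clamp the threshold\n    # between the configured minimum and maximum point counts.\n    required = int(min_points_and_distance_ratio / distance_from_baselink)\n    required = min(max(required, min_points_num), max_points_num)\n    return points_in_object >= required\n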

In the debug image above, the red DetectedObject is the validated object. The blue object is the deleted object.

"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#input","title":"Input","text":"Name Type Description ~/input/detected_objects autoware_auto_perception_msgs::msg::DetectedObjects DetectedObjects ~/input/obstacle_pointcloud sensor_msgs::msg::PointCloud2 Obstacle point cloud of dynamic objects"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_auto_perception_msgs::msg::DetectedObjects validated DetectedObjects"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#parameters","title":"Parameters","text":"Name Type Description min_points_num int The minimum number of obstacle point clouds in DetectedObjects max_points_num int The max number of obstacle point clouds in DetectedObjects min_points_and_distance_ratio float Threshold value of the number of point clouds per object when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink. enable_debugger bool Whether to create debug topics or not?"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Currently, only objects represented as BoundingBox or Cylinder are supported.

"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/","title":"occupancy grid based validator","text":""},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The validator compares the occupancy grid map with each DetectedObject and deletes objects for which a large percentage of the area lies in freespace.

Basically, it takes an occupancy grid map as input and generates a binary image distinguishing freespace from everything else.

A mask image is generated for each DetectedObject, and the average value (occupancy percentage) inside the mask is calculated. If the percentage is low, the object is deleted.
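
A minimal sketch of this check (hypothetical names; the grid is assumed binarized so that freespace is 0 and occupied/unknown is 1):

import numpy as np\n\ndef is_valid_object(binary_grid, object_mask, mean_threshold):\n    # Average occupancy inside the object's mask; a low mean means the\n    # object lies mostly in freespace and should be deleted.\n    mean = float(np.mean(binary_grid[object_mask]))\n    return mean >= mean_threshold\n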

"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#input","title":"Input","text":"Name Type Description ~/input/detected_objects autoware_auto_perception_msgs::msg::DetectedObjects DetectedObjects ~/input/occupancy_grid_map nav_msgs::msg::OccupancyGrid OccupancyGrid with no time series calculation is preferred."},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_auto_perception_msgs::msg::DetectedObjects validated DetectedObjects"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#parameters","title":"Parameters","text":"Name Type Description mean_threshold float The percentage threshold of allowed non-freespace. enable_debug bool Whether to display debug images or not?"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Currently, only vehicles represented as BoundingBox are supported.

"},{"location":"perception/detection_by_tracker/","title":"detection_by_tracker","text":""},{"location":"perception/detection_by_tracker/#purpose","title":"Purpose","text":"

This package feeds back the tracked objects to the detection module to keep it stable and keep detecting objects.

The detection by tracker takes as input an unknown object containing a cluster of points and a tracker. The unknown object is optimized to fit the size of the tracker so that it can continue to be detected.

"},{"location":"perception/detection_by_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The detection by tracker receives an unknown object containing a point cloud together with a tracker, where the unknown object is mainly shape-fitted using euclidean clustering. Shape fitting using euclidean clustering and other methods suffers from problems called under-segmentation and over-segmentation.

Adapted from [3]

Simply looking at the overlap between the unknown object and the tracker does not work; we need measures against under-segmentation and over-segmentation.

"},{"location":"perception/detection_by_tracker/#policy-for-dealing-with-over-segmentation","title":"Policy for dealing with over segmentation","text":"
  1. Merge the unknown objects inside the tracker into a single object.
  2. Perform shape fitting using tracker information, such as angle and size, as reference.
"},{"location":"perception/detection_by_tracker/#policy-for-dealing-with-under-segmentation","title":"Policy for dealing with under segmentation","text":"
  1. Compare the tracker and unknown objects, and determine that objects with large recall and small precision are under-segmented (see the sketch after this list).
  2. In order to divide the cluster of an under-segmented object, iterate over the clustering parameters to produce smaller clusters.
  3. Adjust the parameters several times and adopt the result with the highest IoU.
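
As an illustration, here is a minimal sketch (not the package's actual code) of the recall/precision test, assuming the 2D overlap area between a tracker and an unknown object is computed elsewhere; the names and threshold values are hypothetical:

struct SegmentationScores
{
  double precision;  // intersection area / unknown object area
  double recall;     // intersection area / tracker area
};

SegmentationScores compute_scores(double tracker_area, double object_area,
                                  double intersection_area)
{
  return {intersection_area / object_area, intersection_area / tracker_area};
}

// An unknown object that covers most of the tracker (large recall) while the
// tracker covers only a small part of the object (small precision) likely
// spans several objects, i.e. it is under-segmented.
bool is_under_segmented(const SegmentationScores & s)
{
  return s.recall > 0.5 && s.precision < 0.5;  // hypothetical thresholds
}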
"},{"location":"perception/detection_by_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detection_by_tracker/#input","title":"Input","text":"Name Type Description ~/input/initial_objects tier4_perception_msgs::msg::DetectedObjectsWithFeature unknown objects ~/input/tracked_objects tier4_perception_msgs::msg::TrackedObjects trackers"},{"location":"perception/detection_by_tracker/#output","title":"Output","text":"Name Type Description ~/output autoware_auto_perception_msgs::msg::DetectedObjects objects"},{"location":"perception/detection_by_tracker/#parameters","title":"Parameters","text":""},{"location":"perception/detection_by_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/detection_by_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/detection_by_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/detection_by_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] M. Himmelsbach, et al. \"Tracking and classification of arbitrary objects with bottom-up/top-down detection.\" (2012).

[2] Arya Senna Abdul Rachman, Arya. \"3D-LIDAR Multi Object Tracking for Autonomous Driving: Multi-target Detection and Tracking under Urban Road Uncertainties.\" (2017).

[3] David Held, et al. \"A Probabilistic Framework for Real-time 3D Segmentation using Spatial, Temporal, and Semantic Cues.\" (2016).

"},{"location":"perception/detection_by_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/elevation_map_loader/","title":"elevation_map_loader","text":""},{"location":"perception/elevation_map_loader/#purpose","title":"Purpose","text":"

This package provides an elevation map for compare_map_segmentation.

"},{"location":"perception/elevation_map_loader/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The node generates an elevation_map from the subscribed pointcloud_map and vector_map and publishes it. The generated elevation_map is saved locally and loaded on subsequent runs.

The elevation value of each cell is the average z value of the points in the lowest cluster. Cells with no elevation value can be inpainted using the values of neighboring cells.
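
A minimal sketch of the inpainting step, assuming the elevation layer has been converted to an 8-bit cv::Mat and no_data_mask marks the empty cells; the function name is hypothetical, only cv::inpaint is the real API:

#include <opencv2/photo.hpp>

cv::Mat inpaint_elevation(const cv::Mat & elevation_8u, const cv::Mat & no_data_mask,
                          double inpaint_radius_px)
{
  // Fill each masked (empty) cell from a circular neighborhood of the given
  // radius; inpaint_radius_px corresponds to the inpaint_radius parameter
  // converted from meters to pixels using the map resolution.
  cv::Mat inpainted;
  cv::inpaint(elevation_8u, no_data_mask, inpainted, inpaint_radius_px, cv::INPAINT_NS);
  return inpainted;
}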

"},{"location":"perception/elevation_map_loader/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/elevation_map_loader/#input","title":"Input","text":"Name Type Description input/pointcloud_map sensor_msgs::msg::PointCloud2 The point cloud map input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin (Optional) The binary data of lanelet2 map"},{"location":"perception/elevation_map_loader/#output","title":"Output","text":"Name Type Description output/elevation_map grid_map_msgs::msg::GridMap The elevation map output/elevation_map_cloud sensor_msgs::msg::PointCloud2 (Optional) The point cloud generated from the value of elevation map"},{"location":"perception/elevation_map_loader/#parameters","title":"Parameters","text":""},{"location":"perception/elevation_map_loader/#node-parameters","title":"Node parameters","text":"Name Type Description Default value map_layer_name std::string elevation_map layer name elevation param_file_path std::string GridMap parameters config path_default elevation_map_file_path std::string elevation_map file (bag2) path_default map_frame std::string map_frame when loading elevation_map file map use_inpaint bool Whether to inpaint empty cells true inpaint_radius float Radius of a circular neighborhood of each point inpainted that is considered by the algorithm [m] 0.3 use_elevation_map_cloud_publisher bool Whether to publish output/elevation_map_cloud false use_lane_filter bool Whether to filter elevation_map with vector_map false lane_margin float Value of how much to expand the range of vector_map [m] 0.5 lane_height_diff_thresh float Only point clouds in the height range of this value from vector_map are used to generate elevation_map [m] 1.0 lane_filter_voxel_size_x float Voxel size x for calculating point clouds in vector_map [m] 0.04 lane_filter_voxel_size_y float Voxel size y for calculating point clouds in vector_map [m] 0.04 lane_filter_voxel_size_z float Voxel size z for calculating point clouds in vector_map [m] 0.04"},{"location":"perception/elevation_map_loader/#gridmap-parameters","title":"GridMap parameters","text":"

The parameters are described in config/elevation_map_parameters.yaml.

"},{"location":"perception/elevation_map_loader/#general-parameters","title":"General parameters","text":"Name Type Description Default value pcl_grid_map_extraction/num_processing_threads int Number of threads for processing grid map cells. Filtering of the raw input point cloud is not parallelized. 12"},{"location":"perception/elevation_map_loader/#grid-map-parameters","title":"Grid map parameters","text":"

See: https://github.com/ANYbotics/grid_map/tree/ros2/grid_map_pcl

Resulting grid map parameters.

Name Type Description Default value pcl_grid_map_extraction/grid_map/min_num_points_per_cell int Minimum number of points in the point cloud that have to fall within any of the grid map cells. Otherwise the cell elevation will be set to NaN. 3 pcl_grid_map_extraction/grid_map/resolution float Resolution of the grid map. Width and length are computed automatically. 0.3 pcl_grid_map_extraction/grid_map/height_type int The parameter that determines the elevation of a cell 0: Smallest value among the average values of each cluster, 1: Mean value of the cluster with the most points 1 pcl_grid_map_extraction/grid_map/height_thresh float Height range from the smallest cluster (Only for height_type 1) 1.0"},{"location":"perception/elevation_map_loader/#point-cloud-pre-processing-parameters","title":"Point Cloud Pre-processing Parameters","text":""},{"location":"perception/elevation_map_loader/#rigid-body-transform-parameters","title":"Rigid body transform parameters","text":"

Rigid body transform that is applied to the point cloud before computing elevation.

Name Type Description Default value pcl_grid_map_extraction/cloud_transform/translation float Translation (xyz) that is applied to the input point cloud before computing elevation. 0.0 pcl_grid_map_extraction/cloud_transform/rotation float Rotation (intrinsic rotation, convention X-Y'-Z'') that is applied to the input point cloud before computing elevation. 0.0"},{"location":"perception/elevation_map_loader/#cluster-extraction-parameters","title":"Cluster extraction parameters","text":"

Cluster extraction is based on pcl algorithms. See https://pointclouds.org/documentation/tutorials/cluster_extraction.html for more details.

Name Type Description Default value pcl_grid_map_extraction/cluster_extraction/cluster_tolerance float Distance between points below which they will still be considered part of one cluster. 0.2 pcl_grid_map_extraction/cluster_extraction/min_num_points int Min number of points that a cluster needs to have (otherwise it will be discarded). 3 pcl_grid_map_extraction/cluster_extraction/max_num_points int Max number of points that a cluster can have (otherwise it will be discarded). 1000000"},{"location":"perception/elevation_map_loader/#outlier-removal-parameters","title":"Outlier removal parameters","text":"

See https://pointclouds.org/documentation/tutorials/statistical_outlier.html for more explanation on outlier removal.

Name Type Description Default value pcl_grid_map_extraction/outlier_removal/is_remove_outliers float Whether to perform statistical outlier removal. false pcl_grid_map_extraction/outlier_removal/mean_K float Number of neighbors to analyze for estimating statistics of a point. 10 pcl_grid_map_extraction/outlier_removal/stddev_threshold float Number of standard deviations under which points are considered to be inliers. 1.0"},{"location":"perception/elevation_map_loader/#subsampling-parameters","title":"Subsampling parameters","text":"

See https://pointclouds.org/documentation/tutorials/voxel_grid.html for more explanation on point cloud downsampling.

Name Type Description Default value pcl_grid_map_extraction/downsampling/is_downsample_cloud bool Whether to perform downsampling or not. false pcl_grid_map_extraction/downsampling/voxel_size float Voxel sizes (xyz) in meters. 0.02"},{"location":"perception/euclidean_cluster/","title":"euclidean_cluster","text":""},{"location":"perception/euclidean_cluster/#purpose","title":"Purpose","text":"

euclidean_cluster is a package for clustering points into smaller parts to classify objects.

This package has two clustering methods: euclidean_cluster and voxel_grid_based_euclidean_cluster.

"},{"location":"perception/euclidean_cluster/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/euclidean_cluster/#euclidean_cluster_1","title":"euclidean_cluster","text":"

pcl::EuclideanClusterExtraction is applied to the points. See the official documentation for details.
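
A minimal sketch of how pcl::EuclideanClusterExtraction is typically configured; the parameter values here are illustrative, not the package defaults:

#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

std::vector<pcl::PointIndices> cluster(
  const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & cloud)
{
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(cloud);

  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.7);  // [m], corresponds to the tolerance parameter
  ec.setMinClusterSize(10);     // corresponds to min_cluster_size
  ec.setMaxClusterSize(3000);   // corresponds to max_cluster_size
  ec.setSearchMethod(tree);
  ec.setInputCloud(cloud);

  std::vector<pcl::PointIndices> cluster_indices;
  ec.extract(cluster_indices);  // one PointIndices per cluster
  return cluster_indices;
}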

"},{"location":"perception/euclidean_cluster/#voxel_grid_based_euclidean_cluster","title":"voxel_grid_based_euclidean_cluster","text":"
  1. A centroid in each voxel is calculated by pcl::VoxelGrid.
  2. The centroids are clustered by pcl::EuclideanClusterExtraction.
  3. The input points are assigned to clusters according to the clustered centroids (a sketch of step 1 is shown below).
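
A minimal sketch of step 1, the voxel-centroid computation; the leaf sizes are illustrative, and a very large z leaf is one way to ignore height when use_height is false:

#include <pcl/filters/voxel_grid.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr voxel_centroids(
  const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & cloud)
{
  pcl::VoxelGrid<pcl::PointXYZ> voxel_grid;
  voxel_grid.setLeafSize(0.3f, 0.3f, 100000.0f);  // xy from voxel_leaf_size, huge z leaf
  voxel_grid.setMinimumPointsNumberPerVoxel(1);   // min_points_number_per_voxel
  voxel_grid.setInputCloud(cloud);

  pcl::PointCloud<pcl::PointXYZ>::Ptr centroids(new pcl::PointCloud<pcl::PointXYZ>);
  voxel_grid.filter(*centroids);  // each output point is the centroid of a voxel
  return centroids;
}
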
"},{"location":"perception/euclidean_cluster/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/euclidean_cluster/#input","title":"Input","text":"Name Type Description input sensor_msgs::msg::PointCloud2 input pointcloud"},{"location":"perception/euclidean_cluster/#output","title":"Output","text":"Name Type Description output tier4_perception_msgs::msg::DetectedObjectsWithFeature cluster pointcloud debug/clusters sensor_msgs::msg::PointCloud2 colored cluster pointcloud for visualization"},{"location":"perception/euclidean_cluster/#parameters","title":"Parameters","text":""},{"location":"perception/euclidean_cluster/#core-parameters","title":"Core Parameters","text":""},{"location":"perception/euclidean_cluster/#euclidean_cluster_2","title":"euclidean_cluster","text":"Name Type Description use_height bool use point.z for clustering min_cluster_size int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size int the maximum number of points that a cluster needs to contain in order to be considered valid tolerance float the spatial cluster tolerance as a measure in the L2 Euclidean space"},{"location":"perception/euclidean_cluster/#voxel_grid_based_euclidean_cluster_1","title":"voxel_grid_based_euclidean_cluster","text":"Name Type Description use_height bool use point.z for clustering min_cluster_size int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size int the maximum number of points that a cluster needs to contain in order to be considered valid tolerance float the spatial cluster tolerance as a measure in the L2 Euclidean space voxel_leaf_size float the voxel leaf size of x and y min_points_number_per_voxel int the minimum number of points for a voxel"},{"location":"perception/euclidean_cluster/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/euclidean_cluster/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/euclidean_cluster/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/euclidean_cluster/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/euclidean_cluster/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":"

The use_height option of voxel_grid_based_euclidean_cluster isn't implemented yet.

"},{"location":"perception/front_vehicle_velocity_estimator/","title":"front_vehicle_velocity_estimator","text":"

This package provides front vehicle velocity estimation for offline analysis of the perception module.

"},{"location":"perception/front_vehicle_velocity_estimator/#algorithm","title":"Algorithm","text":""},{"location":"perception/front_vehicle_velocity_estimator/#input","title":"Input","text":"Name Type Description ~/input/objects autoware_auto_perception_msgs/msg/DetectedObject.msg 3D detected objects. ~/input/pointcloud sensor_msgs/msg/PointCloud2.msg LiDAR pointcloud. ~/input/odometry nav_msgs::msg::Odometry.msg Odometry data."},{"location":"perception/front_vehicle_velocity_estimator/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected object with twist. ~/debug/nearest_neighbor_pointcloud sensor_msgs/msg/PointCloud2.msg The pointcloud msg of nearest neighbor point."},{"location":"perception/front_vehicle_velocity_estimator/#node-parameter","title":"Node parameter","text":"Name Type Description Default value update_rate_hz double The update rate [hz]. 10.0"},{"location":"perception/front_vehicle_velocity_estimator/#core-parameter","title":"Core parameter","text":"Name Type Description Default value moving_average_num int The moving average number for velocity estimation. 1 threshold_pointcloud_z_high float The threshold for z position value of point when choosing nearest neighbor point within front vehicle [m]. If z > threshold_pointcloud_z_high, the point is considered to noise. 1.0f threshold_pointcloud_z_low float The threshold for z position value of point when choosing nearest neighbor point within front vehicle [m]. If z < threshold_pointcloud_z_low, the point is considered to noise like ground. 0.6f threshold_relative_velocity double The threshold for min and max of estimated relative velocity (v_{re}) [m/s]. If v_{re} < - threshold_relative_velocity , then v_{re} = - threshold_relative_velocity. If v_{re} > threshold_relative_velocity, then v_{re} = threshold_relative_velocity. 10.0 threshold_absolute_velocity double The threshold for max of estimated absolute velocity (v_{ae}) [m/s]. If v_{ae} > threshold_absolute_velocity, then v_{ae} = threshold_absolute_velocity. 20.0"},{"location":"perception/ground_segmentation/","title":"ground_segmentation","text":""},{"location":"perception/ground_segmentation/#purpose","title":"Purpose","text":"

The ground_segmentation is a node that removes the ground points from the input pointcloud.

"},{"location":"perception/ground_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

A detailed description of each ground segmentation algorithm can be found at the following links.

Filter Name Description Detail ray_ground_filter A method of removing the ground based on the geometrical relationship between points lined up on radiation link scan_ground_filter Almost the same method as ray_ground_filter, but with slightly improved performance link ransac_ground_filter A method of removing the ground by approximating the ground to a plane link"},{"location":"perception/ground_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/ground_segmentation/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/indices pcl_msgs::msg::Indices reference indices"},{"location":"perception/ground_segmentation/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"perception/ground_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/ground_segmentation/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description input_frame string \" \" input frame id output_frame string \" \" output frame id max_queue_size int 5 max queue size of input/output topics use_indices bool false flag to use pointcloud indices latched_indices bool false flag to latch pointcloud indices approximate_sync bool false flag to use approximate sync option"},{"location":"perception/ground_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

pointcloud_preprocessor::Filter is implemented based on pcl_perception [1] because of this issue.

"},{"location":"perception/ground_segmentation/#referencesexternal-links","title":"References/External links","text":"

[1] https://github.com/ros-perception/perception_pcl/blob/ros2/pcl_ros/src/pcl_ros/filters/filter.cpp

"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/","title":"RANSAC Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#purpose","title":"Purpose","text":"

The purpose of this node is to remove the ground points from the input pointcloud.

"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

A plane is fitted to the input points, and points at more than a certain distance from the plane are classified as non-ground. Normally, when using this method, the input points are filtered beforehand so that they are almost flat. Since the drivable area is often flat, there are methods such as filtering by lane.
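
A minimal sketch (not the node's actual code) of a RANSAC plane fit with PCL; the argument names mirror the outlier_threshold and max_iterations parameters:

#include <pcl/ModelCoefficients.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

pcl::PointIndices fit_ground_plane(
  const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & cloud,
  double outlier_threshold, int max_iterations)
{
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(outlier_threshold);  // outlier_threshold [m]
  seg.setMaxIterations(max_iterations);         // max_iterations
  seg.setInputCloud(cloud);

  pcl::PointIndices ground_indices;    // inliers: ground-plane candidates
  pcl::ModelCoefficients coefficients; // plane ax + by + cz + d = 0
  seg.segment(ground_indices, coefficients);
  return ground_indices;
}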

"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#parameters","title":"Parameters","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description base_frame string base_link frame unit_axis string The axis which we need to search ground plane max_iterations int The maximum number of iterations outlier_threshold double The distance threshold to the model [m] plane_slope_threshold double The slope threshold to prevent mis-fitting [deg] voxel_size_x double voxel size x [m] voxel_size_y double voxel size y [m] voxel_size_z double voxel size z [m] height_threshold double The height threshold from ground plane for no ground points [m] debug bool whether to output debug information"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#referencesexternal-links","title":"References/External links","text":"

https://pcl.readthedocs.io/projects/tutorials/en/latest/planar_segmentation.html

"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/","title":"Ray Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#purpose","title":"Purpose","text":"

The purpose of this node is to remove the ground points from the input pointcloud.

"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The points are divided radially into rays, and within each ray the points are classified as ground or not, sequentially from the point closest to the ego-vehicle, based on geometric information such as the distance and angle between points.

"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#parameters","title":"Parameters","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description input_frame string frame id of input pointcloud output_frame string frame id of output pointcloud general_max_slope double The triangle created by general_max_slope is called the global cone. If the point is outside the global cone, it is judged to be a point that is no on the ground initial_max_slope double Generally, the point where the object first hits is far from ego-vehicle because of sensor blind spot, so resolution is different from that point and thereafter, so this parameter exists to set a separate local_max_slope local_max_slope double The triangle created by local_max_slope is called the local cone. This parameter for classification based on the continuity of points min_height_threshold double This parameter is used instead of height_threshold because it's difficult to determine continuity in the local cone when the points are too close to each other. radial_divider_angle double The angle of ray concentric_divider_distance double Only check points which radius is larger than concentric_divider_distance reclass_distance_threshold double To check if point is to far from previous one, if so classify again min_x double The parameter to set vehicle footprint manually max_x double The parameter to set vehicle footprint manually min_y double The parameter to set vehicle footprint manually max_y double The parameter to set vehicle footprint manually"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The input_frame is set as a parameter, but it must be fixed to base_link for the current algorithm.

"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/","title":"Scan Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#purpose","title":"Purpose","text":"

The purpose of this node is to remove the ground points from the input pointcloud.

"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This algorithm works by the following steps; a simplified sketch of the classification in steps 4-8 follows the list.

  1. Divide the whole pointcloud into groups by horizontal angle and sort each group by xy-distance.
  2. Divide the sorted pointcloud of each ray into grids.
  3. Check the xy-distance to the previous point; if the distance is large, the previous point is "non-ground", and the current point is higher than the previous point, the current point is classified as non-ground.
  4. Check the vertical angle of the point compared with the previous ground grid.
  5. Check the height of the point compared with the predicted ground level.
  6. If the vertical angle is greater than local_slope_max and the height relative to the predicted ground level is greater than the "non ground height threshold", the point is classified as "non-ground".
  7. If the vertical angle is within [-local_slope_max, local_slope_max] or the height relative to the predicted ground level is smaller than non_ground_height_threshold, the point is classified as "ground".
  8. If the vertical angle is lower than -local_slope_max or the height relative to the ground level is greater than detection_range_z_max, the point is classified as out of range.
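
A simplified sketch (not the node's actual code) of the classification in steps 4-8, assuming the predicted ground grid's radius and height are already known; all names are hypothetical:

#include <cmath>

enum class PointLabel { GROUND, NON_GROUND, OUT_OF_RANGE };

PointLabel classify_point(double radius, double height,          // current point: xy-distance, z
                          double gnd_radius, double gnd_height,  // predicted ground grid
                          double local_slope_max_rad,
                          double non_ground_height_threshold,
                          double detection_range_z_max)
{
  const double dz = height - gnd_height;
  // Vertical angle of the current point relative to the predicted ground.
  const double local_slope = std::atan2(dz, radius - gnd_radius);

  if (dz > detection_range_z_max || local_slope < -local_slope_max_rad) {
    return PointLabel::OUT_OF_RANGE;  // step 8
  }
  if (local_slope > local_slope_max_rad && dz > non_ground_height_threshold) {
    return PointLabel::NON_GROUND;    // step 6
  }
  return PointLabel::GROUND;          // step 7
}
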
"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#parameters","title":"Parameters","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description input_frame string \"base_link\" frame id of input pointcloud output_frame string \"base_link\" frame id of output pointcloud global_slope_max_angle_deg double 8.0 The global angle to classify as the ground or object [deg] local_slope_max_angle_deg double 10.0 The local angle to classify as the ground or object [deg] radial_divider_angle_deg double 1.0 The angle which divide the whole pointcloud to sliced group [deg] split_points_distance_tolerance double 0.2 The xy-distance threshold to to distinguishing far and near [m] split_height_distance double 0.2 The height threshold to distinguishing far and near [m] use_virtual_ground_point bool true whether to use the ground center of front wheels as the virtual ground point. detection_range_z_max float 2.5 Maximum height of detection range [m], applied only for elevation_grid_mode center_pcl_shift float 0.0 The x-axis offset of addition LiDARs from vehicle center of mass [m], recommended to use only for additional LiDARs in elevation_grid_mode non_ground_height_threshold float 0.2 Height threshold of non ground objects [m], applied only for elevation_grid_mode grid_mode_switch_radius float 20.0 The distance where grid division mode change from by distance to by vertical angle [m], applied only for elevation_grid_mode grid_size_m float 0.5 The first grid size [m], applied only for elevation_grid_mode gnd_grid_buffer_size uint16 4 Number of grids using to estimate local ground slope, applied only for elevation_grid_mode low_priority_region_x float -20.0 The non-zero x threshold in back side from which small objects detection is low priority [m] elevation_grid_mode bool true Elevation grid scan mode option"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The input_frame is set as a parameter, but it must be fixed to base_link for the current algorithm.

"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

The elevation grid idea is referred from \"Shen Z, Liang H, Lin L, Wang Z, Huang W, Yu J. Fast Ground Segmentation for 3D LiDAR Point Cloud Based on Jump-Convolution-Process. Remote Sensing. 2021; 13(16):3239. https://doi.org/10.3390/rs13163239\"

"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/heatmap_visualizer/","title":"heatmap_visualizer","text":""},{"location":"perception/heatmap_visualizer/#purpose","title":"Purpose","text":"

heatmap_visualizer is a package for visualizing a heatmap of detected 3D objects' positions in BEV space.

This package is used for qualitative evaluation and trend analysis of a detector; for instance, the heatmap can show that a detector performs well in the near range around the vehicle but poorly in the far range.

"},{"location":"perception/heatmap_visualizer/#how-to-run","title":"How to run","text":"
ros2 launch heatmap_visualizer heatmap_visualizer.launch.xml input/objects:=<DETECTED_OBJECTS_TOPIC>\n
"},{"location":"perception/heatmap_visualizer/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

In this implementation, a heatmap of the center positions of detected objects is created for each class (for instance, CAR, PEDESTRIAN, etc.) and published as an occupancy grid map.

In the above figure, pink represents areas with high detection frequency, blue represents areas with low frequency, and black represents areas with no detections.

As for the inner workings, the center positions of detected objects are added to a buffer, indexed by the corresponding grid map cell. The heatmap is published every fixed number of frames, which can be specified with frame_count. Note that the buffer holding the positions is not reset at each publication. When publishing, the values are first normalized to [0, 1] using the maximum and minimum values in the buffer, and then scaled to integers in [0, 100] because nav_msgs::msg::OccupancyGrid only allows values in [0, 100].
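
A minimal sketch of this normalization, assuming the buffer is a std::vector<float>; the function name is hypothetical:

#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<int8_t> to_occupancy_data(const std::vector<float> & buffer)
{
  const auto [min_it, max_it] = std::minmax_element(buffer.begin(), buffer.end());
  const float range = std::max(*max_it - *min_it, 1e-6f);  // avoid division by zero

  std::vector<int8_t> data(buffer.size());
  for (std::size_t i = 0; i < buffer.size(); ++i) {
    const float normalized = (buffer[i] - *min_it) / range;  // [0, 1]
    data[i] = static_cast<int8_t>(normalized * 100.0f);      // [0, 100]
  }
  return data;
}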

"},{"location":"perception/heatmap_visualizer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/heatmap_visualizer/#input","title":"Input","text":"Name Type Description ~/input/objects autoware_auto_perception_msgs::msg::DetectedObjects detected objects"},{"location":"perception/heatmap_visualizer/#output","title":"Output","text":"Name Type Description ~/output/objects/<CLASS_NAME> nav_msgs::msg::OccupancyGrid visualized heatmap"},{"location":"perception/heatmap_visualizer/#parameters","title":"Parameters","text":""},{"location":"perception/heatmap_visualizer/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description frame_count int 50 The number of frames to publish heatmap map_frame string base_link the frame of heatmap to be respected map_length float 200.0 the length of map in meter map_resolution float 0.8 the resolution of map use_confidence bool false the flag if use confidence score as heatmap value rename_car_to_truck_and_bus bool true the flag if rename car to truck or bus"},{"location":"perception/heatmap_visualizer/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The heatmap depends on the data to be used, so if the objects in data are sparse the heatmap will be sparse.

"},{"location":"perception/heatmap_visualizer/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/heatmap_visualizer/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/heatmap_visualizer/#referencesexternal-links","title":"References/External links","text":""},{"location":"perception/heatmap_visualizer/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/image_projection_based_fusion/","title":"image_projection_based_fusion","text":""},{"location":"perception/image_projection_based_fusion/#purpose","title":"Purpose","text":"

The image_projection_based_fusion is a package that fuses obstacles detected in images (bounding boxes or segmentation) with a 3D pointcloud or with obstacles (bounding boxes, clusters, or segmentation).

"},{"location":"perception/image_projection_based_fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/image_projection_based_fusion/#sync-algorithm","title":"Sync Algorithm","text":""},{"location":"perception/image_projection_based_fusion/#matching","title":"matching","text":"

The offset between each camera and the lidar is set according to their shutter timing. After applying the offset to the timestamp, if the interval between the timestamp of the pointcloud topic and that of the roi message is less than the match threshold, the two messages are matched.

The current default values in autoware.universe for the TIER IV Robotaxi are: - input_offset_ms: [61.67, 111.67, 45.0, 28.33, 78.33, 95.0] - match_threshold_ms: 30.0
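
A minimal sketch of this matching rule, with hypothetical names and timestamps expressed in milliseconds; which timestamp the offset is applied to is an assumption here:

#include <cmath>

bool is_matched(double pointcloud_stamp_ms, double roi_stamp_ms,
                double input_offset_ms, double match_threshold_ms)
{
  // Shift the roi timestamp by the camera's shutter-timing offset, then
  // match if the remaining interval is within the threshold.
  const double interval = std::abs(pointcloud_stamp_ms - (roi_stamp_ms + input_offset_ms));
  return interval < match_threshold_ms;
}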

"},{"location":"perception/image_projection_based_fusion/#fusion-and-timer","title":"fusion and timer","text":"

The subscription status of each message is indicated with 'O'. 1. If a pointcloud message is received under the condition below: | | pointcloud | roi msg 1 | roi msg 2 | roi msg 3 | | :-----------------: | :--------: | :-------: | :-------: | :-------: | | subscription status | | O | O | O |

If the roi msgs can be matched, fuse them and postprocess the pointcloud message. Otherwise, fuse the matched roi msgs and cache the pointcloud. 2. If a pointcloud message is received under the condition below: | | pointcloud | roi msg 1 | roi msg 2 | roi msg 3 | | :-----------------: | :--------: | :-------: | :-------: | :-------: | | subscription status | | O | O | |

If the roi msgs can be matched, fuse them and cache the pointcloud. 3. If a pointcloud message is received under the condition below: | | pointcloud | roi msg 1 | roi msg 2 | roi msg 3 | | :-----------------: | :--------: | :-------: | :-------: | :-------: | | subscription status | O | O | O | |

If roi msg 3 arrives before the next pointcloud message or before the timeout, it is fused if matched; otherwise the node waits for the next roi msg 3. If roi msg 3 does not arrive before the next pointcloud message or before the timeout, the pointcloud message is postprocessed as it is.

The timeout threshold should be set according to the postprocessing time. E.g., if the postprocessing time is around 50 ms, the timeout threshold should be set smaller than 50 ms so that the whole processing time stays below 100 ms. The current default value in autoware.universe for XX1 is: - timeout_ms: 50.0

"},{"location":"perception/image_projection_based_fusion/#known-limits","title":"Known Limits","text":"

An rclcpp::TimerBase timer cannot break out of a for loop; therefore, even if the timer expires while a roi msg is being fused partway through, the program runs until all msgs are fused.

"},{"location":"perception/image_projection_based_fusion/#detail-description-of-each-fusions-algorithm-is-in-the-following-links","title":"Detail description of each fusion's algorithm is in the following links","text":"Fusion Name Description Detail roi_cluster_fusion Overwrite a classification label of clusters by that of ROIs from a 2D object detector. link roi_detected_object_fusion Overwrite a classification label of detected objects by that of ROIs from a 2D object detector. link pointpainting_fusion Paint the point cloud with the ROIs from a 2D object detector and feed to a 3D object detector. link"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/","title":"pointpainting_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#purpose","title":"Purpose","text":"

The pointpainting_fusion is a package for utilizing the class information detected by a 2D object detector in 3D object detection.

"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The lidar points are projected onto the output of an image-only 2D object detection network, and the class scores are appended to each point. The painted point cloud can then be fed to the CenterPoint network.
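
A simplified sketch of the painting step, assuming the point has already been transformed into the camera frame and a pinhole camera model with intrinsics (fx, fy, cx, cy); the Roi2d type and all names are hypothetical:

#include <algorithm>
#include <vector>

struct Roi2d { float x_min, y_min, x_max, y_max, score; int label; };

struct PaintedPoint
{
  float x, y, z;                    // position in the camera frame
  std::vector<float> class_scores;  // one slot per class, appended to the point
};

void paint_point(PaintedPoint & p, float fx, float fy, float cx, float cy,
                 const std::vector<Roi2d> & rois)
{
  if (p.z <= 0.0f) return;  // the point is behind the image plane

  // Pinhole projection onto the image plane.
  const float u = fx * p.x / p.z + cx;
  const float v = fy * p.y / p.z + cy;

  for (const auto & roi : rois) {
    if (u >= roi.x_min && u <= roi.x_max && v >= roi.y_min && v <= roi.y_max) {
      p.class_scores[roi.label] = std::max(p.class_scores[roi.label], roi.score);
    }
  }
}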

"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#input","title":"Input","text":"Name Type Description input sensor_msgs::msg::PointCloud2 pointcloud input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image input/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#output","title":"Output","text":"Name Type Description output sensor_msgs::msg::PointCloud2 painted pointcloud ~/output/objects autoware_auto_perception_msgs::msg::DetectedObjects detected objects ~/debug/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold float 0.4 detected objects with score less than threshold are ignored densification_world_frame_id string map the world frame id to fuse multi-frame pointcloud densification_num_past_frames int 0 the number of past frames to fuse with the current frame trt_precision string fp16 TensorRT inference precision: fp32 or fp16 encoder_onnx_path string \"\" path to VoxelFeatureEncoder ONNX file encoder_engine_path string \"\" path to VoxelFeatureEncoder TensorRT Engine file head_onnx_path string \"\" path to DetectionHead ONNX file head_engine_path string \"\" path to DetectionHead TensorRT Engine file"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#referencesexternal-links","title":"References/External links","text":"

[1] Vora, Sourabh, et al. \"PointPainting: Sequential fusion for 3d object detection.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

[2] CVPR'20 Workshop on Scalability in Autonomous Driving] Waymo Open Dataset Challenge: https://youtu.be/9g9GsI33ol8?t=535 Ding, Zhuangzhuang, et al. \"1st Place Solution for Waymo Open Dataset Challenge--3D Detection and Domain Adaptation.\" arXiv preprint arXiv:2006.15505 (2020).

"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/","title":"roi_cluster_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#purpose","title":"Purpose","text":"

The roi_cluster_fusion is a package for filtering clusters that are less likely to be objects and for overwriting the labels of clusters with those of Regions Of Interest (ROIs) from a 2D object detector.

"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The clusters are projected onto the image planes, and if the ROIs of the clusters overlap with the ROIs from the detector, the labels of the clusters are overwritten with those of the detector's ROIs. Intersection over Union (IoU) is used to determine whether they overlap.
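
A minimal sketch of the 2D IoU between a projected cluster ROI and a detector ROI, using a hypothetical axis-aligned Rect type:

#include <algorithm>

struct Rect { double x_min, y_min, x_max, y_max; };

double iou(const Rect & a, const Rect & b)
{
  const double ix = std::max(0.0, std::min(a.x_max, b.x_max) - std::max(a.x_min, b.x_min));
  const double iy = std::max(0.0, std::min(a.y_max, b.y_max) - std::max(a.y_min, b.y_min));
  const double intersection = ix * iy;
  const double area_a = (a.x_max - a.x_min) * (a.y_max - a.y_min);
  const double area_b = (b.x_max - b.x_min) * (b.y_max - b.y_min);
  return intersection / (area_a + area_b - intersection);  // compared against iou_threshold
}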

"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#input","title":"Input","text":"Name Type Description input tier4_perception_msgs::msg::DetectedObjectsWithFeature clustered pointcloud input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image input/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#output","title":"Output","text":"Name Type Description output tier4_perception_msgs::msg::DetectedObjectsWithFeature labeled cluster pointcloud ~/debug/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description use_iou_x bool calculate IoU only along x-axis use_iou_y bool calculate IoU only along y-axis use_iou bool calculate IoU both along x-axis and y-axis use_cluster_semantic_type bool if false, the labels of clusters are overwritten by UNKNOWN before fusion iou_threshold float the IoU threshold to overwrite a label of clusters with a label of roi rois_number int the number of input rois debug_mode bool If true, subscribe and publish images for visualization."},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/","title":"roi_detected_object_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#purpose","title":"Purpose","text":"

The roi_detected_object_fusion is a package that overwrites the labels of detected objects with those of Regions Of Interest (ROIs) from a 2D object detector.

"},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

In what follows, we describe the algorithm utilized by roi_detected_object_fusion (the meaning of each parameter can be found in the Parameters section):

  1. If the existence_probability of a detected object is greater than the threshold, it is accepted without any further processing and published in output.
  2. The remaining detected objects are projected onto image planes, and if the resulting ROIs overlap with the ones from the 2D detector, they are published as fused objects in output. The Intersection over Union (IoU) is used to determine if there are overlaps between the detections from input and the ROIs from input/rois.

The DetectedObject has three possible shape choices/implementations, where the polygon's vertices for each case are defined as follows:

"},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#input","title":"Input","text":"Name Type Description input autoware_auto_perception_msgs::msg::DetectedObjects input detected objects input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes. input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image. input/image_raw[0-7] sensor_msgs::msg::Image images for visualization."},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#output","title":"Output","text":"Name Type Description output autoware_auto_perception_msgs::msg::DetectedObjects detected objects ~/debug/image_raw[0-7] sensor_msgs::msg::Image images for visualization, ~/debug/fused_objects autoware_auto_perception_msgs::msg::DetectedObjects fused detected objects ~/debug/ignored_objects autoware_auto_perception_msgs::msg::DetectedObjects not fused detected objects"},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description rois_number int the number of input rois debug_mode bool If set to true, the node subscribes to the image topic and publishes an image with debug drawings. min_iou_threshold double If the iou between detected objects and rois is greater than min_iou_threshold, the objects are classified as fused. passthrough_lower_bound_probability_threshold double If the existence_probability of a detected object is greater than the threshold, it is published in output. use_roi_probability float If set to true, the algorithm uses existence_probability of ROIs to match with the that of detected objects. roi_probability_threshold double If the existence_probability of ROIs is greater than the threshold, matched detected objects are published in output."},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

POLYGON, which is one of the possible shapes of a DetectedObject, isn't supported yet.

"},{"location":"perception/lidar_apollo_instance_segmentation/","title":"lidar_apollo_instance_segmentation","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#purpose","title":"Purpose","text":"

This node segments 3D pointcloud data from lidar sensors into obstacles, e.g., cars, trucks, bicycles, and pedestrians, based on a CNN-based model and an obstacle clustering method.

"},{"location":"perception/lidar_apollo_instance_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

See the original design by Apollo.

"},{"location":"perception/lidar_apollo_instance_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#input","title":"Input","text":"Name Type Description input/pointcloud sensor_msgs/PointCloud2 Pointcloud data from lidar sensors"},{"location":"perception/lidar_apollo_instance_segmentation/#output","title":"Output","text":"Name Type Description output/labeled_clusters tier4_perception_msgs/DetectedObjectsWithFeature Detected objects with labeled pointcloud cluster. debug/instance_pointcloud sensor_msgs/PointCloud2 Segmented pointcloud for visualization."},{"location":"perception/lidar_apollo_instance_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#node-parameters","title":"Node Parameters","text":"

None

"},{"location":"perception/lidar_apollo_instance_segmentation/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold double 0.8 If the score of a detected object is lower than this value, the object is ignored. range int 60 Half of the length of feature map sides. [m] width int 640 The grid width of feature map. height int 640 The grid height of feature map. engine_file string \"vls-128.engine\" The name of TensorRT engine file for CNN model. prototxt_file string \"vls-128.prototxt\" The name of prototxt file for CNN model. caffemodel_file string \"vls-128.caffemodel\" The name of caffemodel file for CNN model. use_intensity_feature bool true The flag to use intensity feature of pointcloud. use_constant_feature bool false The flag to use direction and distance feature of pointcloud. target_frame string \"base_link\" Pointcloud data is transformed into this frame. z_offset int 2 z offset from target frame. [m]"},{"location":"perception/lidar_apollo_instance_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

There is no training code for the CNN model.

"},{"location":"perception/lidar_apollo_instance_segmentation/#note","title":"Note","text":"

This package makes use of three external code bases. The trained files are provided by Apollo and are automatically downloaded when you build.

Original URL

Supported lidars are Velodyne 16, 64, and 128, but Velodyne 32 and other lidars can also be used with good accuracy.

  1. apollo 3D Obstacle Perception description

    /******************************************************************************\n* Copyright 2017 The Apollo Authors. All Rights Reserved.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n* http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*****************************************************************************/\n
  2. tensorRTWrapper : It is used under the lib directory.

    MIT License\n\nCopyright (c) 2018 lewes6369\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n
  3. autoware_perception description

    /*\n* Copyright 2018-2019 Autoware Foundation. All rights reserved.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n*     http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n
"},{"location":"perception/lidar_apollo_instance_segmentation/#special-thanks","title":"Special thanks","text":""},{"location":"perception/lidar_apollo_instance_segmentation/lib/","title":"Index","text":""},{"location":"perception/lidar_apollo_instance_segmentation/lib/#note","title":"Note","text":"

This library is customized. Original repository: https://github.com/lewes6369/tensorRTWrapper

MIT License\n\nCopyright (c) 2018 lewes6369\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n
"},{"location":"perception/lidar_apollo_segmentation_tvm/design/lidar-segmentation-design/","title":"Macro Syntax Error","text":"

File: perception/lidar_apollo_segmentation_tvm/design/lidar-segmentation-design.md

Line 1 in Markdown file: Missing end of comment tag

# lidar_apollo_segmentation_tvm {#lidar-apollo-segmentation-tvm-design}\n

"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/design/lidar-segmentation-nodes-design/","title":"Macro Syntax Error","text":"

File: perception/lidar_apollo_segmentation_tvm_nodes/design/lidar-segmentation-nodes-design.md

Line 1 in Markdown file: Missing end of comment tag

# lidar_apollo_segmentation_tvm_nodes {#lidar-apollo-segmentation-tvm-nodes-design}\n

"},{"location":"perception/lidar_centerpoint/","title":"lidar_centerpoint","text":""},{"location":"perception/lidar_centerpoint/#purpose","title":"Purpose","text":"

lidar_centerpoint is a package for detecting dynamic 3D objects.

"},{"location":"perception/lidar_centerpoint/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

In this implementation, CenterPoint [1] uses a PointPillars-based [2] network and performs inference with TensorRT.

We trained the models using https://github.com/open-mmlab/mmdetection3d.

"},{"location":"perception/lidar_centerpoint/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/lidar_centerpoint/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs::msg::PointCloud2 input pointcloud"},{"location":"perception/lidar_centerpoint/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_auto_perception_msgs::msg::DetectedObjects detected objects debug/cyclic_time_ms tier4_debug_msgs::msg::Float64Stamped cyclic time (msg) debug/processing_time_ms tier4_debug_msgs::msg::Float64Stamped processing time (ms)"},{"location":"perception/lidar_centerpoint/#parameters","title":"Parameters","text":""},{"location":"perception/lidar_centerpoint/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold float 0.4 detected objects with score less than threshold are ignored densification_world_frame_id string map the world frame id to fuse multi-frame pointcloud densification_num_past_frames int 1 the number of past frames to fuse with the current frame trt_precision string fp16 TensorRT inference precision: fp32 or fp16 encoder_onnx_path string \"\" path to VoxelFeatureEncoder ONNX file encoder_engine_path string \"\" path to VoxelFeatureEncoder TensorRT Engine file head_onnx_path string \"\" path to DetectionHead ONNX file head_engine_path string \"\" path to DetectionHead TensorRT Engine file nms_iou_target_class_names list[string] - target classes for IoU-based Non Maximum Suppression nms_iou_search_distance_2d double - If two objects are farther than the value, NMS isn't applied. nms_iou_threshold double - IoU threshold for the IoU-based Non Maximum Suppression build_only bool false shutdown the node after TensorRT engine file is built"},{"location":"perception/lidar_centerpoint/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/lidar_centerpoint/#trained-models","title":"Trained Models","text":"

You can download the trained models in ONNX format from the links below.

CenterPoint was trained on nuScenes (~110k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs. CenterPoint tiny was trained on Argoverse 2 (~28k lidar frames) [9] and TIER IV's internal database (~11k lidar frames) for 20 epochs.

"},{"location":"perception/lidar_centerpoint/#standalone-inference-and-visualization","title":"Standalone inference and visualization","text":"

In addition to its use as a standard ROS node, lidar_centerpoint can also be used to perform inference in an isolated manner. To do so, execute the following launcher, where pcd_path is the path of the pointcloud to be used for inference.

ros2 launch lidar_centerpoint single_inference_lidar_centerpoint.launch.xml pcd_path:=test_pointcloud.pcd detections_path:=test_detections.ply\n

lidar_centerpoint generates a ply file in the provided detections_path, which contains the detections as triangle meshes. These detections can be visualized by most 3D tools, but we also integrate a visualization UI using Open3D which is launched alongside lidar_centerpoint.

"},{"location":"perception/lidar_centerpoint/#changelog","title":"Changelog","text":""},{"location":"perception/lidar_centerpoint/#v1-20220706","title":"v1 (2022/07/06)","text":"Name URLs Description centerpoint pts_voxel_encoder pts_backbone_neck_head There is a single change due to the limitation in the implementation of this package. num_filters=[32, 32] of PillarFeatureNet centerpoint_tiny pts_voxel_encoder pts_backbone_neck_head The same model as default of v0.

These changes are compared with this configuration.

"},{"location":"perception/lidar_centerpoint/#v0-20211203","title":"v0 (2021/12/03)","text":"Name URLs Description default pts_voxel_encoder pts_backbone_neck_head There are two changes from the original CenterPoint architecture. num_filters=[32] of PillarFeatureNet and ds_layer_strides=[2, 2, 2] of RPN"},{"location":"perception/lidar_centerpoint/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/lidar_centerpoint/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/lidar_centerpoint/#referencesexternal-links","title":"References/External links","text":"

[1] Yin, Tianwei, Xingyi Zhou, and Philipp Kr\u00e4henb\u00fchl. \"Center-based 3d object detection and tracking.\" arXiv preprint arXiv:2006.11275 (2020).

[2] Lang, Alex H., et al. \"Pointpillars: Fast encoders for object detection from point clouds.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.

[3] https://github.com/tianweiy/CenterPoint

[4] https://github.com/open-mmlab/mmdetection3d

[5] https://github.com/open-mmlab/OpenPCDet

[6] https://github.com/yukkysaito/autoware_perception

[7] https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars

[8] https://www.nuscenes.org/nuscenes

[9] https://www.argoverse.org/av2.html

"},{"location":"perception/lidar_centerpoint/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/","title":"Run lidar_centerpoint and lidar_centerpoint-tiny simultaneously","text":"

This tutorial shows the results of the centerpoint and centerpoint_tiny models simultaneously, making it easier to visualize and compare their performance.

"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#setup-development-environment","title":"Setup Development Environment","text":"

Follow the steps in the Source Installation (link) in the Autoware documentation.

If the Autoware build fails due to a lack of memory, it is recommended to build Autoware sequentially.

Source the ROS 2 Galactic setup script.

source /opt/ros/galactic/setup.bash\n

Build the entire autoware repository.

colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --parallel-workers=1\n

Alternatively, you can limit the number of CPU cores used and build only one package.

export MAKEFLAGS=\"-j 4\" && MAKE_JOBS=4 colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --parallel-workers 1 --packages-select PACKAGE_NAME\n

Source the package.

source install/setup.bash\n
"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#data-preparation","title":"Data Preparation","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#using-rosbag-dataset","title":"Using rosbag dataset","text":"
ros2 bag play /YOUR/ROSBAG/PATH/ --clock 100\n

Don't forget to add the clock option in order to synchronize the two rviz displays.

You can also use the sample rosbag provided by autoware here.

If you want to merge several rosbags into one, you can refer to this tool.

"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#using-realtime-lidar-dataset","title":"Using realtime LiDAR dataset","text":"

Set up your Ethernet connection according to sections 1.1 - 1.3 on this website.

Download the Velodyne ROS driver.

git clone -b ros2 https://github.com/ros-drivers/velodyne.git\n

Source the ROS 2 Galactic setup script.

source /opt/ros/galactic/setup.bash\n

Compile the Velodyne driver.

cd velodyne\nrosdep install -y --from-paths . --ignore-src --rosdistro $ROS_DISTRO\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

Edit the configuration file. Specify the LiDAR device IP address in ./velodyne_driver/config/VLP32C-velodyne_driver_node-params.yaml

velodyne_driver_node:\n  ros__parameters:\n    device_ip: 192.168.1.201  # change to your LiDAR device IP address\n    gps_time: false\n    time_offset: 0.0\n    enabled: true\n    read_once: false\n    read_fast: false\n    repeat_delay: 0.0\n    frame_id: velodyne\n    model: 32C\n    rpm: 600.0\n    port: 2368\n

Launch the velodyne driver.

# Terminal 1\nros2 launch velodyne_driver velodyne_driver_node-VLP32C-launch.py\n

Launch the velodyne_pointcloud.

# Terminal 2\nros2 launch velodyne_pointcloud velodyne_convert_node-VLP32C-launch.py\n

Point cloud data will be available on the topic /velodyne_points. You can check it with ros2 topic echo /velodyne_points.

Check this website if you encounter any unexpected issues.

"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#launch-file-setting","title":"Launch file setting","text":"

There are several fields to check in centerpoint_vs_centerpoint-tiny.launch.xml before running lidar_centerpoint.

"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#run-centerpoint-and-centerpoint-tiny-simultaneously","title":"Run CenterPoint and CenterPoint-tiny simultaneously","text":"

Run

ros2 launch lidar_centerpoint centerpoint_vs_centerpoint-tiny.launch.xml\n

Then you will see two rviz windows appear immediately. On the left is the result for lidar centerpoint tiny, and on the right is the result for lidar centerpoint.

"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#troubleshooting","title":"Troubleshooting","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#bounding-box-blink-on-rviz","title":"Bounding Box blink on rviz","text":"

To avoid bounding boxes blinking in rviz, you can extend the bounding box marker lifetime.

Set marker_ptr->lifetime and marker.lifetime to a longer lifetime.

Make sure to rebuild packages after any change.

"},{"location":"perception/lidar_centerpoint_tvm/","title":"lidar_centerpoint_tvm","text":""},{"location":"perception/lidar_centerpoint_tvm/#design","title":"Design","text":""},{"location":"perception/lidar_centerpoint_tvm/#usage","title":"Usage","text":"

lidar_centerpoint_tvm is a package for detecting dynamic 3D objects using a TVM-compiled CenterPoint module for different backends. To use this package, replace lidar_centerpoint with lidar_centerpoint_tvm in the perception launch files (for example, in lidar_based_detection.launch.xml if lidar-based detection is chosen).

"},{"location":"perception/lidar_centerpoint_tvm/#neural-network","title":"Neural network","text":"

This package will not build without a neural network for its inference. The network is provided by the tvm_utility package. See its design page for more information on how to enable downloading pre-compiled networks (by setting the DOWNLOAD_ARTIFACTS cmake variable), or how to handle user-compiled networks.

"},{"location":"perception/lidar_centerpoint_tvm/#backend","title":"Backend","text":"

The backend used for the inference can be selected by setting the lidar_centerpoint_tvm_BACKEND cmake variable. The currently available options are llvm for a CPU backend, and vulkan or opencl for a GPU backend. It defaults to llvm.

"},{"location":"perception/lidar_centerpoint_tvm/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/lidar_centerpoint_tvm/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs::msg::PointCloud2 input pointcloud"},{"location":"perception/lidar_centerpoint_tvm/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_auto_perception_msgs::msg::DetectedObjects detected objects debug/cyclic_time_ms tier4_debug_msgs::msg::Float64Stamped cyclic time (msg) debug/processing_time_ms tier4_debug_msgs::msg::Float64Stamped processing time (ms)"},{"location":"perception/lidar_centerpoint_tvm/#parameters","title":"Parameters","text":""},{"location":"perception/lidar_centerpoint_tvm/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold float 0.1 detected objects with score less than threshold are ignored densification_world_frame_id string map the world frame id to fuse multi-frame pointcloud densification_num_past_frames int 1 the number of past frames to fuse with the current frame"},{"location":"perception/lidar_centerpoint_tvm/#bounding-box","title":"Bounding Box","text":"

The lidar segmentation node establishes a bounding box for the detected obstacles, using the L-fit method of fitting a bounding box to a cluster.

"},{"location":"perception/lidar_centerpoint_tvm/#limitation-and-known-issue","title":"Limitation and Known Issue","text":"

Due to an accuracy issue with the CenterPoint model, the vulkan backend cannot be used at the moment. As for the llvm backend, real-time performance cannot be achieved.

"},{"location":"perception/lidar_centerpoint_tvm/#scatter-implementation","title":"Scatter Implementation","text":"

The scatter function can be implemented using either TVMScript or C++. For the C++ implementation, please refer to https://github.com/angry-crab/autoware.universe/blob/c020419fe52e359287eccb1b77e93bdc1a681e24/perception/lidar_centerpoint_tvm/lib/network/scatter.cpp#L65

"},{"location":"perception/lidar_centerpoint_tvm/#reference","title":"Reference","text":"

[1] Yin, Tianwei, Xingyi Zhou, and Philipp Kr\u00e4henb\u00fchl. \"Center-based 3d object detection and tracking.\" arXiv preprint arXiv:2006.11275 (2020).

[2] Lang, Alex H., et al. \"Pointpillars: Fast encoders for object detection from point clouds.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.

[3] https://github.com/tianweiy/CenterPoint

[4] https://github.com/Abraham423/CenterPoint

[5] https://github.com/open-mmlab/OpenPCDet

"},{"location":"perception/lidar_centerpoint_tvm/#related-issues","title":"Related issues","text":""},{"location":"perception/lidar_centerpoint_tvm/#908-run-lidar-centerpoint-with-tvm","title":"908: Run Lidar Centerpoint with TVM","text":""},{"location":"perception/map_based_prediction/","title":"map_based_prediction","text":""},{"location":"perception/map_based_prediction/#role","title":"Role","text":"

map_based_prediction is a module to predict the future paths (and their probabilities) of other vehicles and pedestrians according to the shape of the map and the surrounding environment.

"},{"location":"perception/map_based_prediction/#assumptions","title":"Assumptions","text":""},{"location":"perception/map_based_prediction/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/map_based_prediction/#flow-chart","title":"Flow chart","text":""},{"location":"perception/map_based_prediction/#path-prediction-for-road-users","title":"Path prediction for road users","text":""},{"location":"perception/map_based_prediction/#remove-old-object-history","title":"Remove old object history","text":"

Store time-series data of objects in order to determine each vehicle's route and to detect lane changes over a certain duration. The object data contains the object's position, speed, and time information.
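
As a rough C++ sketch of the stored history (the type and field names here are illustrative assumptions, not the package's exact definitions):

#include <deque>
#include <string>

// Hypothetical sketch of the per-object time-series data described above.
struct ObjectSnapshot
{
  double x;      // position in the map frame [m]
  double y;
  double speed;  // object speed [m/s]
  double stamp;  // observation time [s]
};

struct ObjectData
{
  std::string object_id;
  std::deque<ObjectSnapshot> history;  // time series, oldest entry first

  // Drop entries older than object_buffer_time_length seconds.
  void removeOldHistory(double now, double object_buffer_time_length)
  {
    while (!history.empty() && now - history.front().stamp > object_buffer_time_length) {
      history.pop_front();
    }
  }
};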

"},{"location":"perception/map_based_prediction/#get-current-lanelet-and-update-object-history","title":"Get current lanelet and update Object history","text":"

Search one or more lanelets satisfying the following conditions for each target object and store them in the ObjectData.

"},{"location":"perception/map_based_prediction/#get-predicted-reference-path","title":"Get predicted reference path","text":"

The conditions for the lane change detection then become:

//  Left Lane Change Detection\n(d_current_left / d_lane) > dl_ratio_threshold &&\n(d_current_left - d_previous_left) > ddl_threshold\n\n// Right Lane Change Detection\n(d_current_right / d_lane) < dr_ratio_threshold &&\n(d_current_right - d_previous_right) < ddr_threshold\n

where the parameters are explained in the picture below. An example of how to tune the parameters is described later.
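
For reference, the conditions above can be written as a small C++ predicate. The variable names follow the pseudocode and the parameter table below; this is a sketch, not the package's actual code:

// Minimal sketch of the left/right lane change checks shown above.
// d_current_* / d_previous_*: signed distances to the lane bounds now and
// history_time_length seconds ago; d_lane: lane width.
struct LaneChangeParams
{
  double dl_ratio_threshold;  // dist_ratio_threshold_to_left_bound
  double dr_ratio_threshold;  // dist_ratio_threshold_to_right_bound
  double ddl_threshold;       // diff_dist_threshold_to_left_bound
  double ddr_threshold;       // diff_dist_threshold_to_right_bound
};

bool isLeftLaneChange(
  double d_current_left, double d_previous_left, double d_lane, const LaneChangeParams & p)
{
  return (d_current_left / d_lane) > p.dl_ratio_threshold &&
         (d_current_left - d_previous_left) > p.ddl_threshold;
}

bool isRightLaneChange(
  double d_current_right, double d_previous_right, double d_lane, const LaneChangeParams & p)
{
  return (d_current_right / d_lane) < p.dr_ratio_threshold &&
         (d_current_right - d_previous_right) < p.ddr_threshold;
}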

"},{"location":"perception/map_based_prediction/#lane-change-detection-logic","title":"Lane change detection logic","text":"

This is an example of tuning the parameters for lane change detection. The default parameters are set so that a lane change can be detected 1 second before the vehicle crosses the lane boundary. Here, 15 data points from the lane change / non lane change cases are plotted.

On the top plot, blue dots are the distance from the lane boundary one second before the lane change, and red dots are the average distance from the lane boundary when driving straight. From this plot, the most conservative value at which a lane change can be detected for all of these data points can be seen to be -1.1. Note that a larger value makes the decision conservative (a lane change may not be detected) and a lower value makes it aggressive (many false positives occur).

On the bottom plot, blue dots are the lateral velocity one second before the lane change, and red dots are the average of the (absolute) lateral velocity when driving straight. Following the same policy as above, the parameter can be set to 0.5.

"},{"location":"perception/map_based_prediction/#limitations","title":"Limitations","text":""},{"location":"perception/map_based_prediction/#path-prediction-for-crosswalk-users","title":"Path prediction for crosswalk users","text":"

This module treats pedestrians and bicycles as objects using the crosswalk, and outputs predicted paths based on the map and the estimated object velocity, assuming the object has the intention to cross the crosswalk, if the object satisfies at least one of the following conditions:

If there is a reachable crosswalk entry point within the prediction_time_horizon and the object satisfies the above conditions, this module outputs an additional predicted path to cross to the opposite side via the crosswalk entry point.

If the target object is inside the road or crosswalk, this module outputs one or two additional predicted path(s) to reach the exit point of the crosswalk. The number of predicted paths depends on whether the object is moving or not. If the object is moving, this module outputs one predicted path toward the exit point that lies in the direction of the object's movement. On the other hand, if the object has stopped, it is impossible to infer which exit point the object wants to go to, so this module outputs two predicted paths, one toward each exit point.
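
A minimal C++ sketch of this exit-point selection, assuming hypothetical names and using min_crosswalk_user_velocity as the moving/stopped boundary (the real module may use a different criterion):

#include <cmath>
#include <vector>

struct Point2d { double x, y; };

// Hypothetical sketch: a moving object gets one path toward the exit in its
// direction of motion, a stopped object gets paths toward both exits.
std::vector<Point2d> selectCrosswalkExits(
  const Point2d & position, double vx, double vy, double min_crosswalk_user_velocity,
  const Point2d & exit_a, const Point2d & exit_b)
{
  const double speed = std::hypot(vx, vy);
  if (speed < min_crosswalk_user_velocity) {
    // Stopped: the intended exit cannot be inferred, so predict toward both.
    return {exit_a, exit_b};
  }
  // Moving: pick the exit that lies in the direction of movement.
  const double dot_a = (exit_a.x - position.x) * vx + (exit_a.y - position.y) * vy;
  const double dot_b = (exit_b.x - position.x) * vx + (exit_b.y - position.y) * vy;
  return {dot_a >= dot_b ? exit_a : exit_b};
}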

"},{"location":"perception/map_based_prediction/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/map_based_prediction/#input","title":"Input","text":"Name Type Description ~/perception/object_recognition/tracking/objects autoware_auto_perception_msgs::msg::TrackedObjects tracking objects without predicted path. ~/vector_map autoware_auto_mapping_msgs::msg::HADMapBin binary data of Lanelet2 Map."},{"location":"perception/map_based_prediction/#output","title":"Output","text":"Name Type Description ~/objects autoware_auto_perception_msgs::msg::PredictedObjects tracking objects with predicted path. ~/objects_path_markers visualization_msgs::msg::MarkerArray marker for visualization."},{"location":"perception/map_based_prediction/#parameters","title":"Parameters","text":"Parameter Type Description enable_delay_compensation bool flag to enable the time delay compensation for the position of the object prediction_time_horizon double predict time duration for predicted path [s] prediction_sampling_delta_time double sampling time for points in predicted path [s] min_velocity_for_map_based_prediction double apply map-based prediction to the objects with higher velocity than this value min_crosswalk_user_velocity double minimum velocity use in path prediction for crosswalk users dist_threshold_for_searching_lanelet double The threshold of the angle used when searching for the lane to which the object belongs [rad] delta_yaw_threshold_for_searching_lanelet double The threshold of the distance used when searching for the lane to which the object belongs [m] sigma_lateral_offset double Standard deviation for lateral position of objects [m] sigma_yaw_angle double Standard deviation yaw angle of objects [rad] object_buffer_time_length double Time span of object history to store the information [s] history_time_length double Time span of object information used for prediction [s] dist_ratio_threshold_to_left_bound double Conditions for using lane change detection of objects. Distance to the left bound of lanelet. dist_ratio_threshold_to_right_bound double Conditions for using lane change detection of objects. Distance to the right bound of lanelet. diff_dist_threshold_to_left_bound double Conditions for using lane change detection of objects. Differential value of horizontal position of objects. diff_dist_threshold_to_right_bound double Conditions for using lane change detection of objects. Differential value of horizontal position of objects."},{"location":"perception/map_based_prediction/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/map_based_prediction/#reference","title":"Reference","text":"
  1. M. Werling, J. Ziegler, S. Kammel, and S. Thrun, \u201cOptimal trajectory generation for dynamic street scenario in a frenet frame,\u201d IEEE International Conference on Robotics and Automation, Anchorage, Alaska, USA, May 2010.
  2. A. Houenou, P. Bonnifait, V. Cherfaoui, and Wen Yao, \u201cVehicle trajectory prediction based on motion model and maneuver recognition,\u201d in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, nov 2013, pp. 4363-4369.
"},{"location":"perception/multi_object_tracker/","title":"multi_object_tracker","text":""},{"location":"perception/multi_object_tracker/#purpose","title":"Purpose","text":"

The detection results are processed as a time series. The main purpose is to assign IDs and estimate velocity.

"},{"location":"perception/multi_object_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This multi object tracker consists of data association and an EKF.

"},{"location":"perception/multi_object_tracker/#data-association","title":"Data association","text":"

The data association performs maximum score matching, formulated as a min cost max flow problem. In this package, muSSP [1] is used as the solver. In addition, when associating observations to trackers, the data association has gates such as the area of the object from the BEV, the Mahalanobis distance, and the maximum distance, depending on the class label.
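
A simplified C++ sketch of the gating idea (the real gates are class-dependent matrices such as max_dist_matrix and max_area_matrix; the Mahalanobis check is reduced to one dimension here):

#include <cmath>
#include <limits>

// An observation may only be associated with a tracker if it passes all
// class-dependent gates; pairs that fail are excluded from the matching.
struct Gate
{
  double max_dist;         // maximum center distance [m]
  double max_area;         // maximum BEV area [m^2]
  double min_area;         // minimum BEV area [m^2]
  double max_mahalanobis;  // maximum (1D) Mahalanobis distance
};

bool passesGate(
  double center_dist, double bev_area, double innovation, double innovation_std, const Gate & g)
{
  if (center_dist > g.max_dist) return false;
  if (bev_area > g.max_area || bev_area < g.min_area) return false;
  if (std::abs(innovation) / innovation_std > g.max_mahalanobis) return false;
  return true;
}

// Scores for pairs that fail a gate are set so that the solver never picks them.
constexpr double kInvalidScore = -std::numeric_limits<double>::infinity();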

"},{"location":"perception/multi_object_tracker/#ekf-tracker","title":"EKF Tracker","text":"

Models for pedestrians, bicycles (motorcycles), cars, and unknown objects are available. The pedestrian and bicycle trackers are run at the same time, each with its respective EKF model, in order to enable the transition between pedestrian and bicycle tracking. Big vehicles such as trucks and buses are difficult to distinguish from passenger cars and are not stable on their own; therefore, separate models are prepared for passenger cars and big vehicles, and these models are run at the same time with their respective EKF models to ensure stability.

"},{"location":"perception/multi_object_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/multi_object_tracker/#input","title":"Input","text":"Name Type Description ~/input autoware_auto_perception_msgs::msg::DetectedObjects obstacles"},{"location":"perception/multi_object_tracker/#output","title":"Output","text":"Name Type Description ~/output autoware_auto_perception_msgs::msg::TrackedObjects modified obstacles"},{"location":"perception/multi_object_tracker/#parameters","title":"Parameters","text":""},{"location":"perception/multi_object_tracker/#core-parameters","title":"Core Parameters","text":"Name Type Description can_assign_matrix double Assignment table for data association max_dist_matrix double Maximum distance table for data association max_area_matrix double Maximum area table for data association min_area_matrix double Minimum area table for data association max_rad_matrix double Maximum angle table for data association world_frame_id double tracking frame enable_delay_compensation bool Estimate obstacles at current time considering detection delay publish_rate double if enable_delay_compensation is true, how many hertz to output"},{"location":"perception/multi_object_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/multi_object_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/multi_object_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/multi_object_tracker/#evaluation-of-mussp","title":"Evaluation of muSSP","text":"

According to our evaluation, muSSP is faster than normal SSP when the matrix size is more than 100.

Execution time for varying matrix size at 95% sparsity. In real data, the sparsity was often around 95%.

Execution time for varying the sparsity with matrix size 100.

"},{"location":"perception/multi_object_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

This package makes use of external code.

Name License Original Repository muSSP Apache-2.0 https://github.com/yu-lab-vt/muSSP

[1] C. Wang, Y. Wang, Y. Wang, C.-t. Wu, and G. Yu, \u201cmuSSP: Efficient Min-cost Flow Algorithm for Multi-object Tracking,\u201d NeurIPS, 2019

"},{"location":"perception/multi_object_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/object_merger/","title":"object_merger","text":""},{"location":"perception/object_merger/#purpose","title":"Purpose","text":"

object_merger is a package for merging detected objects from two methods by data association.

"},{"location":"perception/object_merger/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The successive shortest path algorithm is used to solve the data association problem (the minimum-cost flow problem). The cost is calculated from the distance between two objects, and gate functions are applied to reset the cost, such as the maximum distance, the maximum area, and the minimum area.
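
As a hedged C++ sketch of this cost-with-gates construction (names are ours; the thresholds correspond conceptually to max_dist_matrix, max_area_matrix, and min_area_matrix):

#include <cmath>

// Gated pairs get a cost so large that the flow solver never selects them.
constexpr double kGatedCost = 1e9;

double associationCost(
  double dx, double dy, double area, double max_dist, double max_area, double min_area)
{
  const double dist = std::hypot(dx, dy);
  if (dist > max_dist || area > max_area || area < min_area) {
    return kGatedCost;  // gate function: reset the cost
  }
  return dist;  // base cost grows with the distance between the two objects
}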

"},{"location":"perception/object_merger/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/object_merger/#input","title":"Input","text":"Name Type Description input/object0 autoware_auto_perception_msgs::msg::DetectedObjects detection objects input/object1 autoware_auto_perception_msgs::msg::DetectedObjects detection objects"},{"location":"perception/object_merger/#output","title":"Output","text":"Name Type Description output/object autoware_auto_perception_msgs::msg::DetectedObjects modified Objects"},{"location":"perception/object_merger/#parameters","title":"Parameters","text":"Name Type Description can_assign_matrix double Assignment table for data association max_dist_matrix double Maximum distance table for data association max_area_matrix double Maximum area table for data association min_area_matrix double Minimum area table for data association max_rad_matrix double Maximum angle table for data association base_link_frame_id double association frame distance_threshold_list std::vector<double> Distance threshold for each class used in judging overlap. The class order depends on ObjectClassification. generalized_iou_threshold double Generalized IoU threshold"},{"location":"perception/object_merger/#tips","title":"Tips","text":""},{"location":"perception/object_merger/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/object_merger/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/object_merger/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/object_merger/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/object_merger/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":"

The data association algorithm was originally the same as that of multi_object_tracker, but the algorithm of multi_object_tracker has since been updated.

"},{"location":"perception/object_range_splitter/","title":"object_range_splitter","text":""},{"location":"perception/object_range_splitter/#purpose","title":"Purpose","text":"

object_range_splitter is a package that divides detected objects into two messages by the distance from the origin.
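
The rule itself is simple; a minimal C++ sketch (with a placeholder object type) could look like:

#include <cmath>
#include <vector>

struct DetectedObject { double x, y; /* ... */ };

// Objects closer to the origin than split_range go to the short-range
// message, the rest to the long-range one.
void splitByRange(
  const std::vector<DetectedObject> & input, double split_range,
  std::vector<DetectedObject> & short_range, std::vector<DetectedObject> & long_range)
{
  for (const auto & obj : input) {
    const bool is_short = std::hypot(obj.x, obj.y) < split_range;
    (is_short ? short_range : long_range).push_back(obj);
  }
}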

"},{"location":"perception/object_range_splitter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/object_range_splitter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/object_range_splitter/#input","title":"Input","text":"Name Type Description input/object autoware_auto_perception_msgs::msg::DetectedObjects detected objects"},{"location":"perception/object_range_splitter/#output","title":"Output","text":"Name Type Description output/long_range_object autoware_auto_perception_msgs::msg::DetectedObjects long range detected objects output/short_range_object autoware_auto_perception_msgs::msg::DetectedObjects short range detected objects"},{"location":"perception/object_range_splitter/#parameters","title":"Parameters","text":"Name Type Description split_range float the distance boundary to divide detected objects [m]"},{"location":"perception/object_range_splitter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/object_range_splitter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/object_range_splitter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/object_range_splitter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/object_range_splitter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/","title":"occupancy_grid_map_outlier_filter","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#purpose","title":"Purpose","text":"

This node is an outlier filter based on an occupancy grid map. Depending on the implementation of the occupancy grid map, it can be regarded as a time-series outlier filter, since the occupancy grid map expresses occupancy probabilities as a time series.

"},{"location":"perception/occupancy_grid_map_outlier_filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"
  1. Use the occupancy grid map to separate point clouds into those with low occupancy probability and those with high occupancy probability.

  2. The point clouds that belong to the low occupancy probability are not necessarily outliers. In particular, the top of a moving object tends to belong to the low occupancy probability. Therefore, if use_radius_search_2d_filter is true, a radius-search 2D outlier filter is applied to the point cloud that is determined to have a low occupancy probability.

    1. For each point with a low occupancy probability, determine whether it is an outlier from the radius (radius_search_2d_filter/search_radius) and the number of surrounding points. In this case, the points referenced are not only the low occupancy probability points but the whole point cloud, including the high occupancy probability points.
    2. The required number of points is scaled by radius_search_2d_filter/min_points_and_distance_ratio and the distance from the base link, and it is clamped between the minimum and maximum numbers of points (see the sketch after this list).
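
A small C++ sketch of the distance-dependent threshold described above (assuming the threshold scales inversely with the distance from the base link, per the parameter description; the exact scaling in the package may differ):

#include <algorithm>

// The required neighbor count shrinks with distance from base_link and is
// clamped to [min_points, max_points]. A low-occupancy point with fewer
// neighbors than this within search_radius is judged to be an outlier.
int requiredNeighbors(
  double distance_from_base_link, double min_points_and_distance_ratio,
  int min_points, int max_points)
{
  const int raw = static_cast<int>(min_points_and_distance_ratio / distance_from_base_link);
  return std::clamp(raw, min_points, max_points);
}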

The following video is a sample. Yellow points have a high occupancy probability, green points have a low occupancy probability but are not outliers, and red points are outliers. At around 0:15 and 1:16 in the first video, a bird crosses the road and is treated as an outlier.

"},{"location":"perception/occupancy_grid_map_outlier_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs/PointCloud2 Obstacle point cloud with ground removed. ~/input/occupancy_grid_map nav_msgs/OccupancyGrid A map in which the probability of the presence of an obstacle is occupancy probability map"},{"location":"perception/occupancy_grid_map_outlier_filter/#output","title":"Output","text":"Name Type Description ~/output/pointcloud sensor_msgs/PointCloud2 Point cloud with outliers removed. trajectory ~/output/debug/outlier/pointcloud sensor_msgs/PointCloud2 Point clouds removed as outliers. ~/output/debug/low_confidence/pointcloud sensor_msgs/PointCloud2 Point clouds that had a low probability of occupancy in the occupancy grid map. However, it is not considered as an outlier. ~/output/debug/high_confidence/pointcloud sensor_msgs/PointCloud2 Point clouds that had a high probability of occupancy in the occupancy grid map. trajectory"},{"location":"perception/occupancy_grid_map_outlier_filter/#parameters","title":"Parameters","text":"Name Type Description map_frame string map frame id base_link_frame string base link frame id cost_threshold int Cost threshold of occupancy grid map (0~100). 100 means 100% probability that there is an obstacle, close to 50 means that it is indistinguishable whether it is an obstacle or free space, 0 means that there is no obstacle. enable_debugger bool Whether to output the point cloud for debugging. use_radius_search_2d_filter bool Whether or not to apply density-based outlier filters to objects that are judged to have low probability of occupancy on the occupancy grid map. radius_search_2d_filter/search_radius float Radius when calculating the density radius_search_2d_filter/min_points_and_distance_ratio float Threshold value of the number of point clouds per radius when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink. radius_search_2d_filter/min_points int Minimum number of point clouds per radius radius_search_2d_filter/max_points int Maximum number of point clouds per radius"},{"location":"perception/occupancy_grid_map_outlier_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/probabilistic_occupancy_grid_map/","title":"probabilistic_occupancy_grid_map","text":""},{"location":"perception/probabilistic_occupancy_grid_map/#purpose","title":"Purpose","text":"

This package outputs the probability of the presence of an obstacle as an occupancy grid map.

"},{"location":"perception/probabilistic_occupancy_grid_map/#settings","title":"Settings","text":"

The occupancy grid map is generated on map_frame, and its grid orientation is fixed.

You may need to choose output_frame, which determines the grid map origin. The default is base_link, but your main LiDAR sensor frame (e.g. velodyne_top in sample_vehicle) may be a better choice.

"},{"location":"perception/probabilistic_occupancy_grid_map/#each-config-paramters","title":"Each config paramters","text":"

Config parameters are managed in config/*.yaml; an outline is shown below.

Ros param name Default value map_frame \"map\" base_link_frame \"base_link\" output_frame \"base_link\" use_height_filter true enable_single_frame_mode false map_length 100.0 [m] map_width 100.0 [m] map_resolution 0.5 [m] input_obstacle_pointcloud true input_obstacle_and_raw_pointcloud true Ros param name Default value map_length 100 [m] map_resolution 0.5 [m] use_height_filter true enable_single_frame_mode false map_frame \"map\" base_link_frame \"base_link\" output_frame \"base_link\""},{"location":"perception/probabilistic_occupancy_grid_map/#referencesexternal-links","title":"References/External links","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/","title":"laserscan based occupancy grid map","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The basic idea is to take a 2D laserscan and ray trace it to create a time-series processed occupancy grid map.

  1. The node takes a laserscan and makes an occupancy grid map from a single frame. The ray trace is done by Bresenham's line algorithm.
  2. Optionally, obstacle point clouds and raw point clouds can be received and reflected in the occupancy grid map. The reason is that a laserscan only uses the most foreground point in the polar coordinate system, so it throws away a lot of information. As a result, the occupancy grid map consists almost entirely of UNKNOWN cells. Therefore, the obstacle point cloud and the raw point cloud are used to reflect what is judged to be the ground and what is judged to be an obstacle in the occupancy grid map. The black and red dots represent raw point clouds, and the red dots represent obstacle point clouds; in other words, the black points are determined to be the ground, and the red points are determined to be obstacles. The gray cells are represented as UNKNOWN cells.

  3. Using the previous occupancy grid map, update the existence probability using a binary Bayesian filter (1). Also, the unobserved cells are time-decayed like the system noise of the Kalman filter (2).

    \\hat{P_{o}} = \\frac{(P_{o} * P_{z})}{(P_{o} * P_{z} + (1 - P_{o}) * \\bar{P_{z}})} \\tag{1}\n
    \\hat{P_{o}} = \\frac{(P_{o} + 0.5 * \\frac{1}{ratio})}{(\\frac{1}{ratio} + 1)} \\tag{2}\n
"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#input","title":"Input","text":"Name Type Description ~/input/laserscan sensor_msgs::LaserScan laserscan ~/input/obstacle_pointcloud sensor_msgs::PointCloud2 obstacle pointcloud ~/input/raw_pointcloud sensor_msgs::PointCloud2 The overall point cloud used to input the obstacle point cloud"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#output","title":"Output","text":"Name Type Description ~/output/occupancy_grid_map nav_msgs::OccupancyGrid occupancy grid map"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#parameters","title":"Parameters","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#node-parameters","title":"Node Parameters","text":"Name Type Description map_frame string map frame base_link_frame string base_link frame input_obstacle_pointcloud bool whether to use the optional obstacle point cloud? If this is true, ~/input/obstacle_pointcloud topics will be received. input_obstacle_and_raw_pointcloud bool whether to use the optional obstacle and raw point cloud? If this is true, ~/input/obstacle_pointcloud and ~/input/raw_pointcloud topics will be received. use_height_filter bool whether to height filter for ~/input/obstacle_pointcloud and ~/input/raw_pointcloud? By default, the height is set to -1~2m. map_length double The length of the map. -100 if it is 50~50[m] map_resolution double The map cell resolution [m]"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

In several places we have modified external code that is written under the BSD-3 license.

"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

Bresenham's line algorithm

"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/","title":"pointcloud based occupancy grid map","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#1st-step","title":"1st step","text":"

First of all, the obstacle and raw pointclouds given as input are transformed into a polar coordinate system and divided into bins per angle_increment. At this time, each point belonging to each bin is stored as range data. In addition, the x, y information in the map coordinate system is also stored for the ray trace on the map coordinate system. Each bin contains the following information for each point:

The following figure shows each of the bins from a side view.

"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#2nd-step","title":"2nd step","text":"

The ray trace is performed in three steps for each cell, using Bresenham's line algorithm.

  1. Initialize freespace to the farthest point of each bin.

  2. Fill in the unknown cells. Assume that the unknown area is behind an obstacle, since the back of the obstacle is a blind spot. Therefore, the unknown cells are assumed to be the cells that are more than a distance margin behind each obstacle point.

    There are three reasons for setting a distance margin.

    • It is unlikely that a point on the ground will be immediately behind an obstacle.
    • The obstacle point cloud is processed and may not match the raw pointcloud.
    • The input may be inaccurate and obstacle points may not be determined as obstacles.
  3. Fill in the occupied cells. The cell where an obstacle point is located is filled in as occupied. In addition, if the distance between obstacle points is less than or equal to the distance margin, the space between them is also filled in as occupied, because the input may be inaccurate and obstacle points may not be determined as obstacles.
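
The ray trace above relies on Bresenham's line algorithm; the following is a self-contained C++ sketch of the standard integer form of the algorithm (not the package's exact implementation):

#include <cstdlib>
#include <vector>

struct Cell { int x, y; };

// Enumerate the grid cells on the segment from (x0, y0) to (x1, y1).
std::vector<Cell> bresenham(int x0, int y0, int x1, int y1)
{
  std::vector<Cell> cells;
  const int dx = std::abs(x1 - x0), dy = -std::abs(y1 - y0);
  const int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
  int err = dx + dy;
  while (true) {
    cells.push_back({x0, y0});
    if (x0 == x1 && y0 == y1) break;
    const int e2 = 2 * err;
    if (e2 >= dy) { err += dy; x0 += sx; }  // step in x
    if (e2 <= dx) { err += dx; y0 += sy; }  // step in y
  }
  return cells;
}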

"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#3rd-step","title":"3rd step","text":"

Using the previous occupancy grid map, update the existence probability using a binary Bayesian filter (1). Also, the unobserved cells are time-decayed like the system noise of the Kalman filter (2).

    \\hat{P_{o}} = \\frac{(P_{o} * P_{z})}{(P_{o} * P_{z} + (1 - P_{o}) * \\bar{P_{z}})} \\tag{1}\n
    \\hat{P_{o}} = \\frac{(P_{o} + 0.5 * \\frac{1}{ratio})}{(\\frac{1}{ratio} + 1)} \\tag{2}\n
"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#input","title":"Input","text":"Name Type Description ~/input/obstacle_pointcloud sensor_msgs::PointCloud2 obstacle pointcloud ~/input/raw_pointcloud sensor_msgs::PointCloud2 The overall point cloud used to input the obstacle point cloud"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#output","title":"Output","text":"Name Type Description ~/output/occupancy_grid_map nav_msgs::OccupancyGrid occupancy grid map"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#parameters","title":"Parameters","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#node-parameters","title":"Node Parameters","text":"Name Type Description map_frame string map frame base_link_frame string base_link frame use_height_filter bool whether to height filter for ~/input/obstacle_pointcloud and ~/input/raw_pointcloud? By default, the height is set to -1~2m. map_length double The length of the map. -100 if it is 50~50[m] map_resolution double The map cell resolution [m]"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

In several places we have modified external code that is written under the BSD-3 license.

"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/radar_fusion_to_detected_object/","title":"radar_fusion_to_detected_object","text":"

This package contains a sensor fusion module for radar-detected objects and 3D detected objects. The fusion node can:

"},{"location":"perception/radar_fusion_to_detected_object/#core-algorithm","title":"Core algorithm","text":"

The documentation of the core algorithm is here.

"},{"location":"perception/radar_fusion_to_detected_object/#parameters-for-sensor-fusion","title":"Parameters for sensor fusion","text":"Name Type Description Default value bounding_box_margin double The distance to extend the 2D bird's-eye view Bounding Box on each side. This distance is used as a threshold to find radar centroids falling inside the extended box. [m] 2.0 split_threshold_velocity double The object's velocity threshold to decide to split for two objects from radar information (currently not implemented) [m/s] 5.0 threshold_yaw_diff double The yaw orientation threshold. If $ \\vert \\theta {ob} - \\theta {ra} \\vert < thresholdyaw_diff $ attached to radar information include estimated velocity, where $ \\theta $ is yaw angle from 3d detected object, $ \\theta_ {ra} $ is yaw angle from radar object. [rad] 0.35"},{"location":"perception/radar_fusion_to_detected_object/#weight-parameters-for-velocity-estimation","title":"Weight parameters for velocity estimation","text":"

To tune these weight parameters, please see the document for details.

Name Type Description Default value velocity_weight_average double The twist coefficient of average twist of radar data in velocity estimation. 0.0 velocity_weight_median double The twist coefficient of median twist of radar data in velocity estimation. 0.0 velocity_weight_min_distance double The twist coefficient of radar data nearest to the center of bounding box in velocity estimation. 1.0 velocity_weight_target_value_average double The twist coefficient of target value weighted average in velocity estimation. Target value is amplitude if using radar pointcloud. Target value is probability if using radar objects. 0.0 velocity_weight_target_value_top double The twist coefficient of top target value radar data in velocity estimation. Target value is amplitude if using radar pointcloud. Target value is probability if using radar objects. 0.0"},{"location":"perception/radar_fusion_to_detected_object/#parameters-for-fixed-object-information","title":"Parameters for fixed object information","text":"Name Type Description Default value convert_doppler_to_twist bool Convert doppler velocity to twist using the yaw information of a detected object. false threshold_probability float If the probability of an output object is lower than this parameter, and the output object does not have radar points/objects, then delete the object. 0.4 compensate_probability bool If this parameter is true, compensate probability of objects to threshold probability. false"},{"location":"perception/radar_fusion_to_detected_object/#radar_object_fusion_to_detected_object","title":"radar_object_fusion_to_detected_object","text":"

Sensor fusion with radar objects and a detected object.

"},{"location":"perception/radar_fusion_to_detected_object/#how-to-launch","title":"How to launch","text":"
ros2 launch radar_fusion_to_detected_object radar_object_to_detected_object.launch.xml\n
"},{"location":"perception/radar_fusion_to_detected_object/#input","title":"Input","text":"Name Type Description ~/input/objects autoware_auto_perception_msgs/msg/DetectedObject.msg 3D detected objects. ~/input/radar_objects autoware_auto_perception_msgs/msg/TrackedObjects.msg Radar objects. Note that frame_id need to be same as ~/input/objects"},{"location":"perception/radar_fusion_to_detected_object/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected object with twist. ~/debug/low_confidence_objects autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected object that doesn't output as ~/output/objects because of low confidence"},{"location":"perception/radar_fusion_to_detected_object/#parameters","title":"Parameters","text":"Name Type Description Default value update_rate_hz double The update rate [hz]. 20.0"},{"location":"perception/radar_fusion_to_detected_object/#radar_scan_fusion_to_detected_object-tbd","title":"radar_scan_fusion_to_detected_object (TBD)","text":"

TBD

"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/","title":"Algorithm","text":""},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#common-algorithm","title":"Common Algorithm","text":""},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#1-link-between-3d-bounding-box-and-radar-data","title":"1. Link between 3d bounding box and radar data","text":"

Choose radar pointclouds/objects that fall within the 3D bounding box from the lidar-based detection, with margin space, from the bird's-eye view.

"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#2-feature-support-split-the-object-going-in-a-different-direction","title":"2. [Feature support] Split the object going in a different direction","text":""},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#3-estimate-twist-of-object","title":"3. Estimate twist of object","text":"

Estimate the twist from the chosen radar pointclouds/objects using the twist and the target value (the target value is amplitude if using radar pointclouds, or probability if using radar objects). First, the estimation function calculates:

Second, the estimation function calculates the weighted average of these lists. Third, the estimated twist information is attached to the object.
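
A rough C++ sketch of this weighted average, where each v_* is one of the velocity estimates listed above and each w_* corresponds to one of the weight parameters (velocity_weight_average, velocity_weight_median, velocity_weight_min_distance, velocity_weight_target_value_average, velocity_weight_target_value_top):

// Combine the candidate velocity estimates with their configured weights.
double estimateVelocity(
  double v_average, double v_median, double v_min_distance,
  double v_target_value_average, double v_target_value_top,
  double w_average, double w_median, double w_min_distance,
  double w_target_value_average, double w_target_value_top)
{
  const double w_sum =
    w_average + w_median + w_min_distance + w_target_value_average + w_target_value_top;
  if (w_sum <= 0.0) {
    return 0.0;  // no weights configured
  }
  return (w_average * v_average + w_median * v_median + w_min_distance * v_min_distance +
          w_target_value_average * v_target_value_average +
          w_target_value_top * v_target_value_top) / w_sum;
}

With the default weights (velocity_weight_min_distance = 1.0, all others 0.0), this reduces to taking the velocity of the radar return nearest to the center of the bounding box.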

"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#4-feature-support-option-convert-doppler-velocity-to-twist","title":"4. [Feature support] [Option] Convert doppler velocity to twist","text":"

If the twist information of the radar is Doppler velocity, it is converted into twist using the yaw angle of the DetectedObject. Because a radar pointcloud has only Doppler velocity information, radar pointcloud fusion should use this feature. On the other hand, because radar objects have twist information, radar object fusion should not use this feature.
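
A hedged C++ sketch of the conversion idea: assuming the object moves along its own heading, the measured Doppler (radial) velocity is the projection of the true speed onto the sensor ray, so the speed can be recovered by dividing by the cosine of the angle between the heading and the ray. The exact formulation in the package may differ.

#include <cmath>

double dopplerToSpeed(double doppler_velocity, double object_yaw, double ray_azimuth)
{
  const double cos_diff = std::cos(object_yaw - ray_azimuth);
  if (std::abs(cos_diff) < 1e-3) {
    return 0.0;  // near-perpendicular ray: the conversion is ill-conditioned
  }
  return doppler_velocity / cos_diff;
}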

"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#5-delete-objects-with-low-probability","title":"5. Delete objects with low probability","text":""},{"location":"perception/radar_tracks_msgs_converter/","title":"radar_tracks_msgs_converter","text":"

This package converts radar_msgs/msg/RadarTracks into autoware_auto_perception_msgs/msg/TrackedObject.

"},{"location":"perception/radar_tracks_msgs_converter/#design","title":"Design","text":""},{"location":"perception/radar_tracks_msgs_converter/#input-output","title":"Input / Output","text":""},{"location":"perception/radar_tracks_msgs_converter/#parameters","title":"Parameters","text":""},{"location":"perception/radar_tracks_msgs_converter/#note","title":"Note","text":"

This package converts the label from radar_msgs/msg/RadarTrack.msg to the Autoware label. The label IDs are defined as below.

Label RadarTrack Autoware UNKNOWN 32000 0 CAR 32001 1 TRUCK 32002 2 BUS 32003 3 TRAILER 32004 4 MOTORCYCLE 32005 5 BICYCLE 32006 6 PEDESTRIAN 32007 7 "},{"location":"perception/shape_estimation/","title":"shape_estimation","text":""},{"location":"perception/shape_estimation/#purpose","title":"Purpose","text":"

This node calculates a refined object shape (bounding box, cylinder, convex hull) in which a pointcloud cluster fits according to a label.

"},{"location":"perception/shape_estimation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/shape_estimation/#fitting-algorithms","title":"Fitting algorithms","text":" "},{"location":"perception/shape_estimation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/shape_estimation/#input","title":"Input","text":"Name Type Description input tier4_perception_msgs::msg::DetectedObjectsWithFeature detected objects with labeled cluster"},{"location":"perception/shape_estimation/#output","title":"Output","text":"Name Type Description output/objects autoware_auto_perception_msgs::msg::DetectedObjects detected objects with refined shape"},{"location":"perception/shape_estimation/#parameters","title":"Parameters","text":"Name Type Default Value Description use_corrector bool true The flag to apply rule-based filter use_filter bool true The flag to apply rule-based corrector use_vehicle_reference_yaw bool true The flag to use vehicle reference yaw for corrector"},{"location":"perception/shape_estimation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD

"},{"location":"perception/shape_estimation/#referencesexternal-links","title":"References/External links","text":"

L-shape fitting implementation of the paper:

@conference{Zhang-2017-26536,\nauthor = {Xiao Zhang and Wenda Xu and Chiyu Dong and John M. Dolan},\ntitle = {Efficient L-Shape Fitting for Vehicle Detection Using Laser Scanners},\nbooktitle = {2017 IEEE Intelligent Vehicles Symposium},\nyear = {2017},\nmonth = {June},\nkeywords = {autonomous driving, laser scanner, perception, segmentation},\n}\n
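
As a rough, simplified C++ sketch of the search-based L-shape fitting idea from the paper above (the real implementation's closeness criterion and angle resolution may differ):

#include <algorithm>
#include <cmath>
#include <vector>

struct Point2d { double x, y; };

// Sweep the rectangle heading theta over [0, pi/2), project the cluster onto
// the two orthogonal edge directions, and keep the heading that maximizes a
// closeness criterion (points should hug the rectangle edges).
double fitLShapeYaw(const std::vector<Point2d> & cluster, double angle_step = M_PI / 180.0)
{
  double best_theta = 0.0, best_score = -1e18;
  for (double theta = 0.0; theta < M_PI / 2.0; theta += angle_step) {
    const double c = std::cos(theta), s = std::sin(theta);
    double min_c1 = 1e18, max_c1 = -1e18, min_c2 = 1e18, max_c2 = -1e18;
    for (const auto & p : cluster) {
      const double c1 = c * p.x + s * p.y;   // projection on edge direction 1
      const double c2 = -s * p.x + c * p.y;  // projection on edge direction 2
      min_c1 = std::min(min_c1, c1); max_c1 = std::max(max_c1, c1);
      min_c2 = std::min(min_c2, c2); max_c2 = std::max(max_c2, c2);
    }
    double score = 0.0;
    for (const auto & p : cluster) {
      const double c1 = c * p.x + s * p.y;
      const double c2 = -s * p.x + c * p.y;
      // distance of the point to its nearest rectangle edge (smaller is better)
      const double d1 = std::min(max_c1 - c1, c1 - min_c1);
      const double d2 = std::min(max_c2 - c2, c2 - min_c2);
      score -= std::min(d1, d2);  // simplified closeness criterion
    }
    if (score > best_score) { best_score = score; best_theta = theta; }
  }
  return best_theta;
}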
"},{"location":"perception/tensorrt_yolo/","title":"tensorrt_yolo","text":""},{"location":"perception/tensorrt_yolo/#purpose","title":"Purpose","text":"

This package detects 2D bounding boxes for target objects, e.g., cars, trucks, bicycles, and pedestrians, on an image based on the YOLO (You Only Look Once) model.

"},{"location":"perception/tensorrt_yolo/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/tensorrt_yolo/#cite","title":"Cite","text":"

yolov3

Redmon, J., & Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.

yolov4

Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.

yolov5

Jocher, G., et al. (2021). ultralytics/yolov5: v6.0 - YOLOv5n 'Nano' models, Roboflow integration, TensorFlow export, OpenCV DNN support (v6.0). Zenodo. https://doi.org/10.5281/zenodo.5563715

"},{"location":"perception/tensorrt_yolo/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/tensorrt_yolo/#input","title":"Input","text":"Name Type Description in/image sensor_msgs/Image The input image"},{"location":"perception/tensorrt_yolo/#output","title":"Output","text":"Name Type Description out/objects tier4_perception_msgs/DetectedObjectsWithFeature The detected objects with 2D bounding boxes out/image sensor_msgs/Image The image with 2D bounding boxes for visualization"},{"location":"perception/tensorrt_yolo/#parameters","title":"Parameters","text":""},{"location":"perception/tensorrt_yolo/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description anchors double array [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0] The anchors to create bounding box candidates scale_x_y double array [1.0, 1.0, 1.0] The scale parameter to eliminate grid sensitivity score_thresh double 0.1 If the objectness score is less than this value, the object is ignored in yolo layer. iou_thresh double 0.45 The iou threshold for NMS method detections_per_im int 100 The maximum detection number for one frame use_darknet_layer bool true The flag to use yolo layer in darknet ignore_thresh double 0.5 If the output score is less than this value, ths object is ignored."},{"location":"perception/tensorrt_yolo/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description onnx_file string \"\" The onnx file name for yolo model engine_file string \"\" The tensorrt engine file name for yolo model label_file string \"\" The label file with label names for detected objects written on it calib_image_directory string \"\" The directory name including calibration images for int8 inference calib_cache_file string \"\" The calibration cache file for int8 inference mode string \"FP32\" The inference mode: \"FP32\", \"FP16\", \"INT8\" gpu_id int 0 GPU device ID that runs the model"},{"location":"perception/tensorrt_yolo/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This package includes multiple licenses.

"},{"location":"perception/tensorrt_yolo/#onnx-model","title":"Onnx model","text":"

All YOLO ONNX models are converted from the officially trained models. If you need information about training datasets and conditions, please refer to the official repositories.

All models are downloaded automatically when building. When launching the node with a model for the first time, the model is automatically converted to TensorRT, although this may take some time.

"},{"location":"perception/tensorrt_yolo/#yolov3","title":"YOLOv3","text":"

YOLOv3: Converted from darknet weight file and conf file.

"},{"location":"perception/tensorrt_yolo/#yolov4","title":"YOLOv4","text":"

YOLOv4: Converted from darknet weight file and conf file.

YOLOv4-tiny: Converted from darknet weight file and conf file.

"},{"location":"perception/tensorrt_yolo/#yolov5","title":"YOLOv5","text":"

Refer to this guide

"},{"location":"perception/tensorrt_yolo/#limitations","title":"Limitations","text":""},{"location":"perception/tensorrt_yolo/#reference-repositories","title":"Reference repositories","text":" "},{"location":"perception/tensorrt_yolox/","title":"tensorrt_yolox","text":""},{"location":"perception/tensorrt_yolox/#purpose","title":"Purpose","text":"

This package detects target objects, e.g., cars, trucks, bicycles, and pedestrians, in an image based on the YOLOX model.

"},{"location":"perception/tensorrt_yolox/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/tensorrt_yolox/#cite","title":"Cite","text":"

Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, \"YOLOX: Exceeding YOLO Series in 2021\", arXiv preprint arXiv:2107.08430, 2021 [ref]

"},{"location":"perception/tensorrt_yolox/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/tensorrt_yolox/#input","title":"Input","text":"Name Type Description in/image sensor_msgs/Image The input image"},{"location":"perception/tensorrt_yolox/#output","title":"Output","text":"Name Type Description out/objects tier4_perception_msgs/DetectedObjectsWithFeature The detected objects with 2D bounding boxes out/image sensor_msgs/Image The image with 2D bounding boxes for visualization"},{"location":"perception/tensorrt_yolox/#parameters","title":"Parameters","text":""},{"location":"perception/tensorrt_yolox/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold float 0.3 If the objectness score is less than this value, the object is ignored in yolox layer. nms_threshold float 0.7 The IoU threshold for NMS method

NOTE: These two parameters are only valid for the "plain" model (described later).

"},{"location":"perception/tensorrt_yolox/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description model_path string \"\" The onnx file name for yolox model label_path string \"\" The label file with label names for detected objects written on it trt_precision string \"fp32\" The inference mode: \"fp32\", \"fp16\", \"int8\" build_only bool false shutdown node after TensorRT engine file is built"},{"location":"perception/tensorrt_yolox/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The label contained in detected 2D bounding boxes (i.e., out/objects) will be one of the following:

If other labels (case insensitive) are contained in the file specified via the label_file parameter, those objects are labeled as UNKNOWN, while their detected rectangles are still drawn in the visualization result (out/image).

"},{"location":"perception/tensorrt_yolox/#onnx-model","title":"Onnx model","text":"

A sample model (named yolox-tiny.onnx) is downloaded automatically during the build process. To accelerate non-maximum suppression (NMS), a common post-processing step after object-detection inference, an EfficientNMS_TRT module is attached after the original YOLOX (tiny) network. The EfficientNMS_TRT module contains fixed values for score_threshold and nms_threshold, hence these parameters are ignored when users specify ONNX models that include this module.

This package accepts both ONNX models with EfficientNMS_TRT attached and models published by the official YOLOX repository (referred to here as "plain" models).

All models are automatically converted to TensorRT format. The converted files are saved in the same directory as the specified ONNX files with a .engine filename extension and are reused on subsequent runs. The conversion process may take a while (typically a few minutes) and the inference process is blocked until the conversion completes, so it will take some time until detection results are published on the first run.
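
The convert-once-and-cache behavior can be illustrated with a minimal Python sketch (this assumes the standard TensorRT Python API and is not the package's actual implementation):

import os\nimport tensorrt as trt\n\ndef load_or_build_engine(onnx_path):\n    # Reuse the cached engine saved next to the ONNX file, if present\n    engine_path = os.path.splitext(onnx_path)[0] + '.engine'\n    logger = trt.Logger(trt.Logger.WARNING)\n    runtime = trt.Runtime(logger)\n    if os.path.exists(engine_path):\n        with open(engine_path, 'rb') as f:\n            return runtime.deserialize_cuda_engine(f.read())\n    # First run: parse the ONNX model and build the engine (may take minutes)\n    builder = trt.Builder(logger)\n    network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))\n    parser = trt.OnnxParser(network, logger)\n    with open(onnx_path, 'rb') as f:\n        parser.parse(f.read())\n    serialized = builder.build_serialized_network(network, builder.create_builder_config())\n    with open(engine_path, 'wb') as f:\n        f.write(serialized)\n    return runtime.deserialize_cuda_engine(serialized)\n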

"},{"location":"perception/tensorrt_yolox/#package-acceptable-model-generation","title":"Package acceptable model generation","text":"

To convert your own model saved in PyTorch's pth format into ONNX, you can use the converter offered by the official repository. For convenience, only the procedure is described below. Please refer to the official documentation for more detail.

"},{"location":"perception/tensorrt_yolox/#for-plain-models","title":"For plain models","text":"
  1. Install dependency

    git clone git@github.com:Megvii-BaseDetection/YOLOX.git\ncd YOLOX\npython3 setup.py develop --user\n
  2. Convert pth into ONNX

    python3 tools/export_onnx.py \\\n--output-name YOUR_YOLOX.onnx \\\n-f YOUR_YOLOX.py \\\n-c YOUR_YOLOX.pth\n
"},{"location":"perception/tensorrt_yolox/#for-efficientnms_trt-embedded-models","title":"For EfficientNMS_TRT embedded models","text":"
  1. Install dependency

    git clone git@github.com:Megvii-BaseDetection/YOLOX.git\ncd YOLOX\npython3 setup.py develop --user\npip3 install git+ssh://git@github.com/wep21/yolox_onnx_modifier.git --user\n
  2. Convert pth into ONNX

    python3 tools/export_onnx.py \\\n--output-name YOUR_YOLOX.onnx \\\n-f YOUR_YOLOX.py \\\n-c YOUR_YOLOX.pth \\\n--decode_in_inference\n
  3. Embed EfficientNMS_TRT to the end of YOLOX

    yolox_onnx_modifier YOUR_YOLOX.onnx -o YOUR_YOLOX_WITH_NMS.onnx\n
"},{"location":"perception/tensorrt_yolox/#label-file","title":"Label file","text":"

A sample label file (named label.txt) is also downloaded automatically during the build process. (NOTE: This file is incompatible with models that output labels for the COCO dataset, e.g., models from the official YOLOX repository.)

This file represents the correspondence between class indices (integers output from the YOLOX network) and class labels (human-readable strings). This package maps class IDs (starting from 0) to labels according to the order of lines in this file.
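
A minimal sketch of this mapping (the file name follows the sample above; the contents are illustrative):

# Class index i maps to the label on line i (0-based)\nwith open('label.txt') as f:\n    labels = [line.strip() for line in f if line.strip()]\nid_to_label = dict(enumerate(labels))\n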

"},{"location":"perception/tensorrt_yolox/#reference-repositories","title":"Reference repositories","text":""},{"location":"perception/traffic_light_classifier/","title":"traffic_light_classifier","text":""},{"location":"perception/traffic_light_classifier/#purpose","title":"Purpose","text":"

traffic_light_classifier is a package for classifying traffic light labels using a cropped image around a traffic light. This package has two classifier models: cnn_classifier and hsv_classifier.

"},{"location":"perception/traffic_light_classifier/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/traffic_light_classifier/#cnn_classifier","title":"cnn_classifier","text":"

Traffic light labels are classified by MobileNetV2. A total of 37,600 TIER IV internal images of Japanese traffic lights (26,300 for training, 6,800 for evaluation, and 4,500 for testing) were used for fine-tuning.

"},{"location":"perception/traffic_light_classifier/#hsv_classifier","title":"hsv_classifier","text":"

Traffic light colors (green, yellow, and red) are classified in the HSV color model.
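
A minimal sketch of this approach, assuming OpenCV and illustrative HSV ranges (the actual thresholds are the *_min_h/s/v and *_max_h/s/v node parameters listed below):

import cv2\nimport numpy as np\n\ndef classify_hsv(bgr_roi, hsv_ranges):\n    # hsv_ranges: {'green': ((min_h, min_s, min_v), (max_h, max_s, max_v)), ...}\n    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)\n    # Count the pixels falling inside each color's HSV range\n    counts = {color: cv2.countNonZero(cv2.inRange(hsv, np.array(lo), np.array(hi)))\n              for color, (lo, hi) in hsv_ranges.items()}\n    # The dominant color wins\n    return max(counts, key=counts.get)\n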

"},{"location":"perception/traffic_light_classifier/#about-label","title":"About Label","text":"

The message type is designed to comply with the unified road signs proposed at the Vienna Convention. This idea has also been proposed in Autoware.Auto.

There are rules for naming the labels that nodes receive. One traffic light is represented by a character string of comma-separated elements: color1-shape1, color2-shape2.

For example, the label for a traffic light showing a simple red light and a red cross must be expressed as "red-circle, red-cross".
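
A minimal parsing sketch for this naming rule (the function name is hypothetical):

def parse_traffic_light_label(label):\n    # 'red-circle, red-cross' -> [('red', 'circle'), ('red', 'cross')]\n    return [tuple(elem.strip().split('-', 1)) for elem in label.split(',')]\n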

These colors and shapes are assigned to the message as follows:

"},{"location":"perception/traffic_light_classifier/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_classifier/#input","title":"Input","text":"Name Type Description ~/input/image sensor_msgs::msg::Image input image ~/input/rois autoware_auto_perception_msgs::msg::TrafficLightRoiArray rois of traffic lights"},{"location":"perception/traffic_light_classifier/#output","title":"Output","text":"Name Type Description ~/output/traffic_signals autoware_auto_perception_msgs::msg::TrafficSignalArray classified signals ~/output/debug/image sensor_msgs::msg::Image image for debugging"},{"location":"perception/traffic_light_classifier/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_classifier/#node-parameters","title":"Node Parameters","text":"Name Type Description classifier_type int if the value is 1, cnn_classifier is used"},{"location":"perception/traffic_light_classifier/#core-parameters","title":"Core Parameters","text":""},{"location":"perception/traffic_light_classifier/#cnn_classifier_1","title":"cnn_classifier","text":"Name Type Description model_file_path str path to the model file label_file_path str path to the label file precision str TensorRT precision, fp16 or int8 input_c str the channel size of an input image input_h str the height of an input image input_w str the width of an input image build_only bool shutdown node after TensorRT engine file is built"},{"location":"perception/traffic_light_classifier/#hsv_classifier_1","title":"hsv_classifier","text":"Name Type Description green_min_h int the minimum hue of green color green_min_s int the minimum saturation of green color green_min_v int the minimum value (brightness) of green color green_max_h int the maximum hue of green color green_max_s int the maximum saturation of green color green_max_v int the maximum value (brightness) of green color yellow_min_h int the minimum hue of yellow color yellow_min_s int the minimum saturation of yellow color yellow_min_v int the minimum value (brightness) of yellow color yellow_max_h int the maximum hue of yellow color yellow_max_s int the maximum saturation of yellow color yellow_max_v int the maximum value (brightness) of yellow color red_min_h int the minimum hue of red color red_min_s int the minimum saturation of red color red_min_v int the minimum value (brightness) of red color red_max_h int the maximum hue of red color red_max_s int the maximum saturation of red color red_max_v int the maximum value (brightness) of red color"},{"location":"perception/traffic_light_classifier/#customization-of-cnn-model","title":"Customization of CNN model","text":"

Currently, in Autoware, MobileNetV2 is used as the CNN classifier by default. The corresponding onnx file is data/traffic_light_classifier_mobilenetv2.onnx. You can also apply other models, such as the ones listed below.

In order to train models and export onnx models, we recommend open-mmlab/mmclassification. Please follow the official documentation to install and experiment with mmclassification. If you run into trouble, the FAQ page may help you.

The following steps are an example of a quick start.

"},{"location":"perception/traffic_light_classifier/#step-0-install-mmcv-and-mim","title":"step 0. Install MMCV and MIM","text":"

NOTE: First of all, install a PyTorch build suitable for your CUDA version (CUDA 11.6 is supported in Autoware).

In order to install a version of mmcv suitable for your CUDA version, install it by specifying the corresponding URL.

# Install mim\n$ pip install -U openmim\n\n# Install mmcv on a machine with CUDA11.6 and PyTorch1.13.0\n$ pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu116/torch1.13/index.html\n
"},{"location":"perception/traffic_light_classifier/#step-1-install-mmclassification","title":"step 1. Install MMClassification","text":"

You can install MMClassification as a Python package or from source.

# As a Python package\n$ pip install mmcls\n\n# From source\n$ git clone https://github.com/open-mmlab/mmclassification.git\n$ cd mmclassification\n$ pip install -v -e .\n
"},{"location":"perception/traffic_light_classifier/#step-2-train-your-model","title":"step 2. Train your model","text":"

Train the model with your experiment configuration file. For details of the config file, see here.

# [] is optional, you can start training from pre-trained checkpoint\n$ mim train mmcls YOUR_CONFIG.py [--resume-from YOUR_CHECKPOINT.pth]\n
"},{"location":"perception/traffic_light_classifier/#step-3-export-onnx-model","title":"step 3. Export onnx model","text":"

To export the onnx model, use mmclassification/tools/deployment/pytorch2onnx.py or open-mmlab/mmdeploy.

cd ~/mmclassification/tools/deployment\npython3 pytorch2onnx.py YOUR_CONFIG.py ...\n

After obtaining your onnx model, update the parameters defined in the launch file (e.g. model_file_path, label_file_path, input_h, input_w...). Note that only labels defined in autoware_auto_perception_msgs::msg::TrafficLight are supported.

"},{"location":"perception/traffic_light_classifier/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/traffic_light_classifier/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/traffic_light_classifier/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/traffic_light_classifier/#referencesexternal-links","title":"References/External links","text":"

[1] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, \"MobileNetV2: Inverted Residuals and Linear Bottlenecks,\" 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.

"},{"location":"perception/traffic_light_classifier/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/traffic_light_map_based_detector/","title":"The traffic_light_map_based_detector Package","text":""},{"location":"perception/traffic_light_map_based_detector/#overview","title":"Overview","text":"

traffic_light_map_based_detector calculates where the traffic lights will appear in the image based on the HD map.

Calibration and vibration errors can be entered as parameters, and the size of the detected RegionOfInterest will change according to the error.

If the node receives route information, it only considers traffic lights on that route. If it receives no route information, it considers all traffic lights within a radius of 200 meters whose angle to the camera is less than 40 degrees.
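
The no-route selection rule can be sketched as follows (the field names and the exact angle definition are assumptions, not taken from the implementation):

import math\n\ndef normalize_angle(a):\n    return (a + math.pi) % (2.0 * math.pi) - math.pi\n\ndef is_candidate(tl_x, tl_y, cam_x, cam_y, cam_yaw):\n    # Within 200 m of the camera ...\n    if math.hypot(tl_x - cam_x, tl_y - cam_y) > 200.0:\n        return False\n    # ... and the angle between the light and the camera is below 40 degrees\n    angle = normalize_angle(math.atan2(tl_y - cam_y, tl_x - cam_x) - cam_yaw)\n    return abs(angle) < math.radians(40.0)\n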

"},{"location":"perception/traffic_light_map_based_detector/#input-topics","title":"Input topics","text":"Name Type Description ~input/vector_map autoware_auto_mapping_msgs::HADMapBin vector map ~input/camera_info sensor_msgs::CameraInfo target camera parameter ~input/route autoware_planning_msgs::LaneletRoute optional: route"},{"location":"perception/traffic_light_map_based_detector/#output-topics","title":"Output topics","text":"Name Type Description ~output/rois autoware_auto_perception_msgs::TrafficLightRoiArray location of traffic lights in image corresponding to the camera info ~debug/markers visualization_msgs::MarkerArray visualization to debug"},{"location":"perception/traffic_light_map_based_detector/#node-parameters","title":"Node parameters","text":"Parameter Type Description max_vibration_pitch double Maximum error in pitch direction. If -5~+5, it will be 10. max_vibration_yaw double Maximum error in yaw direction. If -5~+5, it will be 10. max_vibration_height double Maximum error in height direction. If -5~+5, it will be 10. max_vibration_width double Maximum error in width direction. If -5~+5, it will be 10. max_vibration_depth double Maximum error in depth direction. If -5~+5, it will be 10."},{"location":"perception/traffic_light_selector/","title":"traffic_light_selector","text":""},{"location":"perception/traffic_light_selector/#purpose","title":"Purpose","text":"

This package receives multiple traffic light/signal states and outputs a single traffic signal state for use by the planning component.

"},{"location":"perception/traffic_light_selector/#trafficlightselector","title":"TrafficLightSelector","text":"

A node that merges the traffic light/signal states from image recognition and V2X and provides them to the planning component. It is currently a provisional implementation.

"},{"location":"perception/traffic_light_selector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_selector/#input","title":"Input","text":"Name Type Description ~/sub/vector_map autoware_auto_mapping_msgs/msg/HADMapBin The vector map to get traffic light id relations. ~/sub/traffic_lights autoware_perception_msgs/msg/TrafficLightArray The traffic light state from image recognition."},{"location":"perception/traffic_light_selector/#output","title":"Output","text":"Name Type Description ~/pub/traffic_signals autoware_perception_msgs/msg/TrafficSignalArray The merged traffic signal state."},{"location":"perception/traffic_light_selector/#trafficlightconverter","title":"TrafficLightConverter","text":"

A temporary node that converts the old message type to the new message type.

"},{"location":"perception/traffic_light_selector/#inputs-outputs_1","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_selector/#input_1","title":"Input","text":"Name Type Description ~/sub/traffic_lights autoware_auto_perception_msgs/msg/TrafficSignalArray The state in old message type."},{"location":"perception/traffic_light_selector/#output_1","title":"Output","text":"Name Type Description ~/pub/traffic_lights autoware_perception_msgs/msg/TrafficLightArray The state in new message type."},{"location":"perception/traffic_light_ssd_fine_detector/","title":"traffic_light_ssd_fine_detector","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#purpose","title":"Purpose","text":"

This is a package for traffic light detection using MobileNetV2 and SSDLite.

"},{"location":"perception/traffic_light_ssd_fine_detector/#training-information","title":"Training Information","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#pretrained-model","title":"Pretrained Model","text":"

The model is based on pytorch-ssd and the pretrained model can be downloaded from here.

"},{"location":"perception/traffic_light_ssd_fine_detector/#training-data","title":"Training Data","text":"

The model was fine-tuned on 1,750 TIER IV internal images of Japanese traffic lights.

"},{"location":"perception/traffic_light_ssd_fine_detector/#trained-onnx-model","title":"Trained Onnx model","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Based on the camera image and the global ROI array detected by the map_based_detector node, a CNN-based detection method enables highly accurate traffic light detection.

"},{"location":"perception/traffic_light_ssd_fine_detector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#input","title":"Input","text":"Name Type Description ~/input/image sensor_msgs/Image The full size camera image ~/input/rois autoware_auto_perception_msgs::msg::TrafficLightRoiArray The array of ROIs detected by map_based_detector"},{"location":"perception/traffic_light_ssd_fine_detector/#output","title":"Output","text":"Name Type Description ~/output/rois autoware_auto_perception_msgs::msg::TrafficLightRoiArray The detected accurate rois ~/debug/exe_time_ms tier4_debug_msgs::msg::Float32Stamped The time taken for inference"},{"location":"perception/traffic_light_ssd_fine_detector/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_thresh double 0.7 If the objectness score is less than this value, the object is ignored mean std::vector [0.5,0.5,0.5] Average value of the normalized values of the image data used for training std std::vector [0.5,0.5,0.5] Standard deviation of the normalized values of the image data used for training"},{"location":"perception/traffic_light_ssd_fine_detector/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description onnx_file string \"./data/mb2-ssd-lite-tlr.onnx\" The onnx file name for yolo model label_file string \"./data/voc_labels_tl.txt\" The label file with label names for detected objects written on it mode string \"FP32\" The inference mode: \"FP32\", \"FP16\", \"INT8\" max_batch_size int 8 The size of the batch processed at one time by inference by TensorRT approximate_sync bool false Flag for whether to ues approximate sync policy build_only bool false shutdown node after TensorRT engine file is built"},{"location":"perception/traffic_light_ssd_fine_detector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#reference-repositories","title":"Reference repositories","text":"

pytorch-ssd github repository

MobileNetV2

"},{"location":"perception/traffic_light_visualization/","title":"traffic_light_visualization","text":""},{"location":"perception/traffic_light_visualization/#purpose","title":"Purpose","text":"

traffic_light_visualization is a package that includes two visualization nodes:

"},{"location":"perception/traffic_light_visualization/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/traffic_light_visualization/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_visualization/#traffic_light_map_visualizer","title":"traffic_light_map_visualizer","text":""},{"location":"perception/traffic_light_visualization/#input","title":"Input","text":"Name Type Description ~/input/tl_state autoware_auto_perception_msgs::msg::TrafficSignalArray status of traffic lights ~/input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin vector map"},{"location":"perception/traffic_light_visualization/#output","title":"Output","text":"Name Type Description ~/output/traffic_light visualization_msgs::msg::MarkerArray marker array that indicates status of traffic lights"},{"location":"perception/traffic_light_visualization/#traffic_light_roi_visualizer","title":"traffic_light_roi_visualizer","text":""},{"location":"perception/traffic_light_visualization/#input_1","title":"Input","text":"Name Type Description ~/input/tl_state autoware_auto_perception_msgs::msg::TrafficSignalArray status of traffic lights ~/input/image sensor_msgs::msg::Image the image captured by perception cameras ~/input/rois autoware_auto_perception_msgs::msg::TrafficLightRoiArray the ROIs detected by traffic_light_ssd_fine_detector ~/input/rough/rois (option) autoware_auto_perception_msgs::msg::TrafficLightRoiArray the ROIs detected by traffic_light_map_based_detector"},{"location":"perception/traffic_light_visualization/#output_1","title":"Output","text":"Name Type Description ~/output/image sensor_msgs::msg::Image output image with ROIs"},{"location":"perception/traffic_light_visualization/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_visualization/#traffic_light_map_visualizer_1","title":"traffic_light_map_visualizer","text":"

None

"},{"location":"perception/traffic_light_visualization/#traffic_light_roi_visualizer_1","title":"traffic_light_roi_visualizer","text":""},{"location":"perception/traffic_light_visualization/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description enable_fine_detection bool false whether to visualize result of the traffic light fine detection"},{"location":"perception/traffic_light_visualization/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/traffic_light_visualization/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/traffic_light_visualization/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/traffic_light_visualization/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/traffic_light_visualization/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/behavior_path_planner/","title":"Behavior Path Planner","text":""},{"location":"planning/behavior_path_planner/#purpose-use-cases","title":"Purpose / Use cases","text":"

The behavior_path_planner module is responsible for generating

  1. a path based on the traffic situation,
  2. the drivable area in which the vehicle can move (defined in the path msg),
  3. turn signal commands to be sent to the vehicle interface.

Depending on the situation, a suitable module is selected and executed within the behavior tree system.

The following modules are currently supported:

[WIP]

"},{"location":"planning/behavior_path_planner/#design","title":"Design","text":""},{"location":"planning/behavior_path_planner/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"planning/behavior_path_planner/#output","title":"output","text":""},{"location":"planning/behavior_path_planner/#input","title":"input","text":""},{"location":"planning/behavior_path_planner/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_path_planner/#parameters-for-drivable-area-expansion","title":"Parameters for drivable area expansion","text":"

Optionally, the drivable area can be expanded by a static distance. Expansion parameters are defined for each module of the behavior_path_planner and should be prefixed accordingly (see config/drivable_area_expansion.yaml for an example).

Name Unit Type Description Default value drivable_area_right_bound_offset [m] double expansion distance of the right bound 0.0 drivable_area_left_bound_offset [m] double expansion distance of the left bound 0.0 drivable_area_types_to_skip list of strings types of linestrings that are not expanded [road_border]"},{"location":"planning/behavior_path_planner/#behavior-tree","title":"Behavior Tree","text":"

In the behavior path planner, the behavior tree mechanism is used to manage which modules are activated in which situations. In general, this "behavior manager"-like function is expected to grow as more modules are added in the future. To improve maintainability, we adopted the behavior tree. The behavior tree has the following advantages: easy visualization, easy configuration management (behaviors can be changed by replacing configuration files), and high scalability compared to a state machine.

The current behavior tree structure is shown below. Each module (LaneChange, Avoidance, etc.) has Request, Ready, and Plan nodes as common functions.

"},{"location":"planning/behavior_path_planner/#lane-following","title":"Lane Following","text":"

Generate a path from the center line of the route.

"},{"location":"planning/behavior_path_planner/#special-case","title":"special case","text":"

In the case of a route that requires a lane change, the path is generated with a specific distance margin (default: 12.0 m) from the end of the lane to ensure the minimum distance for the lane change. (This function works for all modules, not only lane following.)

"},{"location":"planning/behavior_path_planner/#lane-change","title":"Lane Change","text":"

The Lane Change module is activated when lane change is needed and can be safely executed.

"},{"location":"planning/behavior_path_planner/#start-lane-change-condition-need-to-meet-all-of-the-conditions-below","title":"start lane change condition (need to meet all of the conditions below)","text":""},{"location":"planning/behavior_path_planner/#finish-lane-change-condition-need-to-meet-any-of-the-conditions-below","title":"finish lane change condition (need to meet any of the conditions below)","text":""},{"location":"planning/behavior_path_planner/#collision-prediction-with-obstacles","title":"Collision prediction with obstacles","text":"
  1. Predict the positions of the ego vehicle and the other vehicle on the lane-change target lane at times t1, t2, ..., tn
  2. If the distance between the ego vehicle and the other vehicle is less than the threshold (ego_velocity * stop_time (2 s)) at any of these times, it is judged as a collision (see the sketch below)
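
A minimal sketch of this check, assuming both trajectories are expressed as longitudinal positions on the target lane sampled at t1, ..., tn:

def predicts_collision(ego_positions, other_positions, ego_velocity, stop_time=2.0):\n    # Collision if the gap falls below ego_velocity * stop_time at any sample\n    threshold = ego_velocity * stop_time\n    return any(abs(e - o) < threshold for e, o in zip(ego_positions, other_positions))\n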
"},{"location":"planning/behavior_path_planner/#path-generation","title":"Path Generation","text":"

A path is generated to complete the lane change in n + m seconds, under the assumption that the velocity of the ego vehicle is constant. Once the lane change is executed, the path won't be updated until the "finish-lane-change-condition" is satisfied.

"},{"location":"planning/behavior_path_planner/#avoidance","title":"Avoidance","text":"

The Avoidance module is activated when dynamic objects to be avoided exist and can be safely avoided.

"},{"location":"planning/behavior_path_planner/#target-objects","title":"Target objects","text":"

Dynamic objects that satisfy the following conditions are considered to be avoidance targets.

"},{"location":"planning/behavior_path_planner/#how-to-generate-avoidance-path","title":"How to generate avoidance path","text":"

To prevent sudden changes in the vicinity of the ego-position, an avoidance path is generated after a certain distance of straight lane driving. The process of generating the avoidance path is as follows:

  1. detect the target object and calculate the lateral shift distance (default: 2.0 m from the closest footprint point)
  2. calculate the avoidance distance within the constraint of lateral jerk (default: 0.3 ~ 2.0 m/s3)
    1. If the maximum jerk constraint is exceeded while keeping the straight margin, the avoidance path generation is aborted.
  3. generate a smooth path with the given avoidance distance and lateral shift length.
  4. generate a "return to center" path if there is no next target within a certain distance (default: 50 m) after the current target.
"},{"location":"planning/behavior_path_planner/#single-objects-case","title":"single objects case","text":""},{"location":"planning/behavior_path_planner/#multiple-objects-case","title":"multiple objects case","text":"

If there are multiple avoidance targets and their lateral distances are close (default: < 0.5 m), those objects are considered as a single avoidance target and are avoided simultaneously with a single steering operation. If the lateral distances of the avoidance targets differ by more than the threshold, multiple steering operations are used to avoid them.

"},{"location":"planning/behavior_path_planner/#smooth-path-generation","title":"Smooth path generation","text":"

The path generation is computed in Frenet coordinates. The shift length profile for avoidance is generated by four segmental constant-jerk polynomials and added to the original path. Since the lateral jerk can approximately be seen as a steering maneuver, this calculation yields a result similar to a clothoid curve.
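
As a sketch of what "constant jerk" means here (the notation is assumed, not taken from the implementation), within each of the four segments the lateral jerk j is constant, so the lateral shift l follows a cubic polynomial of time:

l(t) = l_0 + \dot{l}_0 t + \frac{1}{2} \ddot{l}_0 t^2 + \frac{1}{6} j t^3\n

The segment boundary conditions are chosen so that the shift, its rate, and its acceleration remain continuous across segments.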

For more detail, see behavior-path-planner-path-generation.

"},{"location":"planning/behavior_path_planner/#unimplemented-parts-limitations-for-avoidance","title":"Unimplemented parts / limitations for avoidance","text":""},{"location":"planning/behavior_path_planner/#pull-over","title":"Pull Over","text":"

The Pull Over module is activated when the goal is in the shoulder lane. The ego vehicle will stop at the goal.

"},{"location":"planning/behavior_path_planner/#start-pull-over-condition-need-to-meet-all-of-the-conditions-below","title":"start pull over condition (need to meet all of the conditions below)","text":" "},{"location":"planning/behavior_path_planner/#finish-pull-over-condition-need-to-meet-any-of-the-conditions-below","title":"finish pull over condition (need to meet any of the conditions below)","text":""},{"location":"planning/behavior_path_planner/#path-generation_1","title":"Path Generation","text":"

There are three path generation methods. The path is generated with a certain margin (default: 0.5 m) from the left boundary of the shoulder lane.

"},{"location":"planning/behavior_path_planner/#shift-parking","title":"shift parking","text":"

The pull over distance is calculated from the speed, the lateral deviation, and the lateral jerk. The lateral jerk is searched for among predetermined minimum and maximum values, and the value that satisfies the ready conditions described above is output.

  1. Apply a uniform offset to the centerline of the shoulder lane to ensure the margin
  2. In the section between merge start and end, the path is shifted by the method used to generate the avoidance path (four segmental constant-jerk polynomials)
  3. Combine this path with the center line of the road lane

"},{"location":"planning/behavior_path_planner/#geometric-parallel-parking","title":"geometric parallel parking","text":"

Generate two arc paths with discontinuous curvature. The vehicle stops twice in the middle of the path to steer on the spot. There are two path generation methods: forward and backward. See [1] for details of the algorithm. There is also a simple Python implementation.

"},{"location":"planning/behavior_path_planner/#arc-forward-parking","title":"arc forward parking","text":"

Generate two forward arc paths.

"},{"location":"planning/behavior_path_planner/#arc-backward-parking","title":"arc backward parking","text":"

Generate two backward arc paths.


"},{"location":"planning/behavior_path_planner/#unimplemented-parts-limitations-for-pull-over","title":"Unimplemented parts / limitations for pull over","text":""},{"location":"planning/behavior_path_planner/#pull-out","title":"Pull Out","text":"

The Pull Out module is activated when the ego vehicle is stationary and its footprint is included in the shoulder lane. This module ends when the ego vehicle merges into the road.

"},{"location":"planning/behavior_path_planner/#start-pull-out-condition-need-to-meet-all-of-the-conditions-below","title":"start pull out condition (need to meet all of the conditions below)","text":" "},{"location":"planning/behavior_path_planner/#finish-pull-out-condition-need-to-meet-any-of-the-conditions-below","title":"finish pull out condition (need to meet any of the conditions below)","text":""},{"location":"planning/behavior_path_planner/#path-generation_2","title":"Path Generation","text":"

There are two path generation methods.

"},{"location":"planning/behavior_path_planner/#shift-pull-out","title":"shift pull out","text":"

The pull out distance is calculated from the speed, the lateral deviation, and the lateral jerk. The lateral jerk is searched for among predetermined minimum and maximum values, and the value that generates a safe path is selected.

"},{"location":"planning/behavior_path_planner/#geometric-pull-out","title":"geometric pull out","text":"

Generate two arc paths with discontinuous curvature. The ego vehicle stops once in the middle of the path to steer on the spot. See [1] for details of the algorithm.

"},{"location":"planning/behavior_path_planner/#unimplemented-parts-limitations-for-pull-put","title":"Unimplemented parts / limitations for pull put","text":""},{"location":"planning/behavior_path_planner/#side-shift","title":"Side Shift","text":"

The role of the Side Shift module is to shift the reference path laterally in response to external instructions (such as remote operation).

"},{"location":"planning/behavior_path_planner/#parameters-for-path-generation","title":"Parameters for path generation","text":"

In the figure, the straight margin distance is used to avoid sudden shifting; it is calculated as max(min_distance_to_start_shifting, ego_speed * time_to_start_shifting). The shifting distance is calculated from the jerk, with the minimum speed and minimum distance parameters described below. The minimum speed is used to prevent a sharp shift when the ego vehicle is stopped.
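
A minimal sketch of the straight margin formula, using the default parameter values from the table below:

def straight_margin_distance(ego_speed, min_distance_to_start_shifting=5.0, time_to_start_shifting=1.0):\n    # Straight distance kept before shifting starts, to avoid sudden shifts\n    return max(min_distance_to_start_shifting, ego_speed * time_to_start_shifting)\n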

Name Unit Type Description Default value min_distance_to_start_shifting [m] double minimum straight distance before shift start. 5.0 time_to_start_shifting [s] double time of minimum straight distance before shift start. 1.0 shifting_lateral_jerk [m/s3] double lateral jerk to calculate shifting distance. 0.2 min_shifting_distance [m] double the shifting distance is longer than this length. 5.0 min_shifting_speed [m/s] double lateral jerk is calculated with the greater of current_speed or this speed. 5.56"},{"location":"planning/behavior_path_planner/#smooth-goal-connection","title":"Smooth goal connection","text":"

If the target path contains a goal, the points of the path are modified so that the path and the goal are connected smoothly. This process changes the shape of the path within the distance refine_goal_search_radius_range from the goal. Note that this logic depends on the interpolation algorithm executed in a later module (currently spline interpolation), so it needs to be updated in the future.

"},{"location":"planning/behavior_path_planner/#references-external-links","title":"References / External links","text":"

This module depends on the external BehaviorTreeCpp library.

[1] H. Vorobieva, S. Glaser, N. Minoiu-Enache, and S. Mammar, \u201cGeometric path planning for automatic parallel parking in tiny spots\u201d, IFAC Proceedings Volumes, vol. 45, no. 24, pp. 36\u201342, 2012.

"},{"location":"planning/behavior_path_planner/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"

-

"},{"location":"planning/behavior_path_planner/#related-issues","title":"Related issues","text":"

-

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/","title":"Avoidance design","text":"

This is a rule-based path planning module designed for obstacle avoidance.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#purpose-role","title":"Purpose / Role","text":"

This module is designed for rule-based avoidance whose behavior is easy for developers to design. It generates an avoidance path parameterized by intuitive parameters such as lateral jerk and avoidance distance margin. This makes it possible to pre-define the avoidance behavior.

In addition, the approval interface of behavior_path_planner allows external users / modules (e.g. remote operation) to intervene in the decision of the vehicle behavior. This function is expected to be used, for example, for remote intervention in emergency situations or for gathering information on operator decisions during development.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#limitations","title":"Limitations","text":"

This module allows developers to design vehicle behavior in avoidance planning using specific rules. Due to the nature of rule-based planning, the algorithm cannot guarantee collision avoidance in complex cases. This is a trade-off between "intuitive and easy to design" and "hard to tune but able to handle many cases". This module adopts the former policy, and therefore its output should be checked more strictly in a later stage. In the .iv reference implementation, there is another avoidance module in the motion planning module that uses optimization to handle avoidance in complex cases. (Note that the motion planner needs to be adjusted so that the behavior result does not change much in the simple case; this is a typical challenge for the behavior-motion hierarchical architecture.)

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#why-is-avoidance-in-behavior-module","title":"Why is avoidance in behavior module?","text":"

This module executes avoidance over lanes, and the decision requires lane structure information in order to account for traffic rules (e.g. an indicator signal must be sent when the vehicle crosses a lane). Whether the planner takes traffic rules into account is the difference between the motion and behavior modules in the planning stack, which is why this avoidance module resides in the behavior module.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The following figure shows a simple explanation of the logic for avoidance path generation. First, target objects are picked up, and shift requests are generated for each object. These shift requests are generated by taking into account the lateral jerk required for avoidance (red lines). Then these requests are merged and the shift points are created on the reference path (blue line). Filtering operations are performed on the shift points such as removing unnecessary shift points (yellow line), and finally a smooth avoidance path is generated by combining Clothoid-like curve primitives (green line).

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#overview-of-algorithm-for-target-object-filtering","title":"Overview of algorithm for target object filtering","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#how-to-decide-the-target-obstacles","title":"How to decide the target obstacles","text":"

The avoidance target should be limited to stationary objects (the vehicle should not avoid a car waiting at a traffic light even if it blocks the path). Therefore, target vehicles for avoidance should meet the following specific conditions.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#parked-car-detection","title":"Parked-car detection","text":"

Not only the distance from the centerline but also the distance from the road shoulder is calculated and used for the filtering process. The module calculates the ratio of the actual length between the object's center and the center line (shift_length, l_d) to the maximum length the object can shift (shiftable_length, l_D):

l_D = l_a - \frac{width}{2}\nratio = \frac{l_d}{l_D}\n

The closer the object is to the shoulder, the larger the value of ratio (the theoretical maximum value is 1.0). The module compares this value with object_check_shiftable_ratio to determine whether the object is a parked car.
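
A minimal sketch of this decision, using the default threshold from the parameter table below (the function name is hypothetical):

def is_parked_car(shift_length, shiftable_length, object_check_shiftable_ratio=0.6):\n    # ratio approaches 1.0 as the object gets closer to the road shoulder\n    return shift_length / shiftable_length > object_check_shiftable_ratio\n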

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#compensation-for-detection-lost","title":"Compensation for detection lost","text":"

In order to prevent chattering of recognition results, once an obstacle is targeted, it is held for a while even if it disappears. This is effective when recognition is unstable. However, since it results in over-detection (an increased number of false positives), the parameter needs to be adjusted according to the recognition accuracy (if object_last_seen_threshold = 0.0, the recognition result is trusted 100%).

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#overview-of-algorithm-for-avoidance-path-generation","title":"Overview of algorithm for avoidance path generation","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#how-to-prevent-shift-line-chattering-that-is-caused-by-perception-noise","title":"How to prevent shift line chattering that is caused by perception noise","text":"

Since object recognition results contain noise related to position, orientation, and boundary size, the avoidance path would be directly affected by this noise if the raw recognition results were used in path generation.

Therefore, in order to reduce the influence of the noise, the avoidance module generates an envelope polygon that covers the avoidance target, and the avoidance path is generated based on that polygon. The envelope polygons are generated so that they are parallel to the reference path and larger than the avoidance target (defined by object_envelope_buffer). The position and size of the polygon are not updated as long as the avoidance target stays within it.

# default value\nobject_envelope_buffer: 0.3 # [m]\n

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#computing-shift-length-and-shift-points","title":"Computing Shift Length and Shift Points","text":"

The lateral shift length is affected by four variables: lateral_collision_safety_buffer, lateral_collision_margin, vehicle_width, and overhang_distance. The equation is as follows:

avoid_margin = lateral_collision_margin + lateral_collision_safety_buffer + 0.5 * vehicle_width\nmax_allowable_lateral_distance = to_road_shoulder_distance - road_shoulder_safety_margin - 0.5 * vehicle_width\nif (isOnRight(o)) {\n  shift_length = avoid_margin + overhang_distance\n} else {\n  shift_length = avoid_margin - overhang_distance\n}\n

The following figure illustrates these variables (the figure only shows the maximum value of the lateral shift length).

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#rationale-of-having-safety-buffer-and-safety-margin","title":"Rationale of having safety buffer and safety margin","text":"

To compute the shift length, additional parameters that can be tuned are lateral_collision_safety_buffer and road_shoulder_safety_margin.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#generating-path-only-within-lanelet-boundaries","title":"Generating path only within lanelet boundaries","text":"

Before this feature was implemented, the shift length was set to a constant value, which caused the module to generate an avoidance path regardless of the actual environment. For example, the path might exceed the actual road boundary or head towards a wall. Therefore, to address this limitation, in addition to deciding the target obstacle, the module also takes into account the following additional elements.

These elements are used to compute the distance from the object to the road's shoulder (to_road_shoulder_distance). The parameters enable_avoidance_over_same_direction and enable_avoidance_over_opposite_direction allow further configuration of to_road_shoulder_distance. The following image illustrates the configuration.

If the following condition is not satisfied, the shift point will not be generated.

avoid_margin = lateral_collision_margin + lateral_collision_safety_buffer + 0.5 * vehicle_width\navoid_margin <= (to_road_shoulder_distance - 0.5 * vehicle_width - road_shoulder_safety_margin)\n
"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#details-of-algorithm-for-avoidance-path-generation","title":"Details of algorithm for avoidance path generation","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#flow-chart-of-the-process","title":"Flow-chart of the process","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#how-to-decide-the-path-shape","title":"How to decide the path shape","text":"

Shift points are generated for obstacles with the given lateral jerk. These points are integrated to generate an avoidance path. The detailed process flow for each case of obstacle placement is described below. The actual implementation is not separated per case; the function corresponding to the multiple-obstacle case (both directions) is always running.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#one-obstacle-case","title":"One obstacle case","text":"

The lateral shift distance to the obstacle is calculated, and the shift point is then generated from the ego vehicle speed and the given lateral jerk, as shown in the figure below. A smooth avoidance path is then calculated based on the shift point.

Additionally, the following processes are executed in special cases.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#lateral-jerk-relaxation-conditions","title":"Lateral jerk relaxation conditions","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#minimum-velocity-relaxation-conditions","title":"Minimum velocity relaxation conditions","text":"

The actual speed during avoidance cannot be known in advance. This is especially critical when the ego vehicle speed is 0. To solve this, the module provides a parameter for the minimum avoidance speed, which is used in the lateral jerk calculation when the vehicle speed is low.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#multiple-obstacle-case-one-direction","title":"Multiple obstacle case (one direction)","text":"

Generate shift points for multiple obstacles. All of them are merged to generate new shift points along the reference path. The new points are filtered (e.g. remove small-impact shift points), and the avoidance path is computed for the filtered shift points.

Merge process of raw shift points: the shift length is checked at each path point. If shift points overlap, the maximum shift value is selected for the same direction.

For the details of the shift point filtering, see filtering for shift points.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#multiple-obstacle-case-both-direction","title":"Multiple obstacle case (both direction)","text":"

Shift points are generated for multiple obstacles and all of them are merged to generate new shift points. If there are areas where the desired shifts conflict in different directions, the sum of the maximum shift amounts of these areas is used as the final shift amount. The rest of the process is the same as in the one-direction case.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#filtering-for-shift-points","title":"Filtering for shift points","text":"

The shift points are modified by a filtering process in order to get the expected shape of the avoidance path. It contains the following filters.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#other-features","title":"Other features","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#safety-check","title":"Safety check","text":"

The avoidance module has a safety check logic. The result of the safety check is used for the yield maneuver. It is enabled by setting enable_safety_check to true.

enable_safety_check: false\n\n# For safety check\nsafety_check_backward_distance: 50.0 # [m]\nsafety_check_time_horizon: 10.0 # [s]\nsafety_check_idling_time: 1.5 # [s]\nsafety_check_accel_for_rss: 2.5 # [m/ss]\n

safety_check_backward_distance is the parameter related to the safety check area. The module checks the collision risk for every vehicle that is within the lane on the shift side, from object_check_forward_distance ahead to safety_check_backward_distance behind.

NOTE: Even if a part of an object polygon overlaps the detection area, the vehicle is excluded from the safety check targets if its center of gravity is not on the lane.

The risk of collision is judged based on the ego vehicle's future position and the object's predicted path. The module calculates the ego vehicle's future position over the time horizon (safety_check_time_horizon) and uses the object's predicted path as the object's future position.

After calculating the future positions of the ego vehicle and the object, the module calculates their lateral/longitudinal deviations. The module also calculates the lateral/longitudinal margins necessary to determine that it is safe to execute the avoidance maneuver; if both the lateral and longitudinal distances are less than the margins, it determines that there is a risk of collision at that time.

The value of the longitudinal margin is calculated based on Responsibility-Sensitive Safety (RSS) theory. safety_check_idling_time represents T_{idle}, and safety_check_accel_for_rss represents a_{max}.

D_{lon} = V_{ego}T_{idle} + \\frac{1}{2}a_{max}T_{idle}^2 + \\frac{(V_{ego} + a_{max}T_{idle})^2}{2a_{max}} - \\frac{V_{obj}^2}{2a_{max}}\n
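
A direct transcription of this formula, using the default parameter values shown above:

def rss_longitudinal_margin(v_ego, v_obj, t_idle=1.5, a_max=2.5):\n    # D_lon: distance covered while idling and accelerating,\n    # plus the braking distance difference between ego and object\n    return (v_ego * t_idle\n            + 0.5 * a_max * t_idle ** 2\n            + (v_ego + a_max * t_idle) ** 2 / (2.0 * a_max)\n            - v_obj ** 2 / (2.0 * a_max))\n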

The lateral margin changes based on the ego vehicle's longitudinal velocity: if the vehicle is driving at a high speed, the lateral margin should be larger, and at a low speed it can be set to a smaller value. Thus, the lateral margin for each vehicle speed is set as a parameter, and the module determines the lateral margin from the current vehicle speed, as shown in the following figure.

target_velocity_matrix:\n  col_size: 5\n  matrix: [2.78 5.56 ... 16.7   # target velocity [m/s]\n           0.50 0.75 ... 1.50]  # margin [m]\n
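
One plausible lookup scheme for this matrix is linear interpolation over the velocity row, clamped at both ends (a sketch only; the module's actual interpolation is not specified here):

import numpy as np\n\ndef lateral_margin(ego_velocity, target_velocities, margins):\n    # np.interp clamps to the first/last margin outside the velocity range\n    return float(np.interp(ego_velocity, target_velocities, margins))\n\n# e.g. lateral_margin(4.0, [2.78, 5.56, 16.7], [0.50, 0.75, 1.50])\n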
"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#yield-maneuver","title":"Yield maneuver","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#overview","title":"Overview","text":"

If an avoidance path can be generated but it is determined that the avoidance maneuver should not be executed due to the surrounding traffic conditions, the module executes the YIELD maneuver. In the yield maneuver, the vehicle slows down to the target velocity (yield_velocity) and keeps that speed until the module judges that the avoidance path is safe. If the YIELD condition persists and the vehicle approaches the avoidance target, it stops at an avoidable position and waits until safety is confirmed.

# For yield maneuver\nyield_velocity: 2.78 # [m/s]\n

NOTE: In the yield maneuver, the vehicle decelerates to the target velocity under the following constraints.

nominal_deceleration: -1.0 # [m/ss]\nnominal_jerk: 0.5 # [m/sss]\n

If all of the following three conditions are satisfied, the module inserts a stop point in front of the avoidance target with an avoidable interval.

The module determines that the situation is NOT passable without avoidance if the object overhang is less than the threshold.

lateral_passable_collision_margin: 0.5 # [m]\n
L_{overhang} < \\frac{W}{2} + L_{margin} (not passable)\n

Here, W represents the vehicle width and L_{margin} represents lateral_passable_collision_margin.
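
A minimal sketch of this passability check (the function name is hypothetical):

def is_passable_without_avoidance(overhang, vehicle_width, lateral_passable_collision_margin=0.5):\n    # Mirrors the inequality above: passable only when the overhang leaves\n    # at least half the vehicle width plus the margin\n    return overhang >= vehicle_width / 2.0 + lateral_passable_collision_margin\n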

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#limitation","title":"Limitation","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#limitation1","title":"Limitation1","text":"

The current behavior in unsafe conditions is simply to slow down, which is quite conservative. It is difficult to achieve aggressive behavior in the current architecture because of its modularity. There are many modules in Autoware that change the vehicle speed, and the avoidance module cannot know what speed plan they will output, so it is forced to choose a behavior that is as independent of other modules' processing as possible.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#limitation2","title":"Limitation2","text":"

The YIELD maneuver is executed ONLY when the vehicle has NOT initiated the avoidance maneuver. The module has a threshold parameter (avoidance_initiate_threshold) for the amount of shifting and determines that the vehicle is initiating avoidance if the vehicle's current shift exceeds the threshold.

SHIFT_{current} > L_{threshold}\n

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#avoidance-cancelling-maneuver","title":"Avoidance cancelling maneuver","text":"

If the enable_update_path_when_object_is_gone parameter is true, the Avoidance Module takes different actions according to the situation, as follows:

If the enable_update_path_when_object_is_gone parameter is false, the Avoidance Module does not revert the generated avoidance path even if the avoided objects are gone.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#how-to-keep-the-consistency-of-the-optimize-base-path-generation-logic","title":"How to keep the consistency of the optimize-base path generation logic","text":"

WIP

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#parameters","title":"Parameters","text":"

The avoidance specific parameter configuration file can be located at src/autoware/launcher/planning_launch/config/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/avoidance/avoidance.param.yaml.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#general-parameters","title":"General parameters","text":"

namespace: avoidance.

Name Unit Type Description Default value resample_interval_for_planning [m] double Path resample interval for avoidance planning path. 0.3 resample_interval_for_output [m] double Path resample interval for output path. Too short an interval increases the computational cost for later modules. 4.0 detection_area_right_expand_dist [m] double Lanelet expand length for right side to find avoidance target vehicles. 0.0 detection_area_left_expand_dist [m] double Lanelet expand length for left side to find avoidance target vehicles. 1.0 object_envelope_buffer [m] double Envelop the object polygon with a buffer in order to prevent shift line chattering. 0.3 enable_bound_clipping [-] bool Enable clipping the left and right bounds of the drivable area when obstacles are in the drivable area false enable_avoidance_over_same_direction [-] bool Extend avoidance trajectory to adjacent lanes that have the same direction. If false, avoidance only happens in the current lane. true enable_avoidance_over_opposite_direction [-] bool Extend avoidance trajectory to adjacent lanes that have the opposite direction. enable_avoidance_over_same_direction must be true to take effect true enable_update_path_when_object_is_gone [-] bool Reset trajectory when avoided objects are gone. If false, shifted path points remain the same even after the avoided objects are gone. false enable_safety_check [-] bool Flag to enable safety check. false enable_yield_maneuver [-] bool Flag to enable yield maneuver. false publish_debug_marker [-] bool Flag to publish debug marker (set false as default since it takes considerable cost). false print_debug_info [-] bool Flag to print debug info (set false as default since it takes considerable cost). false

NOTE: Both enable_safety_check and enable_yield_maneuver must be set to true to enable the yield maneuver.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#avoidance-target-filtering-parameters","title":"Avoidance target filtering parameters","text":"

namespace: avoidance.target_object.

Name Unit Type Description Default value car [-] bool Allow avoidance for object type CAR true truck [-] bool Allow avoidance for object type TRUCK true bus [-] bool Allow avoidance for object type BUS true trailer [-] bool Allow avoidance for object type TRAILER true unknown [-] bool Allow avoidance for object type UNKNOWN false bicycle [-] bool Allow avoidance for object type BICYCLE false motorcycle [-] bool Allow avoidance for object type MOTORCYCLE false pedestrian [-] bool Allow avoidance for object type PEDESTRIAN false

namespace: avoidance.target_filtering.

Name Unit Type Description Default value threshold_speed_object_is_stopped [m/s] double Vehicles with speed greater than this will be excluded from avoidance target. 1.0 threshold_time_object_is_moving [s] double Time threshold for judging whether an object is moving. 1.0 threshold_distance_object_is_on_center [m] double Vehicles around the center line within this distance will be excluded from avoidance target. 1.0 object_check_forward_distance [m] double Forward distance to search the avoidance target. 150.0 object_check_backward_distance [m] double Backward distance to search the avoidance target. 2.0 object_check_goal_distance [m] double Distance threshold to the goal; objects closer to the goal than this are excluded from the avoidance target. 20.0 object_check_shiftable_ratio [-] double Lateral shift ratio threshold for judging whether the object is a parked vehicle. 0.6 object_check_min_road_shoulder_width [m] double Road shoulder width assumed when judging whether the object is a parked vehicle. 0.5 object_last_seen_threshold [s] double For the compensation of the detection lost. The object is registered once it is observed as an avoidance target. When the detection is lost, the timer starts, and the object is un-registered when the time exceeds this limit. 2.0 left_hand_traffic [-] bool Flag to select left or right hand traffic. TRUE: LEFT-HAND TRAFFIC / FALSE: RIGHT-HAND TRAFFIC true"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#safety-check-parameters","title":"Safety check parameters","text":"

namespace: avoidance.safety_check.

Name Unit Type Description Default value safety_check_backward_distance [m] double Backward distance to search the dynamic objects. 50.0 safety_check_time_horizon [s] double Time horizon to check whether the lateral/longitudinal margin is enough or not. 10.0 safety_check_idling_time [s] double Time delay constant used for the longitudinal margin calculation based on RSS. 1.5 safety_check_accel_for_rss [m/ss] double Acceleration constant used for the longitudinal margin calculation based on RSS. 2.5 safety_check_hysteresis_factor [-] double Hysteresis factor used for chattering prevention. 2.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#avoidance-maneuver-parameters","title":"Avoidance maneuver parameters","text":"

namespace: avoidance.avoidance.lateral.

Name Unit Type Description Default value lateral_collision_margin [m] double The lateral distance between ego and avoidance targets. 1.0 lateral_collision_safety_buffer [m] double Creates an additional gap that will prevent the vehicle from getting too near to the obstacle 0.7 road_shoulder_safety_margin [m] double Prevents the generated path from coming too close to the road shoulders. 0.0 avoidance_execution_lateral_threshold [m] double The lateral distance deviation threshold between the current path and suggested avoidance point to execute avoidance. (*2) 0.499 max_right_shift_length [m] double Maximum shift length for right direction 5.0 max_left_shift_length [m] double Maximum shift length for left direction 5.0

namespace: avoidance.avoidance.longitudinal.

Name Unit Type Description Default value prepare_time [s] double Avoidance shift starts from a point ahead of this time x ego_speed to avoid sudden path change. 2.0 longitudinal_collision_safety_buffer [m] double Longitudinal collision buffer between target object and shift line. 0.0 min_prepare_distance [m] double Minimum distance for \"prepare_time\" x \"ego_speed\". 1.0 min_avoidance_distance [m] double Minimum distance of avoidance path (i.e. this distance is needed even if its lateral jerk is very low) 10.0 min_nominal_avoidance_speed [m/s] double Minimum speed for jerk calculation in a nominal situation (*1). 7.0 min_sharp_avoidance_speed [m/s] double Minimum speed for jerk calculation in a sharp situation (*1). 1.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#yield-maneuver-parameters","title":"Yield maneuver parameters","text":"

namespace: avoidance.yield.

Name Unit Type Description Default value yield_velocity [m/s] double The ego vehicle decelerates to this velocity in the yield maneuver. 2.78"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#stop-maneuver-parameters","title":"Stop maneuver parameters","text":"

namespace: avoidance.stop.

Name Unit Type Description Default value min_distance [m] double Minimum stop distance in the situation where avoidance maneuver is not approved or in yield maneuver. 10.0 max_distance [m] double Maximum stop distance in the situation where avoidance maneuver is not approved or in yield maneuver. 20.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#constraints-parameters","title":"Constraints parameters","text":"

namespace: avoidance.constraints.

Name Unit Type Description Default value use_constraints_for_decel [-] bool Flag to decelerate under longitudinal constraints. TRUE: allow control of braking mildness false

namespace: avoidance.constraints.lateral.

Name Unit Type Description Default value prepare_time [s] double Avoidance shift starts from a point ahead of this time x ego_speed to avoid sudden path change. 2.0 min_prepare_distance [m] double Minimum distance for \"prepare_time\" x \"ego_speed\". 1.0 nominal_lateral_jerk [m/s3] double Avoidance path is generated with this jerk when there is enough distance from ego. 0.2 max_lateral_jerk [m/s3] double Avoidance path gets sharp up to this jerk limit when there is not enough distance from ego. 1.0

namespace: avoidance.constraints.longitudinal.

Name Unit Type Description Default value nominal_deceleration [m/ss] double Nominal deceleration limit. -1.0 nominal_jerk [m/sss] double Nominal jerk limit. 0.5 max_deceleration [m/ss] double Maximum deceleration limit. -2.0 max_jerk [m/sss] double Maximum jerk limit. 1.0 min_avoidance_speed_for_acc_prevention [m/s] double Minimum speed limit to be applied to prevent acceleration during avoidance. 3.0 max_avoidance_acceleration [m/ss] double Maximum acceleration during avoidance. 0.5

(*2) If there are multiple vehicles in a row to be avoided, no new avoidance path will be generated unless their lateral margin difference exceeds this value.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":" "},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#how-to-debug","title":"How to debug","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#publishing-visualization-marker","title":"Publishing Visualization Marker","text":"

Developers can see what is going on in each process by visualizing all the outputs of the avoidance planning process. Examples include target vehicles, shift points for each object, shift points after each filtering process, etc.

To enable the debug marker, execute ros2 param set /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner avoidance.publish_debug_marker true (no restart is needed) or simply set the publish_debug_marker to true in the avoidance.param.yaml for permanent effect (restart is needed). Then add the marker /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/avoidance in rviz2.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#echoing-debug-message-to-find-out-why-the-objects-were-ignored","title":"Echoing debug message to find out why the objects were ignored","text":"

If, for some reason, no shift point is generated for your object, you can check the failure reason via ros2 topic echo.

To print the debug message, just run the following

ros2 topic echo /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/avoidance_debug_message_array\n
"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/","title":"Drivable Area design","text":"

The drivable area represents the area where the ego vehicle can pass.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#purpose-role","title":"Purpose / Role","text":"

In order to define the area where the ego vehicle can travel safely, we generate the drivable area in the behavior path planner module. Our drivable area is represented by two line strings: the left_bound line and the right_bound line. Both left_bound and right_bound are created from the left and right boundaries of the lanelets. Note that left_bound and right_bound are generated by the generateDrivableArea function.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#assumption","title":"Assumption","text":"

Our drivable area generation relies on several assumptions.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#limitations","title":"Limitations","text":"

Currently, when clipping the left bound or right bound, the function can clip the bound more than necessary, and the generated path might be conservative.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#parameters-for-drivable-area-generation","title":"Parameters for drivable area generation","text":"Name Unit Type Description Default value enabled [-] boolean whether to dynamically the drivable area using the ego footprint false ego.extra_footprint_offset.front [m] double extra length to add to the front of the ego footprint 0.0 ego.extra_footprint_offset.rear [m] double extra length to add to the rear of the ego footprint 0.0 ego.extra_footprint_offset.left [m] double extra length to add to the left of the ego footprint 0.0 ego.extra_footprint_offset.right [m] double extra length to add to the rear of the ego footprint 0.0 dynamic_objects.avoid [-] boolean if true, the drivable area is not expanded in the predicted path of dynamic objects true dynamic_objects.extra_footprint_offset.front [m] double extra length to add to the front of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.rear [m] double extra length to add to the rear of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.left [m] double extra length to add to the left of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.right [m] double extra length to add to the rear of the ego footprint 0.5 max_distance [m] double maximum distance by which the drivable area can be expended. A value of 0.0 means no maximum distance. 0.0 expansion.method [-] string method to use for the expansion: \"polygon\" will expand the drivable area around the ego footprint polygons; \"lanelet\" will expand to the whole lanelets overlapped by the ego footprints \"polygon\" expansion.max_arc_path_length [m] double maximum length along the path where the ego footprint is projected 50.0 expansion.extra_arc_length [m] double extra arc length (relative to the path) around ego footprints where the drivable area is expanded 0.5 expansion.avoid_linestring.types [-] string array linestring types in the lanelet maps that will not be crossed when expanding the drivable area [guard_rail, road_border] avoid_linestring.distance [m] double distance to keep between the drivable area and the linestrings to avoid 0.0 avoid_linestring.compensate.enable [-] bool if true, when the expansion is blocked by a linestring on one side of the path, we try to compensate and expand on the other side true avoid_linestring.compensate.extra_distance [m] double extra distance added to the expansion when compensating 3.0

The following parameters are defined for each module. Please refer to the config/drivable_area_expansion.yaml file.

Name Unit Type Description Default value drivable_area_right_bound_offset [m] double right offset length to expand drivable area 5.0 drivable_area_left_bound_offset [m] double left offset length to expand drivable area 5.0 drivable_area_types_to_skip [-] string linestring types (as defined in the lanelet map) that will not be expanded road_border"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This section gives details of the generation of the drivable area (left_bound and right_bound).

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#drivable-lanes-generation","title":"Drivable Lanes Generation","text":"

Before generating the drivable area, the drivable lanes need to be sorted. Drivable lanes are selected in each module (Lane Follow, Avoidance, Lane Change, Pull Over, Pull Out, etc.), so more details about the selection of drivable lanes can be found in each module's document. We use the following structure to define the drivable lanes.

struct DrivableLanes\n{\nlanelet::ConstLanelet right_lanelet; // right most lane\nlanelet::ConstLanelet left_lanelet; // left most lane\nlanelet::ConstLanelets middle_lanelets; // middle lanes\n};\n

The image of the sorted drivable lanes is depicted in the following picture.

Note that the order of the drivable lanes becomes

drivable_lanes = {DrivableLanes1, DrivableLanes2, DrivableLanes3, DrivableLanes4, DrivableLanes5}\n
"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#drivable-area-generation","title":"Drivable Area Generation","text":"

In this section, the drivable area is created using the drivable lanes arranged in the order in which vehicles pass. We create left_bound from the left boundary of the leftmost lanelet and right_bound from the right boundary of the rightmost lanelet. The image of the created drivable area is shown by the following blue lines. Note that the drivable area is defined in the Path and PathWithLaneId messages as

std::vector<geometry_msgs::msg::Point> left_bound;\nstd::vector<geometry_msgs::msg::Point> right_bound;\n

and each point of the right bound and left bound has a position in the absolute coordinate system.
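
As a minimal sketch, assuming the sorted drivable_lanes container from the previous section, the left bound could be collected as follows (illustrative only, not the actual generateDrivableArea implementation):

// Hypothetical sketch; not the actual generateDrivableArea implementation.\nstd::vector<geometry_msgs::msg::Point> left_bound;\nfor (const auto & lanes : drivable_lanes) {\n  for (const auto & p : lanes.left_lanelet.leftBound()) {\n    geometry_msgs::msg::Point point;\n    point.x = p.x();\n    point.y = p.y();\n    point.z = p.z();\n    left_bound.push_back(point);\n  }\n}\n// right_bound is built analogously from right_lanelet.rightBound().\n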

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#drivable-area-expansion","title":"Drivable Area Expansion","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#static-expansion","title":"Static Expansion","text":"

Each module can statically expand the left and right bounds of the target lanes by the parameter-defined values. This enables large vehicles to pass narrow curves. The image of this process can be described as

Note that we only expand the right bound of the rightmost lane and the left bound of the leftmost lane.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#dynamic-expansion","title":"Dynamic Expansion","text":"

The drivable area can also be expanded dynamically by considering the ego vehicle footprint projected on each path point. This expansion can be summarized with the following steps:

  1. Build the ego path footprint.
  2. Build the dynamic objects' predicted footprints (optional).
  3. Build \"uncrossable\" lines.
  4. Remove the footprints from step 2 and the lines from step 3 from the ego path footprint from step 1.
  5. Expand the drivable area with the result of step 4.
(Figure: inputs; footprints and uncrossable lines; expanded drivable area.)
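
As an illustration of step 4, a hedged sketch using a polygon library such as boost::geometry might look like the following; all type aliases and the function name are assumptions, not the actual implementation:

// Hypothetical sketch of step 4; types and names are assumptions.\n#include <boost/geometry.hpp>\nnamespace bg = boost::geometry;\nusing Point2d = bg::model::d2::point_xy<double>;\nusing Polygon2d = bg::model::polygon<Point2d>;\nusing MultiPolygon2d = bg::model::multi_polygon<Polygon2d>;\n\nMultiPolygon2d calc_expansion_area(\n  const MultiPolygon2d & ego_path_footprint, const MultiPolygon2d & object_footprints)\n{\n  MultiPolygon2d expansion_area;\n  // Step 4: remove the predicted object footprints from the ego path footprint.\n  bg::difference(ego_path_footprint, object_footprints, expansion_area);\n  // Areas beyond the uncrossable lines (step 3) would also be removed here.\n  return expansion_area;\n}\n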

Please note that the dynamic expansion can only increase the size of the drivable area and cannot remove any part from the original drivable area.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#visualizing-maximum-drivable-area-debug","title":"Visualizing maximum drivable area (Debug)","text":"

Sometimes, the developers might get a different result between two maps that may look identical during visual inspection.

For example, in the same area, one vehicle can perform avoidance while another cannot. This might be related to maximum drivable area issues caused by a non-compliant vector map design from the user.

To debug the issue, the maximum drivable area boundary can be visualized.

The maximum drivable area can be visualized by adding the marker from /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/maximum_drivable_area

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/","title":"Lane Change design","text":"

The Lane Change module is activated when a lane change is needed and can be safely executed.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#lane-change-requirement","title":"Lane Change Requirement","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#generating-lane-change-candidate-path","title":"Generating Lane Change Candidate Path","text":"

The lane change candidate path is divided into two phases: preparation and lane-changing. The following figure illustrates each phase of the lane change candidate path.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#preparation-phase","title":"Preparation phase","text":"

The preparation trajectory is the first, straight portion of the candidate path, generated along the ego vehicle's current lane. The length of the preparation trajectory is computed as follows.

lane_change_prepare_distance = max(current_speed * lane_change_prepare_duration + 0.5 * deceleration * lane_change_prepare_duration^2, minimum_lane_change_prepare_distance)\n
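
As a minimal sketch of this computation (hypothetical helper; deceleration is non-positive, as described later):

// Hypothetical sketch of the preparation distance formula above.\n#include <algorithm>\n\ndouble calc_lane_change_prepare_distance(\n  const double current_speed, const double deceleration,\n  const double lane_change_prepare_duration, const double minimum_lane_change_prepare_distance)\n{\n  const double d =\n    current_speed * lane_change_prepare_duration +\n    0.5 * deceleration * lane_change_prepare_duration * lane_change_prepare_duration;\n  return std::max(d, minimum_lane_change_prepare_distance);\n}\n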

During the preparation phase, the turn signal will be activated when the remaining distance is equal to or less than lane_change_search_distance.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#lane-changing-phase","title":"Lane-changing phase","text":"

The lane-changing phase consists of the shifted path that moves the ego vehicle from the current lane to the target lane. The total distance of the lane-changing phase is as follows.

lane_change_prepare_velocity = current_speed + deceleration * lane_change_prepare_duration\nlane_changing_distance = max(lane_change_prepare_velocity * lane_changing_duration + 0.5 * deceleration * lane_changing_duration^2, minimum_lane_change_length + backward_length_buffer_for_end_of_lane)\n

The backward_length_buffer_for_end_of_lane is added to allow some window for any possible delay, such as control or mechanical delay during brake lag.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#multiple-candidate-path-samples","title":"Multiple candidate path samples","text":"

Lane change velocity is affected by the ego vehicle's current velocity. High velocity requires a longer preparation and lane-changing distance. However, we also need to plan lane-changing trajectories in case the ego vehicle slows down. Candidate paths that assume the ego vehicle slows down are computed by substituting predetermined deceleration values into the lane_change_prepare_distance, lane_change_prepare_velocity, and lane_changing_distance equations.

The predetermined decelerations are a set of values that start from deceleration = 0.0 and decrease by acceleration_resolution until reaching deceleration = -maximum_deceleration. The acceleration_resolution is determined by the following

acceleration_resolution = maximum_deceleration / lane_change_sampling_num\n

The following figure illustrates the case where lane_change_sampling_num = 4. Assuming that maximum_deceleration = 1.0, then a0 == 0.0 == no deceleration, a1 == 0.25, a2 == 0.5, a3 == 0.75 and a4 == 1.0 == maximum_deceleration. a0 is the expected lane change trajectory should the ego vehicle not decelerate, and a1's path is the expected lane change trajectory should the ego vehicle decelerate at 0.25 m/s^2.
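
A minimal sketch of this sampling, assuming a hypothetical helper function:

// Hypothetical sketch of the deceleration sampling described above.\n#include <vector>\n\nstd::vector<double> sample_decelerations(\n  const double maximum_deceleration, const int lane_change_sampling_num)\n{\n  const double acceleration_resolution = maximum_deceleration / lane_change_sampling_num;\n  std::vector<double> decelerations;\n  for (int i = 0; i <= lane_change_sampling_num; ++i) {\n    // 0.0, -0.25, -0.5, -0.75, -1.0 for the example above\n    decelerations.push_back(-static_cast<double>(i) * acceleration_resolution);\n  }\n  return decelerations;\n}\n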

Which path is chosen depends on the validity and collision checks.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#candidate-paths-validity-check","title":"Candidate Path's validity check","text":"

A candidate path is valid if the total lane change distance is less than

  1. distance to the end of the current lane
  2. distance to the next intersection
  3. distance from the current pose to the goal
  4. distance to the crosswalk

The goal must also be in the list of preferred lanes.

The following flow chart illustrates the validity check.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#candidate-paths-safety-check","title":"Candidate Path's Safety check","text":"

A valid candidate path is evaluated for safety before it is selected as the output candidate path. The flow of the process is as follows.

If all valid candidate paths are unsafe, the operator has the option to perform a forced lane change by using the front-most candidate path as the output. The forced lane change ignores all safety checks.

A candidate path is safe if it satisfies the following lateral distance criterion,

lateral distance > lateral_distance_threshold\n

However, suppose the lateral distance is insufficient. In that case, longitudinal distance will be evaluated. The candidate path is safe only when the longitudinal gap between the ego vehicle and the dynamic object is wide enough.

The following charts illustrate the flow of the safety checks

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#calculating-and-evaluating-longitudinal-distance","title":"Calculating and evaluating longitudinal distance","text":"

A sufficient longitudinal gap between vehicles will prevent any rear-end collision from happening. This includes an emergency stop or sudden braking scenarios.

The following information is required to evaluate the longitudinal distance between vehicles

  1. estimated speed of the dynamic objects
  2. predicted path of dynamic objects
  3. ego vehicle's current speed
  4. ego vehicle's predicted path (converted/estimated from candidate path)

The following figure illustrates how the safety check is performed on ego vs. dynamic objects.

Let v_front and a_front be the front vehicle's velocity and deceleration, respectively, and v_rear and a_rear be the rear vehicle's velocity and deceleration, respectively. Front vehicle and rear vehicle assignment will depend on which predicted path's pose is currently being evaluated.

The following figure illustrates front and rear vehicle velocity assignment.

Assuming the front vehicle brakes, then d_front is the distance the front vehicle will travel until it comes to a complete stop. The distance is computed from the equation of motion, which yields:

d_front = -std::pow(v_front,2) / (2*a_front)\n

Using the same formula to evaluate the rear vehicle's stopping distance d_rear is insufficient. This is because when the rear vehicle's driver sees the front vehicle's sudden brake, it takes some time for the driver to process the information and react by pressing the brake. We call this delay the reaction time.

The reaction time is the duration from when the driver sees the front vehicle's brake light until the brake is pressed. Once the brake is pressed, the time margin (which might be caused by mechanical or control delay) also needs to be considered. With these two parameters included, the formula for d_rear is as follows.

d_rear = v_rear * rear_vehicle_reaction_time + v_rear * rear_vehicle_safety_time_margin + (-std::pow(v_rear,2) / (2 * a_rear))\n

Since there are no absolute values for the decelerations a_front and a_rear, both values are parameterized (expected_front_deceleration and expected_rear_deceleration, respectively) with an estimate of how much deceleration will occur if the brake is pressed.

The longitudinal distance is evaluated as follows

d_rear < d_front + d_inter\n

where d_inter is the relative longitudinal distance obtained at each evaluated predicted pose.

Finally, a minimum longitudinal distance for d_rear is enforced to compensate for objects being too near to each other when d_rear = 0.0. This yields

std::max(longitudinal_distance_min_threshold, d_rear) < d_front + d_inter\n
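
Putting the above together, a minimal sketch of the whole longitudinal check (hypothetical function; a_front and a_rear are negative, so the stopping distances come out positive):

// Hypothetical sketch of the longitudinal safety check derived above.\n#include <algorithm>\n\nbool is_longitudinally_safe(\n  const double v_front, const double a_front, const double v_rear, const double a_rear,\n  const double d_inter, const double rear_vehicle_reaction_time,\n  const double rear_vehicle_safety_time_margin, const double longitudinal_distance_min_threshold)\n{\n  const double d_front = -(v_front * v_front) / (2.0 * a_front);\n  const double d_rear = v_rear * rear_vehicle_reaction_time +\n                        v_rear * rear_vehicle_safety_time_margin -\n                        (v_rear * v_rear) / (2.0 * a_rear);\n  return std::max(longitudinal_distance_min_threshold, d_rear) < d_front + d_inter;\n}\n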
"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#collision-check-in-prepare-phase","title":"Collision check in prepare phase","text":"

The ego vehicle may need to secure ample inter-vehicle distance ahead of the target vehicle before attempting a lane change. The flag enable_collision_check_at_prepare_phase can be enabled to gain this behavior. The following image illustrates the differences between the false and true cases.

The parameter prepare_phase_ignore_target_speed_thresh can be configured to ignore the prepare phase collision check for targets whose speeds are less than a specific threshold, such as stationary or very slow-moving objects.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#if-the-lane-is-blocked-and-multiple-lane-changes","title":"If the lane is blocked and multiple lane changes","text":"

When driving on a public road with other vehicles, there are scenarios where lane changes cannot be executed. Suppose the candidate path is evaluated as unsafe, for example, due to incoming vehicles in the adjacent lane. In that case, the ego vehicle cannot change lanes, and it is impossible to reach the goal. Therefore, the ego vehicle must stop earlier at a certain distance and wait for the adjacent lane to be evaluated as safe. The minimum stopping distance is computed as follows.

minimum_lane_change_distance = num_of_lane_changes * (minimum_lane_change_length + backward_length_buffer_for_end_of_lane)\n
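
For example, with the default parameters listed later (minimum_lane_change_length = 16.5 m and backward_length_buffer_for_end_of_lane = 2.0 m), two consecutive lane changes require stopping at least 2 x (16.5 + 2.0) = 37.0 m before the end of the lane.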

The following figure illustrates the case where the lane is blocked and multiple lane changes are needed.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#intersection","title":"Intersection","text":"

Lane changes within an intersection are prohibited by traffic regulations. Therefore, if the goal is placed close beyond the intersection, the lane change needs to be completed before the ego vehicle enters the intersection region. Similar to the blocked lane case, if the path is unsafe, the ego vehicle stops and waits for the dynamic object to pass by.

The following figure illustrates the intersection case.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#aborting-lane-change","title":"Aborting lane change","text":"

The abort process may result in three different outcomes: Cancel, Abort, and Stop/Cruise.

The following depicts the flow of the abort lane change check.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#cancel","title":"Cancel","text":"

Suppose the lane change trajectory is evaluated as unsafe. In that case, if the ego vehicle has not departed from the current lane yet, the trajectory will be reset, and the ego vehicle will resume the lane following maneuver.

The function can be enabled by setting enable_cancel_lane_change to true.

The following image illustrates the cancel process.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#abort","title":"Abort","text":"

Assume the ego vehicle has already departed from the current lane. In that case, it is dangerous to cancel the path, as doing so would cause the ego vehicle to change its heading direction abruptly. Instead, it is necessary to plan a trajectory that allows the ego vehicle to return to the current path while minimizing heading changes; for this purpose, the lane change module generates an abort path. The following images show an example of the abort path. Do note that the function DOESN'T GUARANTEE a safe abort process, as it does not check the presence of the surrounding objects and/or their reactions. The function can be enabled manually by setting both enable_cancel_lane_change and enable_abort_lane_change to true. The parameter abort_max_lateral_jerk needs to be set to a high value in order for it to work.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#stopcruise","title":"Stop/Cruise","text":"

This last behavior also occurs if the ego vehicle has departed from the current lane. If the abort function is disabled or the abort is no longer possible, the ego vehicle will attempt to stop or transition to the obstacle cruise mode. Do note that the module DOESN'T GUARANTEE a safe maneuver due to the unexpected behaviors that might occur during these critical scenarios. The following images illustrate the situation.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#parameters","title":"Parameters","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#essential-lane-change-parameters","title":"Essential lane change parameters","text":"

The following parameters are configurable in lane_change.param.yaml.

Name Unit Type Description Default value lane_change_prepare_duration [s] double The preparation time for the ego vehicle to be ready to perform lane change. 4.0 lane_changing_safety_check_duration [s] double The total time that is taken to complete the lane-changing task. 8.0 minimum_lane_change_prepare_distance [m] double Minimum prepare distance for lane change 2.0 minimum_lane_change_length [m] double The minimum distance needed for changing lanes. 16.5 backward_length_buffer_for_end_of_lane [m] double The end of lane buffer to ensure the ego vehicle has enough distance to start lane change 2.0 lane_change_finish_judge_buffer [m] double The additional buffer used to confirm lane change process completion 3.0 lane_changing_lateral_jerk [m/s3] double Lateral jerk value for lane change path generation 0.5 lane_changing_lateral_acc [m/s2] double Lateral acceleration value for lane change path generation 0.5 minimum_lane_change_velocity [m/s] double Minimum speed during lane changing process. 2.78 prediction_time_resolution [s] double Time resolution for object's path interpolation and collision check. 0.5 maximum_deceleration [m/s^2] double Ego vehicle maximum deceleration when performing lane change. 1.0 lane_change_sampling_num [-] int Number of possible lane-changing trajectories that are being influenced by deceleration 10"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#collision-checks-during-lane-change","title":"Collision checks during lane change","text":"

The following parameters are configurable in behavior_path_planner.param.yaml.

Name Unit Type Description Default value lateral_distance_max_threshold [m] double The lateral distance threshold that is used to determine whether the lateral distance between two objects is enough and whether lane change is safe. 2.0 longitudinal_distance_min_threshold [m] double The longitudinal distance threshold that is used to determine whether the longitudinal distance between two objects is enough and whether lane change is safe. 3.0 expected_front_deceleration [m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.0 expected_rear_deceleration [m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -1.0 rear_vehicle_reaction_time [s] double The reaction time of the rear vehicle driver, which starts from the driver noticing the sudden braking of the front vehicle until the driver steps on the brake. 2.0 rear_vehicle_safety_time_margin [s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 2.0 enable_collision_check_at_prepare_phase [-] boolean Perform collision check starting from the prepare phase. If false, collision check is only evaluated for the lane changing phase. true prepare_phase_ignore_target_speed_thresh [m/s] double Ignore the prepare phase collision check for objects whose speed is less than the configured value. enable_collision_check_at_prepare_phase must be true 0.1 use_predicted_path_outside_lanelet [-] boolean If true, include collision check for predicted paths that are out of the lanelet (freespace). false use_all_predicted_path [-] boolean If false, use only the predicted path that has the maximum confidence. true

(*1) the value must be negative.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#abort-lane-change","title":"Abort lane change","text":"

The following parameters are configurable in lane_change.param.yaml.

Name Unit Type Description Default value enable_cancel_lane_change [-] boolean Enable cancel lane change true enable_abort_lane_change [-] boolean Enable abort lane change. false abort_delta_time [s] double The time taken to start aborting. 3.0 abort_max_lateral_jerk [m/s3] double The maximum lateral jerk for abort path 1000.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#debug","title":"Debug","text":"

The following parameters are configurable in lane_change.param.yaml.

Name Unit Type Description Default value publish_debug_marker [-] boolean Flag to publish debug marker false"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#debug-marker-visualization","title":"Debug Marker & Visualization","text":"

To enable the debug marker, execute ros2 param set /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner lane_change.publish_debug_marker true (no restart is needed) or simply set the publish_debug_marker to true in the lane_change.param.yaml for permanent effect (restart is needed). Then add the marker /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/lanechange in rviz2.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_limitations/","title":"Limitations","text":"

This document describes the limitations currently present in the behavior_path_planner module.

The following items (among others) fall within the scope of these limitations:

"},{"location":"planning/behavior_path_planner/behavior_path_planner_limitations/#limitation-multiple-connected-opposite-lanes-require-linestring-with-shared-id","title":"Limitation: Multiple connected opposite lanes require Linestring with shared ID","text":"

To fully utilize the Lanelet2's API, the design of the vector map (.osm) needs to follow all the criteria described in Lanelet2 documentation. Specifically, in the case of 2 or more lanes, the Linestrings that divide the current lane with the opposite/adjacent lane need to have a matching Linestring ID. Assume the following ideal case.

In the image, Linestring ID51 is shared by Lanelet A and Lanelet B. Hence, we can directly use the available left, adjacentLeft, right, adjacentRight and findUsages methods within Lanelet2's API to query the direction and opposite lane availability.

const auto right_lane = routing_graph_ptr_->right(lanelet);\nconst auto adjacent_right_lane = routing_graph_ptr_->adjacentRight(lanelet);\nconst auto opposite_right_lane = lanelet_map_ptr_->laneletLayer.findUsages(lanelet.rightBound().invert());\n

The following images show situations where these APIs do not work directly. This means that we cannot use them straight away, and several assumptions and logical instructions are needed to make these APIs work.

In this example (multiple linestring issues), Lanelet C contains Linestring ID61 and ID62, while Lanelet D contains Linestring ID63 and ID64. Although Linestring ID62 and ID64 have identical point IDs and seem visually connected, the API will treat these Linestrings as though they are separate. When it searches for any Lanelet connected via Linestring ID62, it will return NULL, since ID62 connects only to Lanelet C and no other Lanelet.

In this case, it is possible to forcefully search for the lanelet by checking which lanelets contain the points, using the getLaneletFromPoint method. However, the implementation requires complex rules for it to work. Take the following images as an example.

Assume Object X is in Lanelet F. We can forcefully search for Lanelet E via Point 7, and it will work if Point 7 is utilized by only 2 lanelets. However, the complexity increases when we want to start searching for the direction of the opposite lane. We can infer the direction of the lanelet by using mathematical operations (the dot product of vector V_ID72 (Point 6 minus Point 9) and V_ID74 (Point 7 minus Point 8)). But notice that we did not use Point 7 in V_ID72. This is because searching for it requires an iteration, adding additional non-beneficial computation.

Suppose the points are used by more than 2 lanelets. In that case, we have to find the differences for all lanelets, and the result might be undefined. The reason is that the differences between the coordinates do not reflect the actual shape of the lanelet. The following image demonstrates this point.

There are many other solutions available to try. However, further attempts to solve this might cause issues in the future, especially for maintaining or scaling up the software.

In conclusion, the multiple Linestring issue will not be supported. Covering these scenarios might give the user an \"everything is possible\" impression. This is dangerous, since any attempt to create a non-standardized vector map is not compliant with safety regulations.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_limitations/#limitation-avoidance-at-corners-and-intersections","title":"Limitation: Avoidance at Corners and Intersections","text":"

Currently, the implementation doesn't cover avoidance at corners and intersections. The reason is similar to here. However, this case can still be supported in the future (assuming the vector map is defined correctly).

"},{"location":"planning/behavior_path_planner/behavior_path_planner_limitations/#limitation-chattering-shifts","title":"Limitation: Chattering shifts","text":"

There are possibilities that the shifted path chatters as a result of various factors, for example, the bounding box shape or position from the perception input. Sometimes it is difficult for the perception to get complete information about the object's size. As the object size is updated, the object length will also be updated. This might cause shift points to be re-calculated, resulting in chattering shift points.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/","title":"Path Generation design","text":"

This document explains how the path is generated for lane change and avoidance, etc. The implementation can be found in path_shifter.hpp.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/#overview","title":"Overview","text":"

The base idea of the path generation in lane change and avoidance is to smoothly shift the reference path, such as the center line, in the lateral direction. This is achieved by using a constant-jerk profile as in the figure below. More details on how it is used can be found in README. It is assumed that the reference path is smooth enough for this algorithm.

The figure below explains how the application of a constant lateral jerk l^{'''}(s) can be used to induce lateral shifting. In order to comply with the limits on lateral acceleration and velocity, zero-jerk time is employed in the figure ( T_a and T_v ). In each interval where constant jerk is applied, the shift position l(s) can be characterized by a third-degree polynomial. Therefore the shift length from the reference path can then be calculated by combining spline curves.

Note that, because T_v rarely arises in lane change and avoidance, it is not considered in the current implementation.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/#mathematical-derivation","title":"Mathematical Derivation","text":"

By applying simple integral operations, the following analytical equations can be derived to describe the shift distance l(t) at each time under lateral jerk, acceleration, and velocity constraints.

\\begin{align}\nl_1&= \\frac{1}{6}jT_j^3\\\\[10pt]\nl_2&= \\frac{1}{6}j T_j^3 + \\frac{1}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j\\\\[10pt]\nl_3&= j  T_j^3 + \\frac{3}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j\\\\[10pt]\nl_4&= j T_j^3 + \\frac{3}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j + j(T_a + T_j)T_j T_v\\\\[10pt]\nl_5&= \\frac{11}{6} j T_j^3 + \\frac{5}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j + j(T_a + T_j)T_j T_v \\\\[10pt]\nl_6&= \\frac{11}{6} j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j + j(T_a + T_j)T_j T_v\\\\[10pt]\nl_7&= 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j + j(T_a + T_j)T_j T_v\n\\end{align}\n

These equations are used to determine the shape of a path. Additionally, by applying further mathematical operations to these basic equations, the expressions of the following subsections can be derived.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/#calculation-of-maximum-acceleration-from-transition-time-and-final-shift-length","title":"Calculation of Maximum Acceleration from transition time and final shift length","text":"

In the case where there are no limitations on lateral velocity and lateral acceleration, the maximum lateral acceleration during the shifting can be calculated as follows. The constant-jerk time is given by T_j = T_{\\rm total}/4 because of its symmetric property. Since T_a=T_v=0, the final shift length L=l_7=2jT_j^3 can be determined using the above equation. The maximum lateral acceleration is then given by a_{\\rm max} =jT_j. This results in the following expression for the maximum lateral acceleration:

\\begin{align}\na_{\\rm max}  = \\frac{8L}{T_{\\rm total}^2}\n\\end{align}\n
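
For example, a final shift length of L = 2.0 m completed in T_total = 4.0 s implies a maximum lateral acceleration of 8 x 2.0 / 4.0^2 = 1.0 m/s^2.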
"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/#calculation-of-ta-tj-and-jerk-from-acceleration-limit","title":"Calculation of Ta, Tj and jerk from acceleration limit","text":"

In the case where there are no limitations on lateral velocity, the constant-jerk and acceleration times, as well as the required jerk can be calculated from the acceleration limit, total time, and final shift length as follows. Since T_v=0, the final shift length is given by:

\\begin{align}\nL = l_7 = 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j\n\\end{align}\n

Additionally, the velocity profile reveals the following relations:

\\begin{align}\na_{\\rm lim} &= j T_j\\\\\nT_{\\rm total} &= 4T_j + 2T_a\n\\end{align}\n

By solving these three equations, the following can be obtained:

\\begin{align}\nT_j&=\\frac{T_{\\rm total}}{2} - \\frac{2L}{a_{\\rm lim} T_{\\rm total}}\\\\[10pt]\nT_a&=\\frac{4L}{a_{\\rm lim} T_{\\rm total}} - \\frac{T_{\\rm total}}{2}\\\\[10pt]\njerk&=\\frac{2a_{\\rm lim} ^2T_{\\rm total}}{a_{\\rm lim} T_{\\rm total}^2-4L}\n\\end{align}\n

where T_j is the constant-jerk time, T_a is the constant acceleration time, j is the required jerk, a_{\\rm lim} is the acceleration limit, and L is the final shift length.
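
As a quick numerical sanity check of these expressions, with assumed sample values rather than parameters from the implementation:

// Assumed sample values: a_lim = 0.5 m/s^2, T_total = 6.0 s, L = 3.0 m.\nconst double a_lim = 0.5, T_total = 6.0, L = 3.0;\nconst double T_j = T_total / 2.0 - 2.0 * L / (a_lim * T_total);  // = 1.0 s\nconst double T_a = 4.0 * L / (a_lim * T_total) - T_total / 2.0;  // = 1.0 s\nconst double jerk =\n  2.0 * a_lim * a_lim * T_total / (a_lim * T_total * T_total - 4.0 * L);  // = 0.5 m/s^3\n// Consistency: T_total = 4 * T_j + 2 * T_a = 6.0 and a_lim = jerk * T_j = 0.5 both hold.\n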

"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/#calculation-of-required-time-from-jerk-and-acceleration-constraint","title":"Calculation of Required Time from Jerk and Acceleration Constraint","text":"

In the case where there are no limitations on lateral velocity, the total time required for shifting can be calculated from the jerk and acceleration limits and the final shift length as follows. By solving the two equations given above:

L = l_7 = 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j,\\quad a_{\\rm lim} = j T_j\n

we obtain the following expressions:

\begin{align}\nT_j &= \frac{a_{\rm lim}}{j}\\[10pt]\nT_a &= \frac{1}{2}\sqrt{\left(\frac{a_{\rm lim}}{j}\right)^2 + \frac{4L}{a_{\rm lim}}} - \frac{3a_{\rm lim}}{2j}\n\end{align}\n

The total time required for shifting can then be calculated as T_{\\rm total}=4T_j+2T_a.
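
A minimal sketch of this computation (hypothetical helper; T_a becomes negative when the acceleration limit cannot be reached for a small L):

// Hypothetical sketch of the required-time computation above.\n#include <cmath>\n\ndouble calc_total_shift_time(const double jerk, const double a_lim, const double L)\n{\n  const double T_j = a_lim / jerk;\n  const double T_a = 0.5 * std::sqrt(T_j * T_j + 4.0 * L / a_lim) - 1.5 * T_j;\n  return 4.0 * T_j + 2.0 * T_a;\n}\n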

"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/#limitation","title":"Limitation","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/","title":"Pull Out design","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#purpose-role","title":"Purpose / Role","text":"

Pull out from the shoulder lane without colliding with objects.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#design","title":"Design","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#general-parameters-for-pull_out","title":"General parameters for pull_out","text":"Name Unit Type Description Default value th_arrived_distance_m [m] double distance threshold for arrival of path termination 1.0 th_stopped_velocity_mps [m/s] double velocity threshold for arrival of path termination 0.01 th_stopped_time_sec [s] double time threshold for arrival of path termination 1.0 collision_check_margin [m] double Obstacle collision check margin 1.0 collision_check_distance_from_end [m] double collision check distance from end point. currently only for pull out 15.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#safe-check-with-obstacles-in-shoulder-lane","title":"Safe check with obstacles in shoulder lane","text":"
  1. Calculate the ego vehicle's footprint on the pull out path from the current position to the pull out end point. (Illustrated by the blue frame.)
  2. Calculate the polygons of objects located in the shoulder lane.
  3. If the distance between the footprint and a polygon is lower than the threshold (default: 1.0 m), the path is judged as unsafe (see the sketch below).
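
A minimal sketch of this check, assuming boost::geometry polygon types (Polygon2d and the container types are assumptions, not the actual implementation):

// Hypothetical sketch of the pull out safe check described above.\nbool is_pull_out_path_safe(\n  const std::vector<Polygon2d> & ego_footprints,\n  const std::vector<Polygon2d> & shoulder_lane_objects, const double collision_check_margin)\n{\n  for (const auto & footprint : ego_footprints) {\n    for (const auto & object : shoulder_lane_objects) {\n      if (boost::geometry::distance(footprint, object) < collision_check_margin) {\n        return false;  // judged as an unsafe path\n      }\n    }\n  }\n  return true;\n}\n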
"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#path-generation","title":"Path Generation","text":"

There are two path generation methods.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#shift-pull-out","title":"shift pull out","text":"

The pull out distance is calculated from the speed, lateral deviation, and lateral jerk. The lateral jerk is searched for between the predetermined minimum and maximum values, and the value that generates a safe path is selected.

shift pull out video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#parameters-for-shift-pull-out","title":"parameters for shift pull out","text":"Name Unit Type Description Default value enable_shift_pull_out [-] bool flag whether to enable shift pull out true shift_pull_out_velocity [m/s] double velocity of shift pull out 2.0 pull_out_sampling_num [-] int Number of samplings in the minimum to maximum range of lateral_jerk 4 maximum_lateral_jerk [m/s3] double maximum lateral jerk 2.0 minimum_lateral_jerk [m/s3] double minimum lateral jerk 0.5 minimum_shift_pull_out_distance [m] double minimum shift pull out distance. if calculated pull out distance is shorter than this, use this for path generation. 20.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#geometric-pull-out","title":"geometric pull out","text":"

Generate two arc paths with discontinuous curvature. The ego vehicle stops once in the middle of the path to adjust the steering on the spot. See also [1] for details of the algorithm.

geometric pull out video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#parameters-for-geometric-pull-out","title":"parameters for geometric pull out","text":"Name Unit Type Description Default value enable_geometric_pull_out [-] bool flag whether to enable geometric pull out true divide_pull_out_path [-] bool flag whether to divide arc paths.\u3000The path is assumed to be divided because the curvature is not continuous. But it requires a stop during the departure. false geometric_pull_out_velocity [m/s] double velocity of geometric pull out 1.0 arc_path_interval [m] double path points interval of arc paths of geometric pull out 1.0 lane_departure_margin [m] double margin of deviation to lane right 0.2 pull_out_max_steer_angle [rad] double maximum steer angle for path generation 0.26"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#backward-pull-out-start-point-search","title":"backward pull out start point search","text":"

If a safe path cannot be generated from the current position, search backwards for a pull out start point at regular intervals (default: 2.0 m).

pull out after backward driving video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#parameters-for-backward-pull-out-start-point-search","title":"parameters for backward pull out start point search","text":"Name Unit Type Description Default value enable_back [-] bool flag whether to search backward for start_point true search_priority [-] string In the case of efficient_path, use efficient paths even if the back distance is longer. In case of short_back_distance, use a path with as short a back distance efficient_path max_back_distance [m] double maximum back distance 30.0 backward_search_resolution [m] double distance interval for searching backward pull out start point 2.0 backward_path_update_duration [s] double time interval for searching backward pull out start point. this prevents chattering between back driving and pull_out 3.0 ignore_distance_from_lane_end [m] double distance from end of pull out lanes for ignoring start candidates 15.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/","title":"Pull Over design","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#purpose-role","title":"Purpose / Role","text":"

Search for a space where there are no objects and pull over there.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#design","title":"Design","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#general-parameters-for-pull_over","title":"General parameters for pull_over","text":"Name Unit Type Description Default value request_length [m] double when the ego-vehicle approaches the goal by this distance, the module is activated. 200.0 th_arrived_distance [m] double distance threshold for arrival of path termination 1.0 th_stopped_velocity [m/s] double velocity threshold for arrival of path termination 0.01 th_stopped_time [s] double time threshold for arrival of path termination 2.0 pull_over_velocity [m/s] double decelerate to this speed by the goal search area 2.0 pull_over_minimum_velocity [m/s] double speed of pull_over after stopping once. this prevents excessive acceleration. 1.38 margin_from_boundary [m] double distance margin from edge of the shoulder lane 0.5 decide_path_distance [m] double decide path if it approaches this distance relative to the parking position. after that, no path planning and goal search are performed 10.0 maximum_deceleration [m/s2] double maximum deceleration. it prevents sudden deceleration when a parking path cannot be found suddenly 1.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#collision-check","title":"collision check","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#occupancy-grid-based-collision-check","title":"occupancy grid based collision check","text":"

Generate footprints from the ego-vehicle path points and determine obstacle collisions from the values of the corresponding occupancy_grid cells.
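
A minimal sketch of this check (IndexXY and the function are assumed helpers, not the actual implementation):

// Hypothetical sketch of the occupancy grid based collision check.\n#include <nav_msgs/msg/occupancy_grid.hpp>\n#include <vector>\n\nstruct IndexXY { int x; int y; };  // assumed helper type\n\nbool has_collision(\n  const nav_msgs::msg::OccupancyGrid & grid, const std::vector<IndexXY> & footprint_cells,\n  const int obstacle_threshold)\n{\n  for (const auto & cell : footprint_cells) {\n    const int value = grid.data[cell.y * grid.info.width + cell.x];\n    if (value >= obstacle_threshold) {\n      return true;  // a cell at or above the threshold is treated as an obstacle\n    }\n  }\n  return false;\n}\n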

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-for-occupancy-grid-based-collision-check","title":"Parameters for occupancy grid based collision check","text":"Name Unit Type Description Default value use_occupancy_grid [-] bool flag whether to use occupancy grid for collision check true use_occupancy_grid_for_longitudinal_margin [-] bool flag whether to use occupancy grid for keeping longitudinal margin false occupancy_grid_collision_check_margin [m] double margin to calculate ego-vehicle cells from footprint. 0.0 theta_size [-] int size of theta angle to be considered. angular resolution for collision check will be 2\\pi / theta_size [rad]. 360 obstacle_threshold [-] int threshold of cell values to be considered as obstacles 60"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#onject-recognition-based-collision-check","title":"onject recognition based collision check","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-for-object-recognition-based-collision-check","title":"Parameters for object recognition based collision check","text":"Name Unit Type Description Default value use_object_recognition [-] bool flag whether to use object recognition for collision check true object_recognition_collision_check_margin [m] double margin to calculate ego-vehicle cells from footprint. 1.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#goal-search","title":"Goal Search","text":"

If it is not possible to park safely at a given goal, /planning/scenario_planning/modified_goal is searched for within a certain range of the shoulder lane.

goal search video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-for-goal-search","title":"Parameters for goal search","text":"Name Unit Type Description Default value search_priority [-] string In case efficient_path use a goal that can generate an efficient path( priority is shift_parking -> arc_forward_parking -> arc_backward_parking). In case close_goal use the closest goal to the original one. efficient_path enable_goal_research [-] double flag whether to search goal true forward_goal_search_length [m] double length of forward range to be explored from the original goal 20.0 backward_goal_search_length [m] double length of backward range to be explored from the original goal 20.0 goal_search_interval [m] double distance interval for goal search 2.0 longitudinal_margin [m] double margin between ego-vehicle at the goal position and obstacles 3.0 max_lateral_offset [m] double maximum offset of goal search in the lateral direction 3.0 lateral_offset_interval [m] double distance interval of goal search in the lateral direction 3.0 ignore_distance_from_lane_start [m] double distance from start of pull over lanes for ignoring goal candidates 15.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#path-generation","title":"Path Generation","text":"

There are three path generation methods. The path is generated with a certain margin (default: 0.5 m) from the left boundary of the shoulder lane.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#shift-parking","title":"shift parking","text":"

The pull over distance is calculated from the speed, the lateral deviation, and the lateral jerk. The lateral jerk is searched for between the predetermined minimum and maximum values, and the value that satisfies the ready conditions described above is output (see the sketch after the list below).

  1. Apply a uniform offset to the centerline of the shoulder lane to ensure the margin
  2. In the section between the merge start and end, the path is shifted by the same method used to generate avoidance paths (four segmental constant-jerk polynomials)
  3. Combine this path with the centerline of the road lane
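
A minimal sketch of the jerk search, assuming the standard four-segment constant-jerk shift profile for which the shift satisfies lateral = jerk * T^3 / 32; the function names and the feasibility condition stand in for the actual ready conditions:

```cpp
#include <algorithm>
#include <cmath>
#include <optional>

// Longitudinal distance needed to shift laterally by `lateral` [m] at speed
// `velocity` [m/s] with a four-segment constant-jerk profile of jerk `jerk`.
// For that profile the shift length satisfies lateral = jerk * T^3 / 32.
double pull_over_distance(double lateral, double jerk, double velocity) {
  const double T = std::cbrt(32.0 * std::abs(lateral) / jerk);  // shift duration [s]
  return velocity * T;
}

// Sample jerks between minimum_lateral_jerk and maximum_lateral_jerk
// (pull_over_sampling_num samples) and return the first distance that fits in
// the remaining distance to the goal; a stand-in for the ready conditions.
std::optional<double> search_feasible_distance(
  double lateral, double velocity, double distance_to_goal,
  double min_jerk = 0.5, double max_jerk = 2.0, int num = 4)
{
  for (int i = 0; i < num; ++i) {
    const double j = min_jerk + (max_jerk - min_jerk) * i / std::max(num - 1, 1);
    const double d = pull_over_distance(lateral, j, velocity);
    if (d <= distance_to_goal) return d;  // lower jerk is preferred when it fits
  }
  return std::nullopt;  // no jerk in range satisfies the condition
}
```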

shift_parking video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-for-shift-parking","title":"Parameters for shift parking","text":"Name Unit Type Description Default value enable_shift_parking [-] bool flag whether to enable shift parking true pull_over_sampling_num [-] int Number of samplings in the minimum to maximum range of lateral_jerk 4 maximum_lateral_jerk [m/s3] double maximum lateral jerk 2.0 minimum_lateral_jerk [m/s3] double minimum lateral jerk 0.5 deceleration_interval [m] double distance of deceleration section 15.0 after_pull_over_straight_distance [m] double straight line distance after pull over end point 5.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#geometric-parallel-parking","title":"geometric parallel parking","text":"

Generate two arc paths with discontinuous curvature. The vehicle stops twice in the middle of the path to steer on the spot. There are two path generation methods: forward and backward. See also [1] for details of the algorithm. There is also a simple python implementation.
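
The two-arc idea can be sketched as follows; this is a simplified forward variant with equal radii producing a pure lateral offset d, and the actual module additionally handles headings, lane boundaries, and the backward variant:

```cpp
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Two tangent arcs of equal radius R that produce a pure lateral offset d.
// Each arc turns by theta, where d = 2 * R * (1 - cos(theta)).
std::vector<Point> two_arc_path(double R, double d, double arc_path_interval = 1.0) {
  std::vector<Point> pts;
  const double theta = std::acos(1.0 - d / (2.0 * R));  // turning angle of each arc
  const double step = arc_path_interval / R;            // angle step for the interval
  // first arc: center at (0, R), curving toward the shoulder
  for (double a = 0.0; a <= theta; a += step) {
    pts.push_back({R * std::sin(a), R * (1.0 - std::cos(a))});
  }
  // second arc: mirrored, ending parallel to the start heading at offset d;
  // the curvature flips sign at the junction, which is where the vehicle
  // stops to steer on the spot
  const Point mid = pts.back();
  for (double a = theta; a >= 0.0; a -= step) {
    pts.push_back({mid.x + R * std::sin(theta) - R * std::sin(a),
                   d - R * (1.0 - std::cos(a))});
  }
  return pts;
}
```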

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-geometric-parallel-parking","title":"Parameters geometric parallel parking","text":"Name Unit Type Description Default value arc_path_interval [m] double interval between arc path points 1.0 pull_over_max_steer_rad [rad] double maximum steer angle for path generation. it may not be possible to control steer up to max_steer_angle in vehicle_info when stopped 0.35"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#arc-forward-parking","title":"arc forward parking","text":"

Generate two forward arc paths.

arc_forward_parking video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-arc-forward-parking","title":"Parameters arc forward parking","text":"Name Unit Type Description Default value enable_arc_forward_parking [-] bool flag whether to enable arc forward parking true after_forward_parking_straight_distance [m] double straight line distance after pull over end point 2.0 forward_parking_velocity [m/s] double velocity when forward parking 1.38 forward_parking_lane_departure_margin [m/s] double lane departure margin for front left corner of ego-vehicle when forward parking 0.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#arc-backward-parking","title":"arc backward parking","text":"

Generate two backward arc paths.


arc_backward_parking video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-arc-backward-parking","title":"Parameters arc backward parking","text":"Name Unit Type Description Default value enable_arc_backward_parking [-] bool flag whether to enable arc backward parking true after_backward_parking_straight_distance [m] double straight line distance after pull over end point 2.0 backward_parking_velocity [m/s] double velocity when backward parking -1.38 backward_parking_lane_departure_margin [m/s] double lane departure margin for front right corner of ego-vehicle when backward 0.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#freespace-parking","title":"freespace parking","text":"

If the vehicle gets stuck with lane_parking, run freespace_parking. To run this feature, you need to set parking_lot in the map, set activate_by_scenario of costmap_generator to false, and set enable_freespace_parking to true.

*Series execution with avoidance_module in the flowchart is under development.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#unimplemented-parts-limitations-for-freespace-parking","title":"Unimplemented parts / limitations for freespace parking","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-freespace-parking","title":"Parameters freespace parking","text":"Name Unit Type Description Default value enable_freespace_parking [-] bool This flag enables freespace parking, which runs when the vehicle is stuck due to e.g. obstacles in the parking area. true

See freespace_planner for other parameters.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_side_shift_design/","title":"Side Shift design","text":"

(For remote control) Shift the path to left or right according to an external instruction.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_side_shift_design/#flowchart","title":"Flowchart","text":"

```plantuml
@startuml
skinparam monochrome true
skinparam defaultTextAlignment center
skinparam noteTextAlignment left

title path generation

start
partition plan {
  if (lateral_offset_change_request_ == true \n && \n (shifting_status_ == BEFORE_SHIFT \n || \n shifting_status_ == AFTER_SHIFT)) then ( true)
    partition replace-shift-line {
      if ( shift line is inserted in the path ) then ( yes)
        :erase left shift line;
      else ( no)
      endif
      :calcShiftLines;
      :add new shift lines;
      :inserted_lateral_offset_ = requested_lateral_offset_ \n inserted_shift_lines_ = new_shift_line;
    }
  else( false)
  endif
}
stop
@enduml
```

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/","title":"Turn Signal design","text":"

Turn Signal decider determines necessary blinkers.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#purpose-role","title":"Purpose / Role","text":"

This module is responsible for activating a necessary blinker during driving. It uses a rule-based algorithm to determine blinkers, and the details of this algorithm are described in the following sections. Note that this algorithm is strictly based on the Japanese Road Traffic Law.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#assumptions","title":"Assumptions","text":"

Autoware has the following order of priorities for turn signals.

  1. Activate turn signal to safely navigate ego vehicle and protect other road participants
  2. Follow traffic laws
  3. Follow human driving practices
"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#limitations","title":"Limitations","text":"

Currently, this algorithm can sometimes give unnatural (not wrong) blinkers in complicated situations. This is because it tries to follow the road traffic law and cannot clearly solve blinker conflicts in such environments.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#parameters-for-turn-signal-decider","title":"Parameters for turn signal decider","text":"Name Unit Type Description Default value turn_signal_intersection_search_distance [m] double constant search distance to decide activation of blinkers at intersections 30 turn_signal_intersection_angle_threshold_degree deg double angle threshold to determined the end point of intersection required section 15 turn_signal_minimum_search_distance [m] double minimum search distance for avoidance and lane change 10 turn_signal_search_time [s] double search time for to decide activation of blinkers 3.0 turn_signal_shift_length_threshold [m] double shift length threshold to decide activation of blinkers 0.3 turn_signal_on_swerving [-] bool flag to activate blinkers when swerving true

Note that the default values for turn_signal_intersection_search_distance and turn_signal_search_time strictly follow the Japanese Road Traffic Law. If these default values are not suitable for your country, you should change them in the configuration files.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This algorithm assumes that each blinker has two sections: a desired section and a required section. These two sections are depicted in the following diagram.

These two sections have the following meanings.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#-desired-section","title":"- Desired Section","text":"
- This section is defined by road traffic laws. It cannot be longer or shorter than the designated length defined by the law.
- In this section, you do not have to activate the designated blinkers if it is dangerous to do so.
"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#-required-section","title":"- Required Section","text":"
- In this section, the ego vehicle must activate designated blinkers. However, if there are blinker conflicts, it must solve them based on the algorithm we mention later in this document.
- The required section cannot be longer than the desired section.

For left turn, right turn, avoidance, lane change, pull over and pull out, we define these two sections, which are elaborated in the following part.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#1-left-and-right-turn","title":"1. Left and Right turn","text":"

The turn signal decider checks whether each lanelet on the map has turn_direction information. If a lanelet has this information, it activates the necessary blinker based on it.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#2-avoidance","title":"2. Avoidance","text":"

Avoidance can be separated into two sections: the first section and the second section. The first section is from the start point of the path shift to the end of the path shift. The second section is from the end of the path shift to the end of the avoidance maneuver. Note that the avoidance module will not activate the turn signal when its shift length is below turn_signal_shift_length_threshold.

First section

Second section

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#3-lane-change","title":"3. Lane Change","text":" "},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#4-pull-out","title":"4. Pull out","text":" "},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#5-pull-over","title":"5. Pull over","text":"

When handling several blinkers, priority is given to the blinker that comes first. However, this rule sometimes activates unnatural blinkers, so the turn signal decider uses the following five rules to decide the necessary turn signal.

Based on these five rules, turn signal decider can solve blinker conflicts. The following pictures show some examples of this kind of conflicts.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#-several-right-and-left-turns-on-short-sections","title":"- Several right and left turns on short sections","text":"

In this scenario, the ego vehicle has to pass several turns that are close to each other. Since this pattern can be solved by the pattern1 rule, the overall result is depicted in the following picture.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#-avoidance-with-left-turn-1","title":"- Avoidance with left turn (1)","text":"

In this scene, the ego vehicle has to deal with an obstacle on its original path as well as make a left turn. The overall result can vary with the position of the obstacle, but an image of the result is shown in the following picture.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#-avoidance-with-left-turn-2","title":"- Avoidance with left turn (2)","text":"

As in the previous scenario, the ego vehicle has to avoid the obstacle as well as make a left turn. However, in this scene, the obstacle is parked after the intersection. Similar to the previous one, the overall result can vary with the position of the obstacle, but an image of the result is shown in the following picture.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#-lane-change-and-left-turn","title":"- Lane change and left turn","text":"

In this scenario, the ego vehicle has to change lanes before making a left turn. In the following example, the ego vehicle does not activate the left turn signal until it reaches the end point of the lane change path.

"},{"location":"planning/behavior_velocity_planner/","title":"Behavior Velocity Planner","text":""},{"location":"planning/behavior_velocity_planner/#overview","title":"Overview","text":"

behavior_velocity_planner is a planner that adjusts velocity based on traffic rules. It consists of several modules. Please refer to the links listed below for details on each module.

Each module plans velocity with respect to the base_link (center of the rear-wheel axle) pose. For example, in order to stop at a stop line with the vehicle's front on the stop line, it calculates the base_link position from the distance between base_link and the vehicle front, and modifies the path velocity from that base_link position.
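
A minimal sketch of this offset handling; the PathPoint type and the arc-length bookkeeping are simplifying assumptions:

```cpp
#include <vector>

struct PathPoint { double arc_length; double velocity; };

// To stop with the vehicle front on the stop line, shift the stop point back
// by the base_link-to-front distance and zero the velocity from there on.
void insert_stop_for_front(std::vector<PathPoint> & path,
                           double stop_line_arc_length,
                           double base_link_to_front) {
  // stop pose of base_link, measured along the path
  const double stop_arc_length = stop_line_arc_length - base_link_to_front;
  for (auto & p : path) {
    if (p.arc_length >= stop_arc_length) p.velocity = 0.0;  // stop from here on
  }
}
```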

"},{"location":"planning/behavior_velocity_planner/#input-topics","title":"Input topics","text":"Name Type Description ~input/path_with_lane_id autoware_auto_planning_msgs::msg::PathWithLaneId path with lane_id ~input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin vector map ~input/vehicle_odometry nav_msgs::msg::Odometry vehicle velocity ~input/dynamic_objects autoware_auto_perception_msgs::msg::PredictedObjects dynamic objects ~input/no_ground_pointcloud sensor_msgs::msg::PointCloud2 obstacle pointcloud ~/input/compare_map_filtered_pointcloud sensor_msgs::msg::PointCloud2 obstacle pointcloud filtered by compare map. Note that this is used only when the detection method of run out module is Points. ~input/traffic_signals autoware_auto_perception_msgs::msg::TrafficSignalArray traffic light states"},{"location":"planning/behavior_velocity_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/path autoware_auto_planning_msgs::msg::Path path to be followed ~output/stop_reasons tier4_planning_msgs::msg::StopReasonArray reasons that cause the vehicle to stop"},{"location":"planning/behavior_velocity_planner/#node-parameters","title":"Node parameters","text":"Parameter Type Description launch_blind_spot bool whether to launch blind_spot module launch_crosswalk bool whether to launch crosswalk module launch_detection_area bool whether to launch detection_area module launch_intersection bool whether to launch intersection module launch_traffic_light bool whether to launch traffic light module launch_stop_line bool whether to launch stop_line module launch_occlusion_spot bool whether to launch occlusion_spot module launch_run_out bool whether to launch run_out module launch_speed_bump bool whether to launch speed_bump module forward_path_length double forward path length backward_path_length double backward path length max_accel double (to be a global parameter) max acceleration of the vehicle system_delay double (to be a global parameter) delay time until output control command delay_response_time double (to be a global parameter) delay time of the vehicle's response to control commands"},{"location":"planning/behavior_velocity_planner/blind-spot-design/","title":"Blind spot design","text":""},{"location":"planning/behavior_velocity_planner/blind-spot-design/#blind-spot","title":"Blind Spot","text":""},{"location":"planning/behavior_velocity_planner/blind-spot-design/#role","title":"Role","text":"

Checks the blind spot while turning right/left using dynamic object information, and plans a start/stop velocity.

"},{"location":"planning/behavior_velocity_planner/blind-spot-design/#activation-timing","title":"Activation Timing","text":"

This function is activated when the lane id of the target path has an intersection label (i.e. the turn_direction attribute is left or right).

"},{"location":"planning/behavior_velocity_planner/blind-spot-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Sets a stop line, a pass judge line, a detection area and a conflict area based on map information and the ego position.

Stop/Go state: When both conditions are met for any object, this module transitions to the \"stop\" state and inserts zero velocity to stop the vehicle.

In order to avoid a rapid stop, the \u201cstop\u201d judgement is not executed after the judgment line is passed.

Once a \"stop\" is judged, it will not transit to the \"go\" state until the \"go\" judgment continues for a certain period in order to prevent chattering of the state (e.g. 2 seconds).

"},{"location":"planning/behavior_velocity_planner/blind-spot-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_line_margin double [m] a margin that the vehicle tries to stop before stop_line backward_length double [m] distance from closest path point to the edge of beginning point. ignore_width_from_center_line double [m] ignore threshold that vehicle behind is collide with ego vehicle or not max_future_movement_time double [s] maximum time for considering future movement of object"},{"location":"planning/behavior_velocity_planner/blind-spot-design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/crosswalk-design/","title":"Crosswalk design","text":""},{"location":"planning/behavior_velocity_planner/crosswalk-design/#crosswalk","title":"Crosswalk","text":""},{"location":"planning/behavior_velocity_planner/crosswalk-design/#role","title":"Role","text":"

This module judges whether the ego should stop in front of the crosswalk in order to provide safe passage for pedestrians and bicycles, based on the objects' behavior and surrounding traffic.

crosswalk module"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#activation-timing","title":"Activation Timing","text":"

The manager launches crosswalk scene modules when the reference path conflicts with crosswalk lanelets.

"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#module-parameters","title":"Module Parameters","text":""},{"location":"planning/behavior_velocity_planner/crosswalk-design/#common-parameters","title":"Common parameters","text":"Parameter Type Description show_processing_time bool whether to show processing time"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#parameters-for-stop-position","title":"Parameters for stop position","text":"

The crosswalk module determines a stop position at least stop_margin away from the object.

stop margin

The stop line is the reference point for the stopping position of the vehicle, but if there is no stop line in front of the crosswalk, the position stop_line_distance meters before the crosswalk serves as a virtual stop line for the vehicle. Then, if the stop position determined from stop_margin is in front of the stop line determined from the HDMap or stop_line_distance, the actual stop position is in principle determined according to stop_margin, and vice versa.
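
A sketch of this precedence rule, assuming distances measured along the ego path (smaller = nearer to ego); the function and its arguments are illustrative, not the module's API:

```cpp
#include <algorithm>

// Choose the stop position in front of a crosswalk (sketch).
double select_stop_position(bool has_map_stop_line,
                            double map_stop_line_s,     // from the HDMap
                            double crosswalk_front_s,   // front edge of crosswalk
                            double object_s,            // nearest target object
                            double stop_line_distance,  // virtual stop line offset
                            double stop_margin) {
  // explicit stop line, or a virtual one stop_line_distance before the crosswalk
  const double stop_line_s =
    has_map_stop_line ? map_stop_line_s : crosswalk_front_s - stop_line_distance;
  // position that keeps stop_margin to the object
  const double margin_stop_s = object_s - stop_margin;
  // the stop_margin-based position takes precedence when it is nearer (in front)
  return std::min(stop_line_s, margin_stop_s);
}
```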

explicit stop line

virtual stop point

On the other hand, if a pedestrian (or bicycle) is crossing a wide crosswalk such as those seen at scramble intersections, and the pedestrian position is more than stop_line_margin meters away from the stop line, the actual stop position is determined from stop_margin and the pedestrian position, not the stop line.

stop at wide crosswalk

See the workflow in algorithms section.

Parameter Type Description stop_margin double [m] the vehicle decelerates to be able to stop in front of the object with this margin stop_line_distance double [m] make the stop line this far away from the crosswalk when no explicit stop line exists stop_line_margin double [m] if objects cross X meters behind the stop line, the stop position is determined according to the object position (stop_margin meters before the object) stop_position_threshold double [m] threshold for checking whether the vehicle stops in front of the crosswalk"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#parameters-for-ego-velocity","title":"Parameters for ego velocity","text":"Parameter Type Description slow_velocity double [m/s] target vehicle velocity when the module receives a slow down command from FOA max_slow_down_jerk double [m/s^3] minimum jerk deceleration for safe brake max_slow_down_accel double [m/s^2] minimum accel deceleration for safe brake no_relax_velocity double [m/s] if the current velocity is less than X m/s, ego always stops at the stop position (does not relax deceleration constraints)

If there is a low-speed or stopped vehicle ahead of the crosswalk, and there is not enough space between the crosswalk and the vehicle (see the following figure), closing the distance to that vehicle could leave the ego vehicle stuck on the crosswalk. In this situation, the module plans to stop before the crosswalk and wait until the vehicle moves away, even if there are no pedestrians or bicycles.

stuck vehicle attention range Parameter Type Description stuck_vehicle_velocity double [m/s] maximum velocity threshold whether the vehicle is stuck max_lateral_offset double [m] maximum lateral offset for stuck vehicle position should be looked stuck_vehicle_attention_range double [m] the detection area is defined as X meters behind the crosswalk"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#parameters-for-pass-judge-logic","title":"Parameters for pass judge logic","text":"

Also see algorithm section.

Parameter Type Description ego_pass_first_margin double [s] time margin for ego pass first situation ego_pass_later_margin double [s] time margin for object pass first situation stop_object_velocity_threshold double [m/s] velocity threshold for the module to judge whether the object is stopped min_object_velocity double [m/s] minimum object velocity (compare the estimated velocity by perception module with this parameter and adopt the larger one to calculate TTV.) max_yield_timeout double [s] if the pedestrian does not move for X seconds after stopping before the crosswalk, the module judges that ego is able to pass first. ego_yield_query_stop_duration double [s] the amount of time ego should remain stopped to query whether it yields or not.

As a countermeasure against pedestrians attempting to cross outside the crosswalk area, this module watches not only the crosswalk zebra area but also the space in front of and behind the crosswalk; if there are pedestrians or bicycles attempting to pass through the watched area, the module judges whether ego should pass or stop.

crosswalk attention range

This module mainly treats the following objects as target objects. There are also optional flags that enable the pass/stop decision for motorcycle and unknown objects.

Parameter Type Description crosswalk_attention_range double [m] the detection area is defined as -X meters before the crosswalk to +X meters behind the crosswalk target/unknown bool whether to look and stop by UNKNOWN objects target/bicycle bool whether to look and stop by BICYCLE objects target/motorcycle bool whether to look and stop by MOTORCYCLE objects target/pedestrian bool whether to look and stop by PEDESTRIAN objects

The stop position is determined by the existence of the stop line defined by the HDMap, the positional relationship between the stop line and the pedestrians and bicycles, and each parameter.

"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#pass-judge-logic","title":"Pass judge logic","text":"

At first, this module determines whether the pedestrians or bicycles are likely to cross the crosswalk based on the color of the pedestrian traffic light signal related to the crosswalk. Only when the pedestrian traffic signal is RED does this module judge that the objects will not cross the crosswalk and skip the pass judge logic.

Secondly, this module decides whether ego should stop in front of the crosswalk or pass through based on the relative relationship between TTC (Time-To-Collision) and TTV (Time-To-Vehicle). The TTC is the time it takes for ego to reach the virtual collision point, and the TTV is the time it takes for the object to reach the virtual collision point.

virtual collision point

Depending on the relative relationship between TTC and TTV, the ego's behavior at crosswalks can be classified into three categories.

  1. TTC >> TTV: The objects have enough time to cross first before ego reaches the crosswalk. (Type-A)
  2. TTC \u2252 TTV: There is a risk of a near miss and collision between ego and objects at the virtual collision point. (Type-B)
  3. TTC << TTV: Ego has enough time to pass through the crosswalk before the objects reach the virtual collision point. (Type-C)

This module judges that ego is able to pass through the crosswalk without collision risk when the relative relationship between TTC and TTV is Type-A or Type-C. On the other hand, this module judges that ego needs to stop in front of the crosswalk to prevent collision with objects in the Type-B condition. The time margins can be set by the parameters ego_pass_first_margin and ego_pass_later_margin. This logic is designed based on [1].
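
A compact sketch of this classification; the function shape is illustrative, and min_object_velocity is the division-by-zero guard described below:

```cpp
#include <algorithm>

enum class Decision { Pass, Stop };

// Classify the TTC/TTV relationship (sketch of the pass judge idea).
Decision pass_judge(double dist_ego_to_collision, double ego_velocity,
                    double dist_obj_to_collision, double object_velocity,
                    double ego_pass_first_margin, double ego_pass_later_margin,
                    double min_object_velocity) {
  const double ttc = dist_ego_to_collision / std::max(ego_velocity, 1e-3);
  const double ttv = dist_obj_to_collision /
                     std::max(object_velocity, min_object_velocity);
  if (ttc + ego_pass_first_margin < ttv) return Decision::Pass;  // Type-C: ego passes first
  if (ttv + ego_pass_later_margin < ttc) return Decision::Pass;  // Type-A: object passes first
  return Decision::Stop;                                         // Type-B: near miss risk
}
```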

time-to-collision vs time-to-vehicle

This module uses the larger value of estimated object velocity and min_object_velocity in calculating TTV in order to avoid division by zero.

"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#dead-lock-prevention","title":"Dead lock prevention","text":"

If there are objects stopped within a radius of min_object_velocity * ego_pass_later_margin meters from the virtual collision point, this module judges that ego should stop based on the pass judge logic described above at all times. In such a situation, even if the pedestrian has no intention of crossing, ego continues the stop decision on the spot. So, this module has another logic for deadlock prevention: if the object continues to stop for more than max_yield_timeout seconds after ego stops in front of the crosswalk, the module judges that the object has no intention of crossing and switches from the STOP state to the PASS state. The parameter stop_object_velocity_threshold is used to judge whether the objects are stopped or not. In addition, if the object starts to move after the module has judged that it has no intention of crossing, this module judges once again whether ego should stop or not.

dead lock situation"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#safety-slow-down-behavior","title":"Safety Slow Down Behavior","text":"

In the current Autoware implementation, if there are no target objects around a crosswalk, the ego vehicle will not slow down for the crosswalk. However, if it is desired that the ego vehicle slow down to a certain speed in such cases, this is possible by adding some tags to the related crosswalk definition as instructed in the lanelet2_format_extension.md document.

"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#limitations","title":"Limitations","text":"

When multiple crosswalks are nearby (such as at intersections), this module may make a stop decision even at crosswalks where the object has no intention of crossing.

design limits"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#known-issues","title":"Known Issues","text":""},{"location":"planning/behavior_velocity_planner/crosswalk-design/#referencesexternal-links","title":"References/External links","text":"

[1] \u4f50\u85e4 \u307f\u306a\u307f, \u65e9\u5742 \u7965\u4e00, \u6e05\u6c34 \u653f\u884c, \u6751\u91ce \u9686\u5f66, \u6a2a\u65ad\u6b69\u884c\u8005\u306b\u5bfe\u3059\u308b\u30c9\u30e9\u30a4\u30d0\u306e\u30ea\u30b9\u30af\u56de\u907f\u884c\u52d5\u306e\u30e2\u30c7\u30eb\u5316, \u81ea\u52d5\u8eca\u6280\u8853\u4f1a\u8ad6\u6587\u96c6, 2013, 44 \u5dfb, 3 \u53f7, p. 931-936.

"},{"location":"planning/behavior_velocity_planner/detection-area-design/","title":"Detection area design","text":""},{"location":"planning/behavior_velocity_planner/detection-area-design/#detection-area","title":"Detection Area","text":""},{"location":"planning/behavior_velocity_planner/detection-area-design/#role","title":"Role","text":"

If a pointcloud is detected in a detection area defined on the map, stop planning is executed at the predetermined point.

"},{"location":"planning/behavior_velocity_planner/detection-area-design/#activation-timing","title":"Activation Timing","text":"

This module is activated when there is a detection area on the target lane.

"},{"location":"planning/behavior_velocity_planner/detection-area-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description use_dead_line bool [-] weather to use dead line or not use_pass_judge_line bool [-] weather to use pass judge line or not state_clear_time double [s] when the vehicle is stopping for certain time without incoming obstacle, move to STOPPED state stop_margin double [m] a margin that the vehicle tries to stop before stop_line dead_line_margin double [m] ignore threshold that vehicle behind is collide with ego vehicle or not hold_stop_margin_distance double [m] parameter for restart prevention (See Algorithm section)"},{"location":"planning/behavior_velocity_planner/detection-area-design/#inner-workings-algorithm","title":"Inner-workings / Algorithm","text":"
  1. Gets a detection area and stop line from map information and confirms if there is pointcloud in the detection area
  2. Inserts stop point l[m] in front of the stop line
  3. Inserts a pass judge point to a point where the vehicle can stop with a max deceleration
  4. Sets velocity as zero behind the stop line when the ego-vehicle is in front of the pass judge point
  5. If the ego vehicle has already passed the pass judge point, it does not stop and passes through.
"},{"location":"planning/behavior_velocity_planner/detection-area-design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/detection-area-design/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g. 0.5 meters) to stop once it starts moving due to poor vehicle control performance, it will overrun the stopping position, which should be strictly observed, when it starts moving to approach a nearby stop point (e.g. 0.3 meters away).

This module has the parameter hold_stop_margin_distance to prevent such redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters from the stop point of the module (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped for its stop point and plans to keep stopping at the current position, even if the vehicle stopped due to other factors.
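
The rule itself is simple; a sketch, with the function name and the stopped-state check being illustrative assumptions:

```cpp
// Restart prevention: if the vehicle already stands within
// hold_stop_margin_distance of the module's stop point, keep holding the
// current position instead of creeping forward and overrunning the stop line.
bool should_hold_current_stop(double front_to_stop_line,  // [m] remaining distance
                              bool vehicle_is_stopped,
                              double hold_stop_margin_distance) {
  return vehicle_is_stopped && front_to_stop_line < hold_stop_margin_distance;
}
```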

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_planner/intersection-design/","title":"Intersection design","text":""},{"location":"planning/behavior_velocity_planner/intersection-design/#intersection","title":"Intersection","text":""},{"location":"planning/behavior_velocity_planner/intersection-design/#role","title":"Role","text":"

Judges whether the vehicle can enter an intersection using dynamic object information, and plans a slow-down/stop velocity. This module is designed for rule-based intersection velocity decisions that are easy for developers to design. It generates a proper velocity for intersection scenes.

In addition, external users / modules (e.g. remote operation) can intervene in the STOP/GO decision for the vehicle behavior. The override interface is expected to be used, for example, for remote intervention in emergency situations or for gathering information on operator decisions during development.

"},{"location":"planning/behavior_velocity_planner/intersection-design/#activation-timing","title":"Activation Timing","text":"

This function is activated when the attention lane conflicts with the ego vehicle's lane.

"},{"location":"planning/behavior_velocity_planner/intersection-design/#limitations","title":"Limitations","text":"

This module allows developers to design vehicle velocity in the intersection module using specific rules. This module is affected by object detection and prediction accuracy, e.g. when judging whether a vehicle is stuck in the intersection.

"},{"location":"planning/behavior_velocity_planner/intersection-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/intersection-design/#how-to-select-attention-target-objects","title":"How To Select Attention Target Objects","text":"

Objects that satisfy all of the following conditions are considered as target objects (possible collision objects):

"},{"location":"planning/behavior_velocity_planner/intersection-design/#how-to-define-attention-lanes","title":"How to Define Attention Lanes","text":"

Target objects are limited by lanelets to prevent unexpected behavior. A lane that satisfies the following conditions is defined as an \"Attention Lane\" and used to define the target object.

See the following figures to know how to create an attention area and its rationale.

Note: the case traffic light, turn right only is not currently implemented.

"},{"location":"planning/behavior_velocity_planner/intersection-design/#collision-check-and-crossing-judgement","title":"Collision Check and Crossing Judgement","text":"

The following process is performed for the attention targets to determine whether the ego vehicle can cross the intersection safely. If it is judged that the ego vehicle cannot pass through the intersection with enough margin, it will insert the stopping speed on the stop line of the intersection.

  1. calculate the passing time and the time that the ego vehicle is in the intersection. This time is set as t_s ~ t_e
  2. extract the predicted path of the target object whose confidence is greater than min_predicted_path_confidence.
  3. detect collision between the extracted predicted path and ego's predicted path in the following process.
    1. obtain the passing area of the ego vehicle A_{ego} in t_s ~ t_e.
    2. calculate the passing area of the target object A_{target} at t_s - collision_start_margin_time ~ t_e + collision_end_margin_time for each predicted path (*1).
    3. check if A_{ego} and A_{target} regions are overlapped (has collision).
  4. when a collision is detected, the module inserts a stop velocity in front of the intersection. Note that there is a time margin for the stop release (*2).
  5. If ego is over the pass_judge_line, collision checking is not processed to avoid sudden braking. However if ego velocity is lower than the threshold keep_detection_vel_thr then this module continues collision checking.

(*1) The parameters collision_start_margin_time and collision_end_margin_time can be interpreted as follows:

(*2) If a collision is detected, the state transits to \"stop\" immediately. On the other hand, the collision judgment must be clear for a certain period (default : 2.0[s]) to transit from \"stop\" to \"go\", to prevent chattering of decisions.
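
A sketch of the time-window part of this check: the margins widen the window over which the target's passing area is collected, and the actual judgment also requires the area overlap between A_{ego} and A_{target}; the function shape here is an assumption:

```cpp
#include <algorithm>

// Ego occupies the intersection in [t_s, t_e]; the target's interval is
// widened by the two margin parameters, and a collision is possible only
// when the time intervals (and then the passing areas) overlap.
bool time_windows_overlap(double ego_t_s, double ego_t_e,
                          double obj_t_s, double obj_t_e,
                          double collision_start_margin_time,
                          double collision_end_margin_time) {
  const double obj_start = obj_t_s - collision_start_margin_time;
  const double obj_end = obj_t_e + collision_end_margin_time;
  return std::max(ego_t_s, obj_start) <= std::min(ego_t_e, obj_end);
}
```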

"},{"location":"planning/behavior_velocity_planner/intersection-design/#stop-line-automatic-generation","title":"Stop Line Automatic Generation","text":"

The driving lane is interpolated at certain intervals (default : 20 [cm]), and the line located a margin distance (default : 100cm) in front of the attention lane is defined as the stop line. (The length of the vehicle is also considered, and the stop point is set at the base_link position in front of the stop line.)

"},{"location":"planning/behavior_velocity_planner/intersection-design/#pass-judge-line","title":"Pass Judge Line","text":"

To avoid rapid braking, the ego vehicle does not stop if a deceleration greater than a threshold (default : 0.5[G]) would be needed. To judge this condition, the pass judge line is set a certain distance (default : 0.5 * v_current^2 / a_max) in front of the stop line. To prevent chattering, once the ego vehicle passes this line, the \u201cstop\u201d decision in the intersection is no longer made. To prevent going over the pass judge line before the traffic light stop line, the distance between the stop line and the pass judge line becomes 0 m when there is a stop line between the ego vehicle and the intersection stop line.
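
The pass judge distance follows directly from the constant-deceleration stopping distance; a one-line sketch:

```cpp
// Braking distance at the maximum allowed deceleration (default threshold
// 0.5 G). If ego is closer to the stop line than this, stopping would
// require more than a_max, so ego does not stop.
double pass_judge_line_distance(double v_current, double a_max /* ~0.5 * 9.81 */) {
  return 0.5 * v_current * v_current / a_max;
}
// example: at 10 m/s with a_max = 4.9 m/s^2 the distance is ~10.2 m
```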

"},{"location":"planning/behavior_velocity_planner/intersection-design/#stuck-vehicle-detection","title":"Stuck Vehicle Detection","text":"

If there is any object within a certain distance (default : 5[m]) from the end point of the intersection lane on the driving lane and the object velocity is less than a threshold (default 3.0[km/h]), the object is regarded as a stuck vehicle. If a stuck vehicle exists, the ego vehicle cannot enter the intersection.

As a related case, if the object in front of the ego vehicle is turning in the same direction, this module predicts the stopping point that the object will reach at a certain deceleration (default: -1.0[m/s^2]). If the predicted position is in the stuck vehicle detection area AND the position one vehicle length [m] behind the predicted position is in the detection area, the ego vehicle will also stop.

"},{"location":"planning/behavior_velocity_planner/intersection-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description intersection/state_transit_margin_time double [m] time margin to change state intersection/path_expand_width bool [m] path area to see with expansion intersection/stop_line_margin double [m] margin before stop line intersection/stuck_vehicle_detect_dist double [m] this should be the length between cars when they are stopped. intersection/stuck_vehicle_ignore_dist double [m] obstacle stop max distance(5.0[m]) + stuck vehicle size / 2.0[m]) intersection/stuck_vehicle_vel_thr double [m/s] velocity below 3[km/h] is ignored by default intersection/intersection_velocity double [m/s] velocity to pass intersection. 10[km/h] is by default intersection/intersection_max_accel double [m/s^2] acceleration in intersection intersection/detection_area_margin double [m] range for expanding detection area intersection/detection_area_length double [m] range for lidar detection 200[m] is by default intersection/detection_area_angle_threshold double [rad] threshold of angle difference between the detection object and lane intersection/min_predicted_path_confidence double [-] minimum confidence value of predicted path to use for collision detection merge_from_private_road/stop_duration_sec double [s] duration to stop assumed_front_car_decel: 1.0 double [m/s^2] deceleration of front car used to check if it could stop in the stuck area at the exit keep_detection_vel_threshold double [m/s] the threshold for ego vehicle for keeping detection after passing pass_judge_line"},{"location":"planning/behavior_velocity_planner/intersection-design/#how-to-tune-parameters","title":"How To Tune Parameters","text":""},{"location":"planning/behavior_velocity_planner/intersection-design/#flowchart","title":"Flowchart","text":"

NOTE current state is treated as STOP if is_entry_prohibited = true else GO

"},{"location":"planning/behavior_velocity_planner/intersection-design/#known-limits","title":"Known Limits","text":""},{"location":"planning/behavior_velocity_planner/intersection-design/#how-to-set-lanelet-map-fot-intersection","title":"How to Set Lanelet Map fot Intersection","text":""},{"location":"planning/behavior_velocity_planner/intersection-design/#set-a-turn_direction-tag-fig-1","title":"Set a turn_direction tag (Fig. 1)","text":"

The IntersectionModule will be activated by this tag. If this tag is not set, the ego vehicle does not recognize the lane as an intersection. Even if it is a straight lane, this tag is mandatory if the lane is located within an intersection.

Set a value in turn_direction tag to light up turn signals.

Values of turn_direction must be one of \u201cstraight\u201d (no turn signal), \u201cright\u201d or \u201cleft\u201d. Autoware will light up the respective turn signals 30[m] before entering the specified lane. You may also set the optional tag \u201cturn_signal_distance\u201d to modify the distance at which turn signals start lighting up.

Lanes within intersections must be defined as a single Lanelet. For example, blue lane in Fig.3 cannot be split into 2 Lanelets.

"},{"location":"planning/behavior_velocity_planner/intersection-design/#explicitly-describe-a-stop-position-roadmarking-optional-fig-2","title":"Explicitly describe a stop position [RoadMarking] (Optional) (Fig. 2)","text":"

As a default, the IntersectionModule estimates a stop position from the crossing point of the driving lane and the attention lane. But there are some cases, like Fig.2, in which we would like to set a stop position explicitly. When a stop_line is defined as a RoadMarking item in the intersection lane, it overwrites the stop position. (Not only creating the stop_line, but also defining it as a RoadMarking item is needed.)

"},{"location":"planning/behavior_velocity_planner/intersection-design/#exclusion-setting-of-attention-lanes-rightofway-fig3","title":"Exclusion setting of attention lanes [RightOfWay] (Fig.3)","text":"

By default, IntersectionModule treats all lanes crossing with the registered lane as attention targets (yellow and green lanelets). But in some cases (e.g. when driving lane is priority lane or traffic light is green for the driving lane), we want to ignore some of the yield lanes. By setting RightOfWay of the RegulatoryElement item, we can define lanes to be ignored. Register ignored lanes as \u201cyield\u201d and register the attention lanes and driving lane as \u201cright_of_way\u201d lanelets in RightOfWay RegulatoryElement (For an intersection with traffic lights, we need to create items for each lane in the intersection. Please note that it needs a lot of man-hours.)

"},{"location":"planning/behavior_velocity_planner/merge-from-private-design/","title":"Merge from private design","text":""},{"location":"planning/behavior_velocity_planner/merge-from-private-design/#merge-from-private","title":"Merge From Private","text":""},{"location":"planning/behavior_velocity_planner/merge-from-private-design/#role","title":"Role","text":"

When an ego vehicle enters a public road from a private road (e.g. a parking lot), it needs to stop once and check that it is safe before entering the public road.

This module is activated when there is an intersection at the private area from which the vehicle enters the public road. The stop line is generated both when the goal is in the intersection lane and when the path goes beyond the intersection lane. The basic behavior is the same as the intersection module, but the ego vehicle must stop once at the stop line.

"},{"location":"planning/behavior_velocity_planner/merge-from-private-design/#activation-timing","title":"Activation Timing","text":"

This module is activated when the following conditions are met:

"},{"location":"planning/behavior_velocity_planner/merge-from-private-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description merge_from_private_road/stop_duration_sec double [m] time margin to change state"},{"location":"planning/behavior_velocity_planner/merge-from-private-design/#known-issue","title":"Known Issue","text":"

If the ego vehicle goes over the stop line by a certain distance, it will not transition from the STOP state.

"},{"location":"planning/behavior_velocity_planner/no-stopping-area-design/","title":"No stopping area design","text":""},{"location":"planning/behavior_velocity_planner/no-stopping-area-design/#no-stopping-area","title":"No Stopping Area","text":""},{"location":"planning/behavior_velocity_planner/no-stopping-area-design/#role","title":"Role","text":"

This module plans to avoid stopping in a no_stopping_area.

"},{"location":"planning/behavior_velocity_planner/no-stopping-area-design/#limitation","title":"Limitation","text":"

This module allows developers to design vehicle velocity in the no_stopping_area module using specific rules. Once the ego vehicle passes the pass-through point, it does not insert a stop velocity and does not change the decision from GO. Also, this module only considers dynamic objects in order to avoid unnecessary stops.

"},{"location":"planning/behavior_velocity_planner/no-stopping-area-design/#modelparameter","title":"ModelParameter","text":"Parameter Type Description state_clear_time double [s] time to clear stop state stuck_vehicle_vel_thr double [m/s] vehicles below this velocity are considered as stuck vehicle. stop_margin double [m] margin to stop line at no stopping area dead_line_margin double [m] if ego pass this position GO stop_line_margin double [m] margin to auto-gen stop line at no stopping area detection_area_length double [m] length of searching polygon stuck_vehicle_front_margin double [m] obstacle stop max distance"},{"location":"planning/behavior_velocity_planner/no-stopping-area-design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/","title":"Occlusion spot design","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#occlusion-spot","title":"Occlusion Spot","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#role","title":"Role","text":"

This module plans a safe velocity to slow down before reaching a collision point where a hidden object may dart out from an occlusion spot that the driver cannot see clearly because of obstacles.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#activation-timing","title":"Activation Timing","text":"

This module is activated if launch_occlusion_spot becomes true. Creating a pedestrian-first zone map tag is one of the TODOs.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#limitation-and-todos","title":"Limitation and TODOs","text":"

This module is a prototype implementation for handling occlusion spots. To address excessive deceleration due to perception false positives, the detection method logic is selectable. This point has not been discussed in detail and needs to be improved.

TODOs are written in each Inner-workings / Algorithms (see the description below).

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#logics-working","title":"Logics Working","text":"

There are several types of occlusions, such as \"occlusions generated by parked vehicles\" and \"occlusions caused by obstructions\". In situations such as driving on a road with obstacles, where people frequently jump out, all possible occlusion spots must be taken into account. This module considers all occlusion spots calculated from the occupancy grid, but it is not reasonable to take all of them into account, for example people jumping out from behind a guardrail or from behind a cruising vehicle. Therefore, the detection area is currently limited by using predicted object information.

Note that this decision logic is still under development and needs to be improved.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#detectionarea-polygon","title":"DetectionArea Polygon","text":"

This module computes TTV from the pedestrian velocity and the lateral distance to the occlusion spot. TTC is calculated from the ego velocity, acceleration, and longitudinal distance to the collision point using the motion velocity smoother. To compute quickly, this module only considers occlusion spots whose TTV is less than TTC, and only considers the area within \"max lateral distance\".
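
A sketch of this filter; the straight-line TTC here is a simplification (the module computes it with the motion velocity smoother), and the function shape is an assumption:

```cpp
#include <algorithm>

// An occlusion spot is worth considering only if a hidden pedestrian could
// reach the collision point no later than ego (TTV less than TTC) and the
// spot lies within the maximum lateral distance.
bool is_relevant_occlusion_spot(double lateral_distance, double pedestrian_vel,
                                double longitudinal_distance, double ego_vel,
                                double max_lateral_distance) {
  if (lateral_distance > max_lateral_distance) return false;
  const double ttv = lateral_distance / std::max(pedestrian_vel, 1e-3);
  const double ttc = longitudinal_distance / std::max(ego_vel, 1e-3);
  return ttv < ttc;
}
```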

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#occlusion-spot-occupancy-grid-base","title":"Occlusion Spot Occupancy Grid Base","text":"

This module considers any occlusion spot around the ego path computed from the occupancy grid. Due to the computational cost, the occupancy grid is not high resolution, which makes it noisy, so this module adds object information to the occupancy grid map.

TODO: consider the height of the obstacle point cloud when generating the occupancy grid.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#collision-free-judgement","title":"Collision Free Judgement","text":"

An obstacle that can run out from an occlusion spot should have free space between it and the intersection point with the ego vehicle's path.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#partition-lanelet","title":"Partition Lanelet","text":"

By using the lanelet information tagged \"guard_rail\", \"fence\" and \"wall\", it is possible to remove unwanted occlusion spots.

By using static object information, it is possible to make occupancy grid more accurate.

Creating an occupancy grid dedicated to planning is one of the TODOs.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#possible-collision","title":"Possible Collision","text":"

An obstacle that can run out from an occlusion spot may be blocked by a moving vehicle.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#about-safe-motion","title":"About safe motion","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#the-concept-of-safe-velocity-and-margin","title":"The Concept of Safe Velocity and Margin","text":"

The safe slowdown velocity is calculated from the parameters of the ego emergency braking system below and the time to collision. This calculation is included, but dynamically changing the velocity is not recommended for the planner.

This module defines a safe margin that geometrically considers the ego stopping distance and the collision path point. While ego is cruising from the safe margin to the collision path point, the ego vehicle keeps the same velocity as the occlusion spot safe velocity.

Note: This logic assumes high-precision vehicle speed tracking, and the margin for the deceleration point might not be the best solution; a manual driver override is assumed if a pedestrian actually runs out from the occlusion spot.

TODO: consider one of the best choices

  1. stop in front of occlusion spot
  2. insert 1km/h velocity in front of occlusion spot
  3. slowdown this way
  4. etc.
"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#maximum-slowdown-velocity","title":"Maximum Slowdown Velocity","text":"

The maximum slowdown velocity is calculated from the ego current velocity and acceleration, together with the maximum slowdown jerk and maximum slowdown acceleration parameters below, in order not to slow down too much.
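
A sketch of this bound as a jerk-limited forward integration; the integration step and the sign conventions for jerk and acceleration are assumptions here, not the module's exact formula:

```cpp
#include <algorithm>

// Lowest velocity reachable at a point `distance` ahead when deceleration
// builds up with max_slow_down_jerk and saturates at max_slow_down_accel,
// starting from the current velocity v0 and acceleration a0.
double max_slowdown_velocity(double v0, double a0, double distance,
                             double max_slow_down_jerk,   // < 0 [m/s^3]
                             double max_slow_down_accel)  // < 0 [m/s^2]
{
  const double dt = 0.01;  // forward integration step [s]
  double v = v0, a = a0, s = 0.0;
  while (s < distance && v > 0.0) {
    a = std::max(a + max_slow_down_jerk * dt, max_slow_down_accel);  // ramp decel
    v = std::max(v + a * dt, 0.0);
    s += v * dt;
  }
  return v;  // commanding a velocity below this would exceed the jerk limits
}
```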

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description pedestrian_vel double [m/s] maximum velocity assumed pedestrian coming out from occlusion point. pedestrian_radius double [m] assumed pedestrian radius which fits in occlusion spot. Parameter Type Description use_object_info bool [-] whether to reflect object info to occupancy grid map or not. use_partition_lanelet bool [-] whether to use partition lanelet map data. Parameter /debug Type Description is_show_occlusion bool [-] whether to show occlusion point markers.\u3000 is_show_cv_window bool [-] whether to show open_cv debug window. is_show_processing_time bool [-] whether to show processing time. Parameter /threshold Type Description detection_area_length double [m] the length of path to consider occlusion spot stuck_vehicle_vel double [m/s] velocity below this value is assumed to stop lateral_distance double [m] maximum lateral distance to consider hidden collision Parameter /motion Type Description safety_ratio double [-] safety ratio for jerk and acceleration max_slow_down_jerk double [m/s^3] jerk for safe brake max_slow_down_accel double [m/s^2] deceleration for safe brake non_effective_jerk double [m/s^3] weak jerk for velocity planning. non_effective_acceleration double [m/s^2] weak deceleration for velocity planning. min_allowed_velocity double [m/s] minimum velocity allowed safe_margin double [m] maximum error to stop with emergency braking system. Parameter /detection_area Type Description min_occlusion_spot_size double [m] the length of path to consider occlusion spot slice_length double [m] the distance of divided detection area max_lateral_distance double [m] buffer around the ego path used to build the detection_area area. Parameter /grid Type Description free_space_max double [-] maximum value of a free space cell in the occupancy grid occupied_min double [-] buffer around the ego path used to build the detection_area area."},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#rough-overview-of-the-whole-process","title":"Rough overview of the whole process","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#detail-process-for-predicted-objectnot-updated","title":"Detail process for predicted object(not updated)","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#detail-process-for-occupancy-grid-base","title":"Detail process for Occupancy grid base","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/","title":"Run out design","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#run-out","title":"Run Out","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#role","title":"Role","text":"

run_out is the module that decelerates and stops for dynamic obstacles such as pedestrians and bicycles.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#activation-timing","title":"Activation Timing","text":"

This module is activated if launch_run_out becomes true.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#flow-chart","title":"Flow chart","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#preprocess-path","title":"Preprocess path","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#calculate-the-expected-target-velocity-for-ego-vehicle","title":"Calculate the expected target velocity for ego vehicle","text":"

Calculate the expected target velocity for the ego vehicle path in order to calculate the time to collision with obstacles more precisely. The expected target velocity is calculated with the motion velocity smoother module using the current velocity, the current acceleration, and the velocity limits directed by the map and external API.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#extend-the-path","title":"Extend the path","text":"

The path is extended by the length from base_link to the vehicle front to consider obstacles after the goal.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#trim-path-from-ego-position","title":"Trim path from ego position","text":"

The path is trimmed from the ego position to a certain distance to reduce calculation time. The trimmed distance is specified by the detection_distance parameter.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#preprocess-obstacles","title":"Preprocess obstacles","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#create-data-of-abstracted-dynamic-obstacle","title":"Create data of abstracted dynamic obstacle","text":"

This module can handle multiple types of obstacles by creating an abstracted dynamic-obstacle data layer. Currently, three detection methods (Object, ObjectWithoutPath, Points) are available to create the abstracted obstacle data.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#abstracted-dynamic-obstacle","title":"Abstracted dynamic obstacle","text":"

The abstracted obstacle data holds the following information.

Name Type Description pose geometry_msgs::msg::Pose pose of the obstacle classifications std::vector<autoware_auto_perception_msgs::msg::ObjectClassification> classifications with probability shape autoware_auto_perception_msgs::msg::Shape shape of the obstacle predicted_paths std::vector<DynamicObstacle::PredictedPath> predicted paths with confidence. This data has no time step because the minimum and maximum velocities are used instead. min_velocity_mps float minimum velocity of the obstacle, specified by the dynamic_obstacle.min_vel_kmph parameter max_velocity_mps float maximum velocity of the obstacle, specified by the dynamic_obstacle.max_vel_kmph parameter

Set the maximum/minimum velocity of the object as a parameter, adding enough margin to the expected velocity. These parameters are used to create polygons for collision detection.

Future work: determine the maximum/minimum velocity from the object's estimated velocity and its covariance
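As a rough illustration of how these velocity parameters feed into the collision polygons, the following is a minimal sketch (assumed names and units, not the module's actual API) of the interval an obstacle may occupy along its predicted path after the ego vehicle's travel time:

```cpp
#include <cstdio>

struct ReachableRange { double min_dist_m; double max_dist_m; };

// Interval along the obstacle's predicted path that it may occupy after the
// ego's travel time to the collision-check point.
ReachableRange reachableRange(double min_vel_mps, double max_vel_mps, double travel_time_s)
{
  return {min_vel_mps * travel_time_s, max_vel_mps * travel_time_s};
}

int main()
{
  // e.g. dynamic_obstacle.min_vel_kmph = 0.0 and max_vel_kmph = 5.0, converted to m/s
  const ReachableRange r = reachableRange(0.0, 5.0 / 3.6, 2.0 /* s */);
  std::printf("obstacle may be %.2f to %.2f m along its path\n", r.min_dist_m, r.max_dist_m);
}
```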

"},{"location":"planning/behavior_velocity_planner/run-out-design/#3-types-of-detection-method","title":"3 types of detection method","text":"

We have three detection methods to meet different safety and availability requirements; their characteristics are shown in the table below. The Object method has high availability (fewer false positives) because it detects only objects whose predicted path is on the lane. However, it is not always safe because perception may fail to detect obstacles or may generate incorrect predicted paths. On the other hand, the Points method has high safety (fewer false negatives) because it uses a pointcloud as input. Since points have no predicted path, the path moving in the direction normal to the ego vehicle's path is taken as the predicted path of the abstracted dynamic-obstacle data. However, without proper tuning of the point filter, many points may be detected, resulting in very low availability. The ObjectWithoutPath method has characteristics intermediate between Object and Points.

Method Description Object uses an object with its predicted path for collision detection. ObjectWithoutPath uses an object but not its predicted path for collision detection; the path is replaced assuming that the object jumps out into the lane at the specified velocity. Points uses filtered points for collision detection; the path is created assuming that the points jump out into the lane. Points are regarded as small circular obstacles.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#exclude-obstacles-outside-of-partition","title":"Exclude obstacles outside of partition","text":"

This module can exclude obstacles behind partitions such as guardrails, fences, and walls. A lanelet map containing partition information is required to use this feature. It reduces unnecessary deceleration caused by obstacles that are unlikely to jump out into the lane. You can choose whether to use this feature with the use_partition_lanelet parameter.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#collision-detection","title":"Collision detection","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#detect-collision-with-dynamic-obstacles","title":"Detect collision with dynamic obstacles","text":"

Along the ego vehicle path, the points where collision detection is to be performed are placed at intervals of detection_span.

The travel time to each point is calculated from the expected target velocity.

For each point, collision detection is performed using the footprint polygon of the ego vehicle and the polygon of the predicted obstacle location. The predicted obstacle location is described as a rectangle or polygon whose range is calculated from the minimum velocity, the maximum velocity, and the ego vehicle's travel time to the point. If the input type of the dynamic obstacle is Points, the obstacle shape is defined as a small cylinder.

Since collision detection is calculated between two polygons, multiple points are detected as collision points. Therefore, the point that is on the same side as the obstacle and closest to the ego vehicle is selected as the collision point.
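A minimal sketch of the collision-point selection described above, assuming simple 2D point types (the real module operates on geometry polygons, and the "same side as the obstacle" filtering is assumed to have been applied already):

```cpp
#include <algorithm>
#include <vector>

struct Point2d { double x; double y; };

// Pick the intersection point closest to the ego vehicle (assumes the input
// is non-empty and already filtered to the obstacle's side of the path).
Point2d selectCollisionPoint(
  const std::vector<Point2d> & intersection_points, const Point2d & ego_position)
{
  const auto sq_dist = [&](const Point2d & p) {
    const double dx = p.x - ego_position.x;
    const double dy = p.y - ego_position.y;
    return dx * dx + dy * dy;
  };
  return *std::min_element(
    intersection_points.begin(), intersection_points.end(),
    [&](const Point2d & a, const Point2d & b) { return sq_dist(a) < sq_dist(b); });
}
```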

"},{"location":"planning/behavior_velocity_planner/run-out-design/#insert-velocity","title":"Insert velocity","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#insert-velocity-to-decelerate-for-obstacles","title":"Insert velocity to decelerate for obstacles","text":"

If a collision is detected, a stop point is inserted at the distance of base_link-to-front + stop_margin before the selected collision point. Base_link-to-front means the distance between the base_link (center of the rear-wheel axle) and the front of the car. The stop margin is determined by the stop_margin parameter.
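A minimal sketch of this stop-point placement, under the assumption that positions are expressed as arc lengths along the path (names are illustrative):

```cpp
#include <algorithm>

// Stop-point arc length: step back from the collision point by
// base_link-to-front + stop_margin, clamped at the path start.
double stopPointArcLength(
  double collision_arc_length_m, double base_link_to_front_m, double stop_margin_m)
{
  return std::max(collision_arc_length_m - (base_link_to_front_m + stop_margin_m), 0.0);
}
```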

"},{"location":"planning/behavior_velocity_planner/run-out-design/#insert-velocity-to-approach-the-obstacles","title":"Insert velocity to approach the obstacles","text":"

If you select the Points or ObjectWithoutPath method, the ego vehicle sometimes keeps stopping in front of the obstacle. To avoid this problem, this feature has an option to approach the obstacle at slow velocity after stopping. If the approaching.enable parameter is set to true, the ego vehicle approaches the obstacle after it has stopped for state.stop_time_thresh seconds. The maximum approaching velocity can be specified with the approaching.limit_vel_kmph parameter. The decision to approach the obstacle is made by a simple state transition, as shown in the following image.
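A minimal sketch of this state transition, with assumed state names and omitting conditions such as disable_approach_dist and keep_approach_duration:

```cpp
enum class State { GO, STOP, APPROACH };

// GO -> STOP when the ego is slower than stop_thresh;
// STOP -> APPROACH after being stopped for stop_time_thresh seconds.
State transit(
  State current, double ego_vel_mps, double stopped_duration_s, double stop_thresh_mps,
  double stop_time_thresh_s)
{
  switch (current) {
    case State::GO:
      return ego_vel_mps < stop_thresh_mps ? State::STOP : State::GO;
    case State::STOP:
      return stopped_duration_s >= stop_time_thresh_s ? State::APPROACH : State::STOP;
    case State::APPROACH:
      return State::APPROACH;  // held until the obstacle is cleared (omitted here)
  }
  return current;
}
```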

"},{"location":"planning/behavior_velocity_planner/run-out-design/#limit-velocity-with-specified-jerk-and-acc-limit","title":"Limit velocity with specified jerk and acc limit","text":"

The maximum slowdown velocity is calculated so that the vehicle does not slow down too much. See the Occlusion Spot document for more details. You can choose whether to use this feature with the slow_down_limit.enable parameter.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description detection_method string [-] candidate: Object, ObjectWithoutPath, Points use_partition_lanelet bool [-] whether to use partition lanelet map data specify_decel_jerk bool [-] whether to specify jerk when ego decelerates stop_margin double [m] the vehicle decelerates to be able to stop with this margin passing_margin double [m] the vehicle begins to accelerate if the vehicle's front in predicted position is ahead of the obstacle + this margin deceleration_jerk double [m/s^3] ego decelerates with this jerk when stopping for obstacles detection_distance double [m] ahead distance from ego to detect the obstacles detection_span double [m] calculate collision with this span to reduce calculation time min_vel_ego_kmph double [km/h] min velocity to calculate time to collision Parameter /detection_area Type Description margin_ahead double [m] ahead margin for detection area polygon margin_behind double [m] behind margin for detection area polygon Parameter /dynamic_obstacle Type Description min_vel_kmph double [km/h] minimum velocity for dynamic obstacles max_vel_kmph double [km/h] maximum velocity for dynamic obstacles diameter double [m] diameter of obstacles. used for creating dynamic obstacles from points height double [m] height of obstacles. used for creating dynamic obstacles from points max_prediction_time double [sec] create predicted path until this time time_step double [sec] time step for each path step. used for creating dynamic obstacles from points or objects without path points_interval double [m] divide obstacle points into groups with this interval, and detect only lateral nearest point. used only for Points method Parameter /approaching Type Description enable bool [-] whether to enable approaching after stopping margin double [m] distance on how close ego approaches the obstacle limit_vel_kmph double [km/h] limit velocity for approaching after stopping Parameter /state Type Description stop_thresh double [m/s] threshold to decide if ego is stopping stop_time_thresh double [sec] threshold for stopping time to transit to approaching state disable_approach_dist double [m] end the approaching state if distance to the obstacle is longer than this value keep_approach_duration double [sec] keep approach state for this duration to avoid chattering of state transition Parameter /slow_down_limit Type Description enable bool [-] whether to enable to limit velocity with max jerk and acc max_jerk double [m/s^3] minimum jerk deceleration for safe brake. max_acc double [m/s^2] minimum accel deceleration for safe brake."},{"location":"planning/behavior_velocity_planner/run-out-design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/behavior_velocity_planner/speed-bump-design/","title":"Speed bump design","text":""},{"location":"planning/behavior_velocity_planner/speed-bump-design/#speed-bump","title":"Speed Bump","text":""},{"location":"planning/behavior_velocity_planner/speed-bump-design/#role","title":"Role","text":"

This module plans the velocity of the related part of the path when a speed bump regulatory element refers to it.

"},{"location":"planning/behavior_velocity_planner/speed-bump-design/#activation-timing","title":"Activation Timing","text":"

The manager launches the speed bump scene module when a speed bump regulatory element refers to the reference path.

"},{"location":"planning/behavior_velocity_planner/speed-bump-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description slow_start_margin double [m] margin for ego vehicle to slow down before speed_bump slow_end_margin double [m] margin for ego vehicle to accelerate after speed_bump print_debug_info bool whether debug info will be printed or not"},{"location":"planning/behavior_velocity_planner/speed-bump-design/#speed-calculation","title":"Speed Calculation","text":" Parameter Type Description min_height double [m] minimum height assumption of the speed bump max_height double [m] maximum height assumption of the speed bump min_speed double [m/s] minimum speed assumption of slow down speed max_speed double [m/s] maximum speed assumption of slow down speed"},{"location":"planning/behavior_velocity_planner/speed-bump-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Note: if the slow_down_speed tag is used in the speed bump annotation, the speed calculated with respect to the speed bump height is ignored. In that case, the specified slow_down_speed value in [kph] is used.
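The parameters above suggest a mapping from bump height to slow-down speed; a minimal sketch assuming a linear interpolation between the (height, speed) corner points (the module's actual formula may differ):

```cpp
#include <algorithm>

// Linear map: min_height -> max_speed, max_height -> min_speed.
double slowDownSpeed(
  double bump_height_m, double min_height, double max_height, double min_speed, double max_speed)
{
  const double clamped = std::clamp(bump_height_m, min_height, max_height);
  const double ratio = (clamped - min_height) / (max_height - min_height);  // assumes max > min
  return max_speed + ratio * (min_speed - max_speed);  // taller bump -> lower speed
}
```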

"},{"location":"planning/behavior_velocity_planner/speed-bump-design/#future-work","title":"Future Work","text":""},{"location":"planning/behavior_velocity_planner/stop-line-design/","title":"Stop line design","text":""},{"location":"planning/behavior_velocity_planner/stop-line-design/#stop-line","title":"Stop Line","text":""},{"location":"planning/behavior_velocity_planner/stop-line-design/#role","title":"Role","text":"

This module plans the velocity so that the vehicle can stop right before stop lines and restart driving after stopping.

"},{"location":"planning/behavior_velocity_planner/stop-line-design/#activation-timing","title":"Activation Timing","text":"

This module is activated when there is a stop line in a target lane.

"},{"location":"planning/behavior_velocity_planner/stop-line-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_margin double a margin that the vehicle tries to stop before stop_line stop_check_dist double when the vehicle is within stop_check_dist from stop_line and stopped, move to STOPPED state hold_stop_margin_distance double [m] parameter for restart prevention (See Algorithm section)"},{"location":"planning/behavior_velocity_planner/stop-line-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/stop-line-design/#flowchart","title":"Flowchart","text":"

This algorithm is based on segments. A segment consists of two node points. This is useful for removing boundary conditions, because if segment(i) exists, we can assume that node(i) and node(i+1) also exist.

First, the algorithm finds the collision between the reference path and the stop line, which gives the collision segment and the collision point.

Next, based on the collision point, it finds the offset segment by iterating backward over the points up to a specific offset length. The offset length is stop_margin (parameter) + base_link-to-front (to align the vehicle head with the stop line). This gives the offset segment and the offset from the segment start.

After that, the offset point is calculated from the offset segment and the offset. This point becomes stop_pose.
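A minimal sketch of the backward walk over segments described above, assuming the path is an ordered list of node points (names are illustrative):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Node { double x; double y; };

// Walk backward from the collision segment, consuming path length until the
// offset (stop_margin + base_link-to-front) is exhausted, then interpolate.
Node stopPose(
  const std::vector<Node> & path, std::size_t collision_seg_idx, double offset_from_collision)
{
  double remaining = offset_from_collision;
  for (std::size_t i = collision_seg_idx; i > 0; --i) {
    const Node & a = path[i - 1];
    const Node & b = path[i];
    const double len = std::hypot(b.x - a.x, b.y - a.y);
    if (remaining <= len) {  // stop pose lies on this segment: interpolate from b toward a
      const double t = remaining / len;
      return {b.x + t * (a.x - b.x), b.y + t * (a.y - b.y)};
    }
    remaining -= len;
  }
  return path.front();  // the offset reaches beyond the path start
}
```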

"},{"location":"planning/behavior_velocity_planner/stop-line-design/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g. 0.5 meters) to stop again once it starts moving due to poor vehicle control performance, it will overrun the stopping position, which must be strictly observed, when it starts moving to approach a nearby stop point (e.g. 0.3 meters away).

This module has the hold_stop_margin_distance parameter to prevent such redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters of the module's stop point (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped for its stop point and plans to keep the vehicle stopped at the current position, even if it stopped due to other factors.
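A minimal sketch of this judgment, with assumed names:

```cpp
// Hold the current stop position when the ego front is already within
// hold_stop_margin_distance of the module's stop point.
bool shouldHoldCurrentStopPosition(
  bool is_vehicle_stopped, double front_to_stop_line_m, double hold_stop_margin_distance_m)
{
  return is_vehicle_stopped && front_to_stop_line_m < hold_stop_margin_distance_m;
}
```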

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_planner/traffic-light-design/","title":"Traffic light design","text":""},{"location":"planning/behavior_velocity_planner/traffic-light-design/#traffic-light","title":"Traffic Light","text":""},{"location":"planning/behavior_velocity_planner/traffic-light-design/#role","title":"Role","text":"

This module judges whether the vehicle can enter an intersection based on internal and external traffic light status, and plans the stop velocity if necessary. It is designed for rule-based velocity decisions whose behavior is easy for developers to design, and generates the proper velocity for traffic light scenes.

In addition, the STOP/GO interface of behavior_velocity_planner allows external users / modules (e.g. remote operation) to intervene in the decision of the internal perception. This function is expected to be used, for example, for remote intervention when detection fails, or for gathering information on operator decisions during development.

"},{"location":"planning/behavior_velocity_planner/traffic-light-design/#limitations","title":"Limitations","text":"

This module allows developers to design STOP/GO decisions in the traffic light module using specific rules. Due to the nature of rule-based planning, the algorithm greatly depends on the accuracy of object detection and traffic light perception. Also, this module only handles STOP/GO at traffic light scenes, so quicker decisions that adapt to traffic conditions are future work.

"},{"location":"planning/behavior_velocity_planner/traffic-light-design/#activation-timing","title":"Activation Timing","text":"

This module is activated when there is a traffic light in the ego lane.

"},{"location":"planning/behavior_velocity_planner/traffic-light-design/#algorithm","title":"Algorithm","text":"
  1. Obtains the traffic light mapped to the route and the stop line corresponding to that traffic light from the map information.

  2. Uses the traffic light recognition result with the highest reliability, and generates a stop point if its color is red.

  3. When the current vehicle velocity is

    • higher than 2.0 m/s => pass judge (using the stopping-distance formula; see the sketch after this list)
    • lower than 2.0 m/s => stop
  4. When it is judged that the vehicle cannot stop before the stop line, Autoware chooses one of the following behaviors

    • "can pass through" the stop line during the yellow lamp => pass
    • "can't pass through" the stop line during the yellow lamp => emergency stop
"},{"location":"planning/behavior_velocity_planner/traffic-light-design/#dilemma-zone","title":"Dilemma Zone","text":" "},{"location":"planning/behavior_velocity_planner/traffic-light-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_margin double [m] margin before stop point tl_state_timeout double [s] time out for detected traffic light result. external_tl_state_timeout double [s] time out for external traffic input yellow_lamp_period double [s] time for yellow lamp enable_pass_judge bool [-] whether to use pass judge"},{"location":"planning/behavior_velocity_planner/traffic-light-design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/traffic-light-design/#known-limits","title":"Known Limits","text":""},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/","title":"Virtual traffic light design","text":""},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#virtual-traffic-light","title":"Virtual Traffic Light","text":""},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#role","title":"Role","text":"

Autonomous vehicles have to cooperate with infrastructure such as:

The following items are example cases:

  1. Traffic control by traffic lights with V2X support

  2. Intersection coordination of multiple vehicles by FMS.

It is possible to implement each function individually; however, the use cases can be generalized with the following three elements.

  1. start: Start a cooperation procedure after the vehicle enters a certain zone.
  2. stop: Stop at a defined stop line according to the status received from infrastructures.
  3. end: Finalize the cooperation procedure after the vehicle reaches the exit zone. This should be done within the range of stable communication.

This module sends/receives status to/from infrastructure and plans the velocity according to the cooperation result.

"},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#system-configuration-diagram","title":"System Configuration Diagram","text":"

Planner and each infrastructure communicate with each other using common abstracted messages.

FMS: Intersection coordination when multiple vehicles are in operation and the relevant lane is occupied

Support different communication methods for different infrastructures

Have different meta-information for each geographic location

FMS: Fleet Management System

"},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description max_delay_sec double [s] maximum allowed delay for command near_line_distance double [m] threshold distance to stop line to check ego stop. dead_line_margin double [m] threshold distance that this module continue to insert stop line. hold_stop_margin_distance double [m] parameter for restart prevention (See following section) check_timeout_after_stop_line bool [-] check timeout to stop when linkage is disconnected"},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g. 0.5 meters) to stop again once it starts moving due to poor vehicle control performance, it will overrun the stopping position, which must be strictly observed, when it starts moving to approach a nearby stop point (e.g. 0.3 meters away).

This module has the hold_stop_margin_distance parameter to prevent such redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters of the module's stop point (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped for its stop point and plans to keep the vehicle stopped at the current position, even if it stopped due to other factors.

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#map-format","title":"Map Format","text":" \\begin{align} l_{\\mathrm{min}} = -\\frac{v_0^2}{2 a_{\\mathrm{min}}} \\end{align}"},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#known-limits","title":"Known Limits","text":""},{"location":"planning/costmap_generator/","title":"costmap_generator","text":""},{"location":"planning/costmap_generator/#costmap_generator_node","title":"costmap_generator_node","text":"

This node reads PointCloud and/or DynamicObjectArray and creates an OccupancyGrid and GridMap. VectorMap (Lanelet2) is optional.

"},{"location":"planning/costmap_generator/#input-topics","title":"Input topics","text":"Name Type Description ~input/objects autoware_auto_perception_msgs::PredictedObjects predicted objects, for obstacles areas ~input/points_no_ground sensor_msgs::PointCloud2 ground-removed points, for obstacle areas which can't be detected as objects ~input/vector_map autoware_auto_mapping_msgs::HADMapBin vector map, for drivable areas ~input/scenario tier4_planning_msgs::Scenario scenarios to be activated, for node activation"},{"location":"planning/costmap_generator/#output-topics","title":"Output topics","text":"Name Type Description ~output/grid_map grid_map_msgs::GridMap costmap as GridMap, values are from 0.0 to 1.0 ~output/occupancy_grid nav_msgs::OccupancyGrid costmap as OccupancyGrid, values are from 0 to 100"},{"location":"planning/costmap_generator/#output-tfs","title":"Output TFs","text":"

None

"},{"location":"planning/costmap_generator/#how-to-launch","title":"How to launch","text":"
  1. Write your remapping info in costmap_generator.launch or add args when executing roslaunch
  2. Run roslaunch costmap_generator costmap_generator.launch
"},{"location":"planning/costmap_generator/#parameters","title":"Parameters","text":"Name Type Description update_rate double timer's update rate activate_by_scenario bool if true, activate by scenario = parking. Otherwise, activate if vehicle is inside parking lot. use_objects bool whether using ~input/objects or not use_points bool whether using ~input/points_no_ground or not use_wayarea bool whether using wayarea from ~input/vector_map or not use_parkinglot bool whether using parkinglot from ~input/vector_map or not costmap_frame string created costmap's coordinate vehicle_frame string vehicle's coordinate map_frame string map's coordinate grid_min_value double minimum cost for gridmap grid_max_value double maximum cost for gridmap grid_resolution double resolution for gridmap grid_length_x int size of gridmap for x direction grid_length_y int size of gridmap for y direction grid_position_x int offset from coordinate in x direction grid_position_y int offset from coordinate in y direction maximum_lidar_height_thres double maximum height threshold for pointcloud data minimum_lidar_height_thres double minimum height threshold for pointcloud data expand_rectangle_size double expand object's rectangle with this value size_of_expansion_kernel int kernel size for blurring effect on object's costmap"},{"location":"planning/costmap_generator/#flowchart","title":"Flowchart","text":""},{"location":"planning/external_velocity_limit_selector/","title":"External Velocity Limit Selector","text":""},{"location":"planning/external_velocity_limit_selector/#purpose","title":"Purpose","text":"

The external_velocity_limit_selector_node is a node that keeps the external velocity limits consistent. This module subscribes to

  1. velocity limit command sent by API,
  2. velocity limit command sent by Autoware internal modules.

VelocityLimit.msg contains not only the max velocity but also information about the acceleration/jerk constraints for deceleration. The external_velocity_limit_selector_node combines the lowest velocity limit with the strictest jerk constraint to compute the hardest velocity limit, so that every deceleration point and max velocity sent by the API and Autoware internal modules is respected.
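A minimal sketch of the "hardest limit" selection for the max velocity, with assumed field handling (the acceleration/jerk constraints are merged analogously and omitted here):

```cpp
#include <algorithm>
#include <vector>

// The lowest requested max velocity wins; constraints are merged analogously.
double selectHardestMaxVelocity(
  const std::vector<double> & requested_max_velocities_mps, double default_max_velocity_mps)
{
  double hardest = default_max_velocity_mps;
  for (const double v : requested_max_velocities_mps) {
    hardest = std::min(hardest, v);
  }
  return hardest;
}
```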

"},{"location":"planning/external_velocity_limit_selector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

WIP

"},{"location":"planning/external_velocity_limit_selector/#inputs","title":"Inputs","text":"Name Type Description ~input/velocity_limit_from_api tier4_planning_msgs::VelocityLimit velocity limit from api ~input/velocity_limit_from_internal tier4_planning_msgs::VelocityLimit velocity limit from autoware internal modules ~input/velocity_limit_clear_command_from_internal tier4_planning_msgs::VelocityLimitClearCommand velocity limit clear command"},{"location":"planning/external_velocity_limit_selector/#outputs","title":"Outputs","text":"Name Type Description ~output/max_velocity tier4_planning_msgs::VelocityLimit current information of the hardest velocity limit"},{"location":"planning/external_velocity_limit_selector/#parameters","title":"Parameters","text":"Parameter Type Description max_velocity double default max velocity [m/s] normal.min_acc double minimum acceleration [m/ss] normal.max_acc double maximum acceleration [m/ss] normal.min_jerk double minimum jerk [m/sss] normal.max_jerk double maximum jerk [m/sss] limit.min_acc double minimum acceleration to be observed [m/ss] limit.max_acc double maximum acceleration to be observed [m/ss] limit.min_jerk double minimum jerk to be observed [m/sss] limit.max_jerk double maximum jerk to be observed [m/sss]"},{"location":"planning/external_velocity_limit_selector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/external_velocity_limit_selector/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/external_velocity_limit_selector/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/external_velocity_limit_selector/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"planning/external_velocity_limit_selector/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/freespace_planner/","title":"The freespace_planner","text":""},{"location":"planning/freespace_planner/#freespace_planner_node","title":"freespace_planner_node","text":"

freespace_planner_node is a global path planner node that plans a trajectory in a space containing static/dynamic obstacles. This node is currently based on the Hybrid A* search algorithm in the freespace_planning_algorithms package. Other algorithms, such as RRT*, will also be added and made selectable in the future.

Note: due to the constraints of trajectory following, the output trajectory is split so that it includes only a single-direction path. In other words, the output trajectory doesn't include both forward and backward paths at once.
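A minimal sketch of splitting at direction changes, assuming the trajectory is reduced to per-point signed longitudinal velocities (the real node works on autoware_auto_planning_msgs::Trajectory):

```cpp
#include <vector>

// Split a velocity sequence into single-direction segments at sign flips.
std::vector<std::vector<double>> splitBySign(const std::vector<double> & velocities)
{
  std::vector<std::vector<double>> segments;
  for (const double v : velocities) {
    const bool direction_changed = !segments.empty() && v * segments.back().back() < 0.0;
    if (segments.empty() || direction_changed) {
      segments.emplace_back();
    }
    segments.back().push_back(v);
  }
  return segments;
}
```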

"},{"location":"planning/freespace_planner/#input-topics","title":"Input topics","text":"Name Type Description ~input/route autoware_auto_planning_msgs::Route route and goal pose ~input/occupancy_grid nav_msgs::OccupancyGrid costmap, for drivable areas ~input/odometry nav_msgs::Odometry vehicle velocity, for checking whether vehicle is stopped ~input/scenario tier4_planning_msgs::Scenario scenarios to be activated, for node activation"},{"location":"planning/freespace_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/trajectory autoware_auto_planning_msgs::Trajectory trajectory to be followed is_completed bool (implemented as rosparam) whether all split trajectory are published"},{"location":"planning/freespace_planner/#output-tfs","title":"Output TFs","text":"

None

"},{"location":"planning/freespace_planner/#how-to-launch","title":"How to launch","text":"
  1. Write your remapping info in freespace_planner.launch or add args when executing roslaunch
  2. roslaunch freespace_planner freespace_planner.launch
"},{"location":"planning/freespace_planner/#parameters","title":"Parameters","text":""},{"location":"planning/freespace_planner/#node-parameters","title":"Node parameters","text":"Parameter Type Description planning_algorithms string algorithms used in the node vehicle_shape_margin_m float collision margin in planning algorithm update_rate double timer's update rate waypoints_velocity double velocity in output trajectory (currently, only constant velocity is supported) th_arrived_distance_m double threshold distance to check if vehicle has arrived at the trajectory's endpoint th_stopped_time_sec double threshold time to check if vehicle is stopped th_stopped_velocity_mps double threshold velocity to check if vehicle is stopped th_course_out_distance_m double threshold distance to check if vehicle is out of course vehicle_shape_margin_m double vehicle margin replan_when_obstacle_found bool whether replanning when obstacle has found on the trajectory replan_when_course_out bool whether replanning when vehicle is out of course"},{"location":"planning/freespace_planner/#planner-common-parameters","title":"Planner common parameters","text":"Parameter Type Description time_limit double time limit of planning minimum_turning_radius double minimum turning radius of robot maximum_turning_radius double maximum turning radius of robot theta_size double the number of angle's discretization lateral_goal_range double goal range of lateral position longitudinal_goal_range double goal range of longitudinal position angle_goal_range double goal range of angle curve_weight double additional cost factor for curve actions reverse_weight double additional cost factor for reverse actions obstacle_threshold double threshold for regarding a certain grid as obstacle"},{"location":"planning/freespace_planner/#a-search-parameters","title":"A* search parameters","text":"Parameter Type Description only_behind_solutions bool whether restricting the solutions to be behind the goal use_back bool whether using backward trajectory distance_heuristic_weight double heuristic weight for estimating node's cost"},{"location":"planning/freespace_planner/#rrt-search-parameters","title":"RRT* search parameters","text":"Parameter Type Description max planning time double maximum planning time [msec] (used only when enable_update is set true) enable_update bool whether update after feasible solution found until max_planning time elapse use_informed_sampling bool Use informed RRT* (of Gammell et al.) neighbor_radius double neighbor radius of RRT* algorithm margin double safety margin ensured in path's collision checking in RRT* algorithm"},{"location":"planning/freespace_planner/#flowchart","title":"Flowchart","text":""},{"location":"planning/freespace_planning_algorithms/","title":"freespace planning algorithms","text":""},{"location":"planning/freespace_planning_algorithms/#role","title":"Role","text":"

This package is for development of path planning algorithms in free space.

"},{"location":"planning/freespace_planning_algorithms/#implemented-algorithms","title":"Implemented algorithms","text":"

Please see rrtstar.md for a note on the implementation of informed RRT*.

NOTE: For RRT*, one can choose whether to keep updating the solution after a feasible one is found. If not, the algorithm is almost the same as vanilla RRT (not exactly, because of the rewiring procedure). If updating is enabled, there is a further option of making the sampling after a feasible solution is found "informed". If this is set to true, the algorithm is equivalent to the informed RRT* of Gammell et al. 2014.

"},{"location":"planning/freespace_planning_algorithms/#algorithm-selection","title":"Algorithm selection","text":"

There is a trade-off between algorithm speed and resulting solution quality. Sorting the algorithms along the spectrum (high-quality solution / slow) -> (low-quality solution / fast) gives A* -> informed RRT* -> RRT. Note that in almost all cases informed RRT* yields better solution quality than RRT* given the same computational time budget, so RRT* is omitted from the comparison.

Some selection criteria would be:

"},{"location":"planning/freespace_planning_algorithms/#guide-to-implement-a-new-algorithm","title":"Guide to implement a new algorithm","text":""},{"location":"planning/freespace_planning_algorithms/#running-the-standalone-tests-and-visualization","title":"Running the standalone tests and visualization","text":"

Build the package with ros-test and run the tests:

colcon build --packages-select freespace_planning_algorithms\ncolcon test --packages-select freespace_planning_algorithms\n

Inside the test, simulation results are stored in /tmp/fpalgos-{algorithm_type}-case{scenario_number} as rosbags. By loading these files with test/debug_plot.py, one can create plots visualizing the path and obstacles, as shown in the figures below. The created figures are saved in /tmp.

"},{"location":"planning/freespace_planning_algorithms/#a-single-curvature-case","title":"A* (single curvature case)","text":""},{"location":"planning/freespace_planning_algorithms/#informed-rrt-with-200-msec-time-budget","title":"informed RRT* with 200 msec time budget","text":""},{"location":"planning/freespace_planning_algorithms/#rrt-without-update-almost-same-as-rrt","title":"RRT* without update (almost same as RRT)","text":"

The black cells, green box, and red box indicate obstacles, the start configuration, and the goal configuration, respectively. The sequence of blue boxes indicates the solution path.

"},{"location":"planning/freespace_planning_algorithms/#license-notice","title":"License notice","text":"

Files src/reeds_shepp.cpp and include/astar_search/reeds_shepp.h are fetched from pyReedsShepp. Note that the implementation in pyReedsShepp is also heavily based on the code in ompl. Both pyReedsShepp and ompl are distributed under the 3-clause BSD license.

"},{"location":"planning/freespace_planning_algorithms/rrtstar/","title":"Rrtstar","text":""},{"location":"planning/freespace_planning_algorithms/rrtstar/#note-on-implementation-of-informed-rrt","title":"Note on implementation of informed RRT*","text":""},{"location":"planning/freespace_planning_algorithms/rrtstar/#preliminary-knowledge-on-informed-rrt","title":"Preliminary knowledge on informed-RRT*","text":"

Let us define f(x) as the minimum cost of the path when the path is constrained to pass through x (so the path is x_{\mathrm{start}} \to x \to x_{\mathrm{goal}}). Also, let us define c_{\mathrm{best}} as the current minimum cost of the feasible paths. Let us define the set $X(f) = \left\{ x \in X \mid f(x) < c_{\mathrm{best}} \right\}$. If we could sample a new point from X(f) instead of X as in vanilla RRT*, the chance that c_{\mathrm{best}} is updated increases, and thus the convergence rate improves.

In most cases, f(x) is unknown, so the standard approach is to approximate f by a heuristic function \hat{f}. A heuristic function is admissible if \forall x \in X, \hat{f}(x) < f(x), which is a sufficient condition for convergence to the optimal path. A good heuristic function \hat{f} has two properties: 1) it is an admissible, tight lower bound of f, and 2) sampling from X(\hat{f}) is easy.

According to Gammell et al. [1], a good heuristic function when the path is always straight is \hat{f}(x) = ||x_{\mathrm{start}} - x|| + ||x - x_{\mathrm{goal}}||. If we don't assume any obstacle information, this heuristic is the tightest. Also, X(\hat{f}) is a hyper-ellipsoid, and hence sampling from it can be done analytically.

"},{"location":"planning/freespace_planning_algorithms/rrtstar/#modification-to-fit-reeds-sheep-path-case","title":"Modification to fit reeds-sheep path case","text":"

In the vehicle case, the state is x = (x_{1}, x_{2}, \theta). Unlike normal informed RRT*, where paths are connected by straight lines, here the vehicle path is connected by a Reeds-Shepp path, so the original algorithm needs some modification. To this end, one might first consider the heuristic function \hat{f}_{\mathrm{RS}}(x) = \mathrm{RS}(x_{\mathrm{start}}, x) + \mathrm{RS}(x, x_{\mathrm{goal}}) < f(x), where \mathrm{RS} computes the Reeds-Shepp distance. Though it is good in the sense of tightness, sampling from X(\hat{f}_{\mathrm{RS}}) is really difficult. Therefore, we use \hat{f}_{\mathrm{euc}} = ||\mathrm{pos}(x_{\mathrm{start}}) - \mathrm{pos}(x)|| + ||\mathrm{pos}(x) - \mathrm{pos}(x_{\mathrm{goal}})||, which is admissible because \forall x \in X, \hat{f}_{\mathrm{euc}}(x) < \hat{f}_{\mathrm{RS}}(x) < f(x). Here, the \mathrm{pos} function returns the position (x_{1}, x_{2}) of the vehicle.

Sampling from X(\hat{f}_{\mathrm{euc}}) is easy because X(\hat{f}_{\mathrm{euc}}) = \mathrm{Ellipse} \times (-\pi, \pi]. Here the \mathrm{Ellipse}'s focal points are x_{\mathrm{start}} and x_{\mathrm{goal}}, and its conjugate diameter is $\sqrt{c_{\mathrm{best}}^{2} - ||\mathrm{pos}(x_{\mathrm{start}}) - \mathrm{pos}(x_{\mathrm{goal}})||^{2}}$ (similar to the normal informed-RRT* ellipsoid). Please note that \theta can be arbitrary because \hat{f}_{\mathrm{euc}} is independent of \theta.
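A minimal sketch of this sampling scheme (standard informed-RRT* ellipse sampling; types and names are illustrative):

```cpp
#include <cmath>
#include <random>

struct State { double x, y, theta; };

// Sample uniformly from the informed ellipse (focal points: start and goal
// positions) and pair it with an arbitrary heading theta.
State sampleInformed(
  double sx, double sy, double gx, double gy, double c_best, std::mt19937 & rng)
{
  const double kPi = std::acos(-1.0);
  const double c_min = std::hypot(gx - sx, gy - sy);                  // focal distance
  const double a = c_best / 2.0;                                      // semi-major axis
  const double b = std::sqrt(c_best * c_best - c_min * c_min) / 2.0;  // semi-minor axis
  std::uniform_real_distribution<double> unit(-1.0, 1.0);
  double ux, uy;
  do {  // rejection-sample a point uniformly from the unit disk
    ux = unit(rng);
    uy = unit(rng);
  } while (ux * ux + uy * uy > 1.0);
  const double phi = std::atan2(gy - sy, gx - sx);  // rotate ellipse onto the focal axis
  const double ex = a * ux, ey = b * uy;
  std::uniform_real_distribution<double> angle(-kPi, kPi);
  return {
    (sx + gx) / 2.0 + ex * std::cos(phi) - ey * std::sin(phi),
    (sy + gy) / 2.0 + ex * std::sin(phi) + ey * std::cos(phi),
    angle(rng)};  // theta is arbitrary since f_euc does not depend on it
}
```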

[1] Gammell et al., \"Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic.\" IROS (2014)

"},{"location":"planning/mission_planner/","title":"Mission Planner","text":""},{"location":"planning/mission_planner/#purpose","title":"Purpose","text":"

Mission Planner calculates a route that navigates from the current ego pose to the goal pose following the given check points. The route is made of a sequence of lanes on a static map. Dynamic objects (e.g. pedestrians and other vehicles) and dynamic map information (e.g. road construction that blocks some lanes) are not considered during route planning. Therefore, the output topic is only published when the goal pose or check points are given, and is latched until a new goal pose or check points are given.

The core implementation does not depend on a map format. In the current Autoware.universe, only the Lanelet2 map format is supported.

"},{"location":"planning/mission_planner/#interfaces","title":"Interfaces","text":""},{"location":"planning/mission_planner/#parameters","title":"Parameters","text":"Name Type Description map_frame string The frame name for map arrival_check_angle_deg double Angle threshold for goal check arrival_check_distance double Distance threshold for goal check arrival_check_duration double Duration threshold for goal check goal_angle_threshold double Max goal pose angle for goal approve enable_correct_goal_pose bool Enabling correction of goal pose according to the closest lanelet orientation"},{"location":"planning/mission_planner/#services","title":"Services","text":"Name Type Description /planning/routing/clear_route autoware_adapi_v1_msgs::srv::ClearRoute route clear request /planning/routing/set_route_points autoware_adapi_v1_msgs::srv::SetRoutePoints route request with pose waypoints /planning/routing/set_route autoware_planning_msgs::srv::SetRoute route request with HAD map format"},{"location":"planning/mission_planner/#subscriptions","title":"Subscriptions","text":"Name Type Description input/vector_map autoware_auto_mapping_msgs/HADMapBin vector map of Lanelet2 input/modified_goal geometry_msgs/PoseStamped goal pose for arrival check"},{"location":"planning/mission_planner/#publications","title":"Publications","text":"Name Type Description /planning/routing/route_state autoware_adapi_v1_msgs::msg::RouteState route state /planning/routing/route autoware_planning_msgs/LaneletRoute route debug/route_marker visualization_msgs::msg::MarkerArray route marker for debug debug/goal_footprint visualization_msgs::msg::MarkerArray goal footprint for debug"},{"location":"planning/mission_planner/#route-section","title":"Route section","text":"

Route section, whose type is autoware_planning_msgs/LaneletSegment, is a "slice" of a road that bundles lane-changeable lanes. Note that the most atomic unit of the route is autoware_auto_mapping_msgs/LaneletPrimitive, which has the unique ID of a lane in a vector map and its type. Therefore, the route message does not contain geometric information about the lane, since we did not want the planning module's message to depend on the map data structure.

The ROS message of a route section contains the following three elements for each route section.

"},{"location":"planning/mission_planner/#goal-validation","title":"Goal Validation","text":"

The mission planner has control mechanisms to validate the given goal pose and create a route. If the angle between the orientation of the goal pose's closest lanelet and the goal pose's yaw is greater than the goal_angle_threshold parameter, the goal is rejected. Another mechanism creates a footprint of the goal pose according to the dimensions of the vehicle and checks whether this footprint is within the lanelets. If the goal footprint exceeds the lanelets, the goal is rejected.
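A minimal sketch of the yaw-angle check (the footprint-in-lanelet check is omitted because it requires lanelet geometry; names are illustrative):

```cpp
#include <cmath>

// Reject the goal if the yaw difference to the closest lanelet exceeds
// goal_angle_threshold (angle normalized to (-pi, pi]).
bool isGoalAngleValid(double goal_yaw_rad, double lanelet_yaw_rad, double goal_angle_threshold_rad)
{
  const double kPi = std::acos(-1.0);
  double diff = goal_yaw_rad - lanelet_yaw_rad;
  while (diff > kPi) diff -= 2.0 * kPi;
  while (diff <= -kPi) diff += 2.0 * kPi;
  return std::fabs(diff) <= goal_angle_threshold_rad;
}
```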

The image below shows sample goal pose validation cases.

"},{"location":"planning/mission_planner/#implementation","title":"Implementation","text":""},{"location":"planning/mission_planner/#mission-planner_1","title":"Mission Planner","text":"

Two callbacks (goal and check points) are the triggers for route planning. The routing graph, which plans the route in Lanelet2, must be created before those callbacks run, and it is created in the vector map callback.

plan route is explained in detail in the following section.

"},{"location":"planning/mission_planner/#route-planner","title":"Route Planner","text":"

plan route is executed with check points including current ego pose and goal pose.

plan path between each check point first calculates the lanes closest to the start and goal poses. Then the Lanelet2 routing graph plans the shortest path between them.

initialize route lanelets initializes the route handler and calculates route_lanelets. route_lanelets, all of which will be registered in the route sections, are the lanelets adjacent to those in the planned path, and are used when planning lane changes. route_lanelets are calculated as follows:

  1. All the neighboring (right and left) lanes of the planned path that are lane-changeable are memorized as route_lanelets.
  2. All the neighboring (right and left) lanes of the planned path that are not lane-changeable are memorized as candidate_lanelets.
  3. If the following and previous lanelets of a candidate_lanelet are route_lanelets, the candidate_lanelet is registered as a route_lanelet
    • This is because even though the candidate_lanelet (an adjacent lane) is not lane-changeable, the vehicle can pass through it without a lane change if the following and previous lanelets of the candidate_lanelet are route_lanelets

get preferred lanelets extracts preferred_primitive from route_lanelets with the route handler.

create route sections extracts primitives from route_lanelets for each route section with the route handler, and creates route sections.

"},{"location":"planning/mission_planner/#limitations","title":"Limitations","text":""},{"location":"planning/motion_velocity_smoother/","title":"Motion Velocity Smoother","text":""},{"location":"planning/motion_velocity_smoother/#purpose","title":"Purpose","text":"

motion_velocity_smoother outputs a desired velocity profile on a reference trajectory. This module plans the velocity profile within the limits on velocity, acceleration, and jerk to realize both maximization of velocity and good ride quality. We call this module motion_velocity_smoother because limiting the acceleration and the jerk amounts to smoothing the velocity profile.

"},{"location":"planning/motion_velocity_smoother/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/motion_velocity_smoother/#flow-chart","title":"Flow chart","text":""},{"location":"planning/motion_velocity_smoother/#extract-trajectory","title":"Extract trajectory","text":"

Around the point on the reference trajectory closest to the center of the vehicle's rear wheel axle, it extracts the reference path from extract_behind_dist behind to extract_ahead_dist ahead.

"},{"location":"planning/motion_velocity_smoother/#apply-external-velocity-limit","title":"Apply external velocity limit","text":"

It applies the velocity limit input from outside of motion_velocity_smoother. Note that the external velocity limit is different from the velocity limit already set on the map and the reference trajectory. The external velocity limit is applied at a position from which the vehicle can reach the limit under the deceleration and jerk constraints set as parameters.

"},{"location":"planning/motion_velocity_smoother/#apply-stop-approaching-velocity","title":"Apply stop approaching velocity","text":"

It applies the velocity limit near the stopping point. This function is used to approach obstacles closely or to improve stopping accuracy.

"},{"location":"planning/motion_velocity_smoother/#apply-lateral-acceleration-limit","title":"Apply lateral acceleration limit","text":"

It applies a velocity limit to decelerate in curves. The velocity limit is calculated from the curvature of the reference trajectory and the maximum lateral acceleration max_lateral_accel, and is set so as not to fall below min_curve_velocity.

Note: a velocity limit that would require deceleration stronger than nominal.jerk is not applied. In other words, even if a sharp curve is planned just in front of the ego vehicle, no hard deceleration is performed.
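A minimal sketch of the curve velocity limit implied by these parameters, assuming the standard relation v = sqrt(max_lateral_accel / |curvature|):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

// v_limit = sqrt(a_lat_max / |curvature|), floored by min_curve_velocity.
double curveVelocityLimit(
  double curvature_1pm, double max_lateral_accel_mps2, double min_curve_velocity_mps)
{
  if (std::fabs(curvature_1pm) < 1e-6) {  // straight: no curve limit
    return std::numeric_limits<double>::infinity();
  }
  const double v = std::sqrt(max_lateral_accel_mps2 / std::fabs(curvature_1pm));
  return std::max(v, min_curve_velocity_mps);
}
```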

"},{"location":"planning/motion_velocity_smoother/#apply-steering-rate-limit","title":"Apply steering rate limit","text":"

It calculates the desired steering angle at each trajectory point and applies the steering rate limit. If steering_angle_rate > max_steering_angle_rate, it decreases the velocity of the trajectory point to an acceptable velocity.

"},{"location":"planning/motion_velocity_smoother/#resample-trajectory","title":"Resample trajectory","text":"

It resamples the points on the reference trajectory at a designated time interval. Note that the trajectory length is kept between min_trajectory_length and max_trajectory_length, and the distance between two points is kept longer than min_trajectory_interval_distance. It samples densely up to the distance traveled within resample_time at the current velocity, then samples sparsely after that. Sampling according to the velocity balances calculation load and accuracy, since the trajectory is sampled finely at low velocity and coarsely at high velocity.

"},{"location":"planning/motion_velocity_smoother/#calculate-initial-state","title":"Calculate initial state","text":"

Calculate initial values for velocity planning. The initial values are calculated according to the situation as shown in the following table.

Situation Initial velocity Initial acceleration First calculation Current velocity 0.0 Engaging engage_velocity engage_acceleration Deviation between the planned velocity and the current velocity Current velocity Previous planned value Normal Previous planned value Previous planned value"},{"location":"planning/motion_velocity_smoother/#smooth-velocity","title":"Smooth velocity","text":"

It plans the velocity. The velocity planning algorithm is chosen from JerkFiltered, L2, and Linf, and is set in the launch file. All of these algorithms use OSQP [1] as the optimization solver.

"},{"location":"planning/motion_velocity_smoother/#jerkfiltered","title":"JerkFiltered","text":"

It minimizes the sum of the negative square of the velocity (to maximize velocity) and the squares of the violations of the velocity limit, the acceleration limit, and the jerk limit.

"},{"location":"planning/motion_velocity_smoother/#l2","title":"L2","text":"

It minimizes the sum of the negative square of the velocity, the square of the pseudo-jerk [2], and the squares of the violations of the velocity limit and the acceleration limit.

"},{"location":"planning/motion_velocity_smoother/#linf","title":"Linf","text":"

It minimizes the sum of the negative square of the velocity, the maximum absolute value of the pseudo-jerk [2], and the squares of the violations of the velocity limit and the acceleration limit.

"},{"location":"planning/motion_velocity_smoother/#post-process","title":"Post process","text":"

It performs post-processing of the planned velocity.

After the optimization, a resampling called post resampling is performed before passing the optimized trajectory to the next node. Since the required path interval from optimization may be different from the one for the next module, post resampling helps to fill this gap. Therefore, in post resampling, it is necessary to check the path specification of the following module to determine the parameters. Note that if the computational load of the optimization algorithm is high and the path interval is sparser than the path specification of the following module in the first resampling, post resampling would resample the trajectory densely. On the other hand, if the computational load of the optimization algorithm is small and the path interval is denser than the path specification of the following module in the first resampling, the path is sparsely resampled according to the specification of the following module.

"},{"location":"planning/motion_velocity_smoother/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/motion_velocity_smoother/#input","title":"Input","text":"Name Type Description ~/input/trajectory autoware_auto_planning_msgs/Trajectory Reference trajectory /planning/scenario_planning/max_velocity std_msgs/Float32 External velocity limit [m/s] /localization/kinematic_state nav_msgs/Odometry Current odometry /tf tf2_msgs/TFMessage TF /tf_static tf2_msgs/TFMessage TF static"},{"location":"planning/motion_velocity_smoother/#output","title":"Output","text":"Name Type Description ~/output/trajectory autoware_auto_planning_msgs/Trajectory Modified trajectory /planning/scenario_planning/current_max_velocity std_msgs/Float32 Current external velocity limit [m/s] ~/closest_velocity std_msgs/Float32 Planned velocity closest to ego base_link (for debug) ~/closest_acceleration std_msgs/Float32 Planned acceleration closest to ego base_link (for debug) ~/closest_jerk std_msgs/Float32 Planned jerk closest to ego base_link (for debug) ~/debug/trajectory_raw autoware_auto_planning_msgs/Trajectory Extracted trajectory (for debug) ~/debug/trajectory_external_velocity_limited autoware_auto_planning_msgs/Trajectory External velocity limited trajectory (for debug) ~/debug/trajectory_lateral_acc_filtered autoware_auto_planning_msgs/Trajectory Lateral acceleration limit filtered trajectory (for debug) ~/debug/trajectory_steering_rate_limited autoware_auto_planning_msgs/Trajectory Steering angle rate limit filtered trajectory (for debug) ~/debug/trajectory_time_resampled autoware_auto_planning_msgs/Trajectory Time resampled trajectory (for debug) ~/distance_to_stopline std_msgs/Float32 Distance to stop line from current ego pose (max 50 m) (for debug) ~/stop_speed_exceeded std_msgs/Bool It publishes true if planned velocity on the point which the maximum velocity is zero is over threshold"},{"location":"planning/motion_velocity_smoother/#parameters","title":"Parameters","text":""},{"location":"planning/motion_velocity_smoother/#constraint-parameters","title":"Constraint parameters","text":"Name Type Description Default value max_velocity double Max velocity limit [m/s] 20.0 max_accel double Max acceleration limit [m/ss] 1.0 min_decel double Min deceleration limit [m/ss] -0.5 stop_decel double Stop deceleration value at a stop point [m/ss] 0.0 max_jerk double Max jerk limit [m/sss] 1.0 min_jerk double Min jerk limit [m/sss] -0.5"},{"location":"planning/motion_velocity_smoother/#external-velocity-limit-parameter","title":"External velocity limit parameter","text":"Name Type Description Default value margin_to_insert_external_velocity_limit double margin distance to insert external velocity limit [m] 0.3"},{"location":"planning/motion_velocity_smoother/#curve-parameters","title":"Curve parameters","text":"Name Type Description Default value max_lateral_accel double Max lateral acceleration limit [m/ss] 0.5 min_curve_velocity double Min velocity at lateral acceleration limit [m/ss] 2.74 decel_distance_before_curve double Distance to slowdown before a curve for lateral acceleration limit [m] 3.5 decel_distance_after_curve double Distance to slowdown after a curve for lateral acceleration limit [m] 2.0 min_decel_for_lateral_acc_lim_filter double Deceleration limit to avoid sudden braking by the lateral acceleration filter [m/ss]. Strong limitation degrades the deceleration response to the appearance of sharp curves due to obstacle avoidance, etc. 
-2.5"},{"location":"planning/motion_velocity_smoother/#engage-replan-parameters","title":"Engage & replan parameters","text":"Name Type Description Default value replan_vel_deviation double Velocity deviation to replan initial velocity [m/s] 5.53 engage_velocity double Engage velocity threshold [m/s] (if the trajectory velocity is higher than this value, use this velocity for engage vehicle speed) 0.25 engage_acceleration double Engage acceleration [m/ss] (use this acceleration when engagement) 0.1 engage_exit_ratio double Exit engage sequence to normal velocity planning when the velocity exceeds engage_exit_ratio x engage_velocity. 0.5 stop_dist_to_prohibit_engage double If the stop point is in this distance, the speed is set to 0 not to move the vehicle [m] 0.5"},{"location":"planning/motion_velocity_smoother/#stopping-velocity-parameters","title":"Stopping velocity parameters","text":"Name Type Description Default value stopping_velocity double change target velocity to this value before v=0 point [m/s] 2.778 stopping_distance double distance for the stopping_velocity [m]. 0 means the stopping velocity is not applied. 0.0"},{"location":"planning/motion_velocity_smoother/#extraction-parameters","title":"Extraction parameters","text":"Name Type Description Default value extract_ahead_dist double Forward trajectory distance used for planning [m] 200.0 extract_behind_dist double backward trajectory distance used for planning [m] 5.0 delta_yaw_threshold double Allowed delta yaw between ego pose and trajectory pose [radian] 1.0472"},{"location":"planning/motion_velocity_smoother/#resampling-parameters","title":"Resampling parameters","text":"Name Type Description Default value max_trajectory_length double Max trajectory length for resampling [m] 200.0 min_trajectory_length double Min trajectory length for resampling [m] 30.0 resample_time double Resample total time [s] 10.0 dense_dt double resample time interval for dense sampling [s] 0.1 dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 sparse_dt double resample time interval for sparse sampling [s] 0.5 sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 4.0"},{"location":"planning/motion_velocity_smoother/#resampling-parameters-for-post-process","title":"Resampling parameters for post process","text":"Name Type Description Default value post_max_trajectory_length double max trajectory length for resampling [m] 300.0 post_min_trajectory_length double min trajectory length for resampling [m] 30.0 post_resample_time double resample total time for dense sampling [s] 10.0 post_dense_dt double resample time interval for dense sampling [s] 0.1 post_dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 post_sparse_dt double resample time interval for sparse sampling [s] 0.1 post_sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 1.0"},{"location":"planning/motion_velocity_smoother/#limit-steering-angle-rate-parameters","title":"Limit steering angle rate parameters","text":"Name Type Description Default value max_steering_angle_rate double Maximum steering angle rate [degree/s] 40.0 resample_ds double Distance between trajectory points [m] 0.1 curvature_threshold double If curvature > curvature_threshold, steeringRateLimit is triggered [1/m] 0.02 curvature_calculation_distance double Distance of points while curvature is calculating [m] 
1.0"},{"location":"planning/motion_velocity_smoother/#weights-for-optimization","title":"Weights for optimization","text":""},{"location":"planning/motion_velocity_smoother/#jerkfiltered_1","title":"JerkFiltered","text":"Name Type Description Default value jerk_weight double Weight for \"smoothness\" cost for jerk 10.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 5000.0 over_j_weight double Weight for \"over jerk limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/#l2_1","title":"L2","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/#linf_1","title":"Linf","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/#others","title":"Others","text":"Name Type Description Default value over_stop_velocity_warn_thr double Threshold to judge that the optimized velocity exceeds the input velocity on the stop point [m/s] 1.389"},{"location":"planning/motion_velocity_smoother/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/motion_velocity_smoother/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/motion_velocity_smoother/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/motion_velocity_smoother/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] B. Stellato, et al., \"OSQP: an operator splitting solver for quadratic programs\", Mathematical Programming Computation, 2020, 10.1007/s12532-020-00179-2.

[2] Y. Zhang, et al., \"Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization\", Sensors, vol. 18, no. 7, p. 2185, 2018, 10.3390/s18072185

"},{"location":"planning/motion_velocity_smoother/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/motion_velocity_smoother/README.ja/","title":"Motion Velocity Smoother","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#purpose","title":"Purpose","text":"

motion_velocity_smoother is a module that plans and outputs the desired vehicle speed at each point on the target trajectory. To achieve both speed maximization and ride comfort, this module plans the vehicle speed within the pre-specified limits on velocity, acceleration, and jerk. Since limiting the acceleration and jerk corresponds to smoothing the change in vehicle speed, this module is called motion_velocity_smoother.

"},{"location":"planning/motion_velocity_smoother/README.ja/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#flow-chart","title":"Flow chart","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#extract-trajectory","title":"Extract trajectory","text":"

The reference path is extracted from the point extract_behind_dist behind the point on the reference path closest to the center of the ego vehicle's rear wheel axle, up to the point extract_ahead_dist ahead of it.
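A minimal sketch of this extraction, assuming a hypothetical Point2d type and a precomputed nearest index (the actual implementation uses Autoware's trajectory utilities):

#include <cmath>
#include <cstddef>
#include <vector>

struct Point2d
{
  double x;
  double y;
};

// Hypothetical sketch: walk backward and forward from the point nearest to the
// ego rear axle center until the accumulated arc length exceeds
// extract_behind_dist / extract_ahead_dist, then return that sub-path.
std::vector<Point2d> extractTrajectory(
  const std::vector<Point2d> & path, std::size_t nearest_idx,
  double extract_behind_dist, double extract_ahead_dist)
{
  const auto dist = [](const Point2d & a, const Point2d & b) {
    return std::hypot(a.x - b.x, a.y - b.y);
  };

  // Accumulate arc length backward until extract_behind_dist is covered.
  std::size_t begin = nearest_idx;
  for (double s = 0.0; begin > 0 && s < extract_behind_dist; --begin) {
    s += dist(path[begin], path[begin - 1]);
  }

  // Accumulate arc length forward until extract_ahead_dist is covered.
  std::size_t end = nearest_idx;
  for (double s = 0.0; end + 1 < path.size() && s < extract_ahead_dist; ++end) {
    s += dist(path[end], path[end + 1]);
  }

  return std::vector<Point2d>(path.begin() + begin, path.begin() + end + 1);
}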

"},{"location":"planning/motion_velocity_smoother/README.ja/#apply-external-velocity-limit","title":"Apply external velocity limit","text":"

A velocity limit specified from outside the module is applied here. The external velocity limit handled here is the one passed via the /planning/scenario_planning/max_velocity topic, and it is separate from the velocity limits already set on the reference path, such as speed limits defined on the map. The externally specified velocity limit is applied from a position where the vehicle can decelerate within the deceleration and jerk limits specified by the parameters.
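For intuition, ignoring the jerk limit, the minimum distance needed to decelerate from the current speed v_0 to the external limit v_{lim} under a deceleration limit a_{min} (< 0) is

d = \frac{v_0^2 - v_{lim}^2}{2 |a_{min}|}

so the external limit can be inserted no earlier than roughly d ahead of the ego (plus margin_to_insert_external_velocity_limit). This constant-deceleration form is a simplification for orientation; the actual module also respects the jerk limit.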

"},{"location":"planning/motion_velocity_smoother/README.ja/#apply-stop-approaching-velocity","title":"Apply stop approaching velocity","text":"

The velocity used when approaching a stop point is set here. This is used, for example, when approaching close to an obstacle or to improve stopping accuracy.

"},{"location":"planning/motion_velocity_smoother/README.ja/#apply-lateral-acceleration-limit","title":"Apply lateral acceleration limit","text":"

According to the curvature of the path, a velocity that does not exceed the specified maximum lateral acceleration max_lateral_accel is set as a velocity limit. However, the velocity limit is set so that it does not fall below min_curve_velocity.
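Since the steady-state lateral acceleration is a_{lat} = v^2 \kappa, this corresponds to the following relation at a point with curvature \kappa_k (a sketch of the relation used, not the exact implementation):

v_k = \max\left( v_{\mathrm{min\_curve}}, \sqrt{\frac{a_{\mathrm{lat,max}}}{|\kappa_k|}} \right)

where a_{lat,max} is max_lateral_accel and v_{min_curve} is min_curve_velocity.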

"},{"location":"planning/motion_velocity_smoother/README.ja/#resample-trajectory","title":"Resample trajectory","text":"

The points on the path are resampled at the specified time intervals. The resampling is performed so that the total length of the path stays between min_trajectory_length and max_trajectory_length, and the interval between points does not become smaller than min_trajectory_interval_distance. Points are sampled densely up to the distance traveled in resample_time at the current vehicle speed, and sparsely beyond that. Sampling in this way makes the points dense at low speed and sparse at high speed, balancing stopping accuracy and computational load reduction.
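A minimal sketch of the dense/sparse interval selection, assuming the parameter names from the resampling table (an illustration, not the exact implementation):

#include <algorithm>
#include <vector>

// Hypothetical sketch: arc lengths at which the trajectory is resampled.
// Dense sampling is used up to the distance traveled in resample_time at the
// current speed; sparse sampling is used beyond it. The configured minimum
// interval distances act as floors on the step size.
std::vector<double> computeSampleArcLengths(
  double ego_vel, double total_length, double resample_time,
  double dense_dt, double dense_min_interval_distance,
  double sparse_dt, double sparse_min_interval_distance)
{
  const double dense_length = ego_vel * resample_time;
  std::vector<double> arc_lengths{0.0};
  while (arc_lengths.back() < total_length) {
    const bool in_dense_region = arc_lengths.back() < dense_length;
    const double ds = in_dense_region
      ? std::max(ego_vel * dense_dt, dense_min_interval_distance)
      : std::max(ego_vel * sparse_dt, sparse_min_interval_distance);
    arc_lengths.push_back(std::min(arc_lengths.back() + ds, total_length));
  }
  return arc_lengths;
}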

"},{"location":"planning/motion_velocity_smoother/README.ja/#calculate-initial-state","title":"Calculate initial state","text":"

The initial values for velocity planning are calculated here. The initial values are computed depending on the situation, as shown in the table below.

Situation Initial velocity Initial acceleration First calculation Current vehicle speed 0.0 At engagement engage_velocity engage_acceleration Deviation between current and planned vehicle speed Current vehicle speed Previous planned value Normal Previous planned value Previous planned value"},{"location":"planning/motion_velocity_smoother/README.ja/#smooth-velocity","title":"Smooth velocity","text":"

The velocity is planned here. The velocity planning algorithm is selected in the config from three types: JerkFiltered, L2, and Linf. OSQP [1] is used as the optimization solver.

"},{"location":"planning/motion_velocity_smoother/README.ja/#jerkfiltered","title":"JerkFiltered","text":"

The sum of the squared velocity (expressed as a negative value so that it becomes a minimization), the squared deviation over the velocity limit, the squared deviation over the acceleration limit, the squared deviation over the jerk limit, and the squared jerk is minimized.
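Schematically, with j_k the jerk and \sigma_{v,k}, \sigma_{a,k}, \sigma_{j,k} the slack variables for the velocity, acceleration, and jerk limits, the JerkFiltered cost has the following form with the weights from the parameter table (a schematic form for orientation, not the exact implementation):

J = \sum_k \left( -v_k^2 + w_{\mathrm{jerk}} j_k^2 + w_{\mathrm{over,v}} \sigma_{v,k}^2 + w_{\mathrm{over,a}} \sigma_{a,k}^2 + w_{\mathrm{over,j}} \sigma_{j,k}^2 \right)

where w_{jerk}, w_{over,v}, w_{over,a}, and w_{over,j} correspond to jerk_weight, over_v_weight, over_a_weight, and over_j_weight respectively.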

"},{"location":"planning/motion_velocity_smoother/README.ja/#l2","title":"L2","text":"

The sum of the squared velocity (expressed as a negative value so that it becomes a minimization), the squared deviation over the velocity limit, the squared deviation over the acceleration limit, and the squared pseudo-jerk [2] is minimized.

"},{"location":"planning/motion_velocity_smoother/README.ja/#linf","title":"Linf","text":"

The sum of the squared velocity (expressed as a negative value so that it becomes a minimization), the squared deviation over the velocity limit, and the squared deviation over the acceleration limit, plus the maximum absolute value of the pseudo-jerk [2], is minimized.

"},{"location":"planning/motion_velocity_smoother/README.ja/#post-process","title":"Post process","text":"

Post-processing of the planned trajectory is performed here.

After the optimization finishes, a resampling called post resampling is performed before passing the trajectory to the next node. Resampling is performed again here because the trajectory interval required before the optimization does not necessarily match the trajectory interval required by the subsequent modules, and this resampling fills the gap between them. Therefore, in post resampling, the parameters must be decided by checking the trajectory specification of the subsequent modules. Note that when the computational load of the optimization algorithm is high and the first resampling makes the trajectory interval sparser than the specification of the subsequent modules, post resampling resamples the trajectory densely. Conversely, when the computational load of the optimization algorithm is low and the first resampling makes the trajectory interval denser than the subsequent specification, post resampling resamples the trajectory sparsely to match that specification.

"},{"location":"planning/motion_velocity_smoother/README.ja/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#input","title":"Input","text":"Name Type Description ~/input/trajectory autoware_auto_planning_msgs/Trajectory Reference trajectory /planning/scenario_planning/max_velocity std_msgs/Float32 External velocity limit [m/s] /localization/kinematic_state nav_msgs/Odometry Current odometry /tf tf2_msgs/TFMessage TF /tf_static tf2_msgs/TFMessage TF static"},{"location":"planning/motion_velocity_smoother/README.ja/#output","title":"Output","text":"Name Type Description ~/output/trajectory autoware_auto_planning_msgs/Trajectory Modified trajectory /planning/scenario_planning/current_max_velocity std_msgs/Float32 Current external velocity limit [m/s] ~/closest_velocity std_msgs/Float32 Planned velocity closest to ego base_link (for debug) ~/closest_acceleration std_msgs/Float32 Planned acceleration closest to ego base_link (for debug) ~/closest_jerk std_msgs/Float32 Planned jerk closest to ego base_link (for debug) ~/debug/trajectory_raw autoware_auto_planning_msgs/Trajectory Extracted trajectory (for debug) ~/debug/trajectory_external_velocity_limited autoware_auto_planning_msgs/Trajectory External velocity limited trajectory (for debug) ~/debug/trajectory_lateral_acc_filtered autoware_auto_planning_msgs/Trajectory Lateral acceleration limit filtered trajectory (for debug) ~/debug/trajectory_time_resampled autoware_auto_planning_msgs/Trajectory Time resampled trajectory (for debug) ~/distance_to_stopline std_msgs/Float32 Distance to stop line from current ego pose (max 50 m) (for debug) ~/stop_speed_exceeded std_msgs/Bool It publishes true if planned velocity on the point which the maximum velocity is zero is over threshold"},{"location":"planning/motion_velocity_smoother/README.ja/#parameters","title":"Parameters","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#constraint-parameters","title":"Constraint parameters","text":"Name Type Description Default value max_velocity double Max velocity limit [m/s] 20.0 max_accel double Max acceleration limit [m/ss] 1.0 min_decel double Min deceleration limit [m/ss] -0.5 stop_decel double Stop deceleration value at a stop point [m/ss] 0.0 max_jerk double Max jerk limit [m/sss] 1.0 min_jerk double Min jerk limit [m/sss] -0.5"},{"location":"planning/motion_velocity_smoother/README.ja/#external-velocity-limit-parameter","title":"External velocity limit parameter","text":"Name Type Description Default value margin_to_insert_external_velocity_limit double margin distance to insert external velocity limit [m] 0.3"},{"location":"planning/motion_velocity_smoother/README.ja/#curve-parameters","title":"Curve parameters","text":"Name Type Description Default value max_lateral_accel double Max lateral acceleration limit [m/ss] 0.5 min_curve_velocity double Min velocity at lateral acceleration limit [m/ss] 2.74 decel_distance_before_curve double Distance to slowdown before a curve for lateral acceleration limit [m] 3.5 decel_distance_after_curve double Distance to slowdown after a curve for lateral acceleration limit [m] 2.0"},{"location":"planning/motion_velocity_smoother/README.ja/#engage-replan-parameters","title":"Engage & replan parameters","text":"Name Type Description Default value replan_vel_deviation double Velocity deviation to replan initial velocity [m/s] 5.53 engage_velocity double Engage velocity threshold [m/s] (if the trajectory velocity is higher than this value, 
use this velocity for engage vehicle speed) 0.25 engage_acceleration double Engage acceleration [m/ss] (use this acceleration when engagement) 0.1 engage_exit_ratio double Exit engage sequence to normal velocity planning when the velocity exceeds engage_exit_ratio x engage_velocity. 0.5 stop_dist_to_prohibit_engage double If the stop point is in this distance, the speed is set to 0 not to move the vehicle [m] 0.5"},{"location":"planning/motion_velocity_smoother/README.ja/#stopping-velocity-parameters","title":"Stopping velocity parameters","text":"Name Type Description Default value stopping_velocity double change target velocity to this value before v=0 point [m/s] 2.778 stopping_distance double distance for the stopping_velocity [m]. 0 means the stopping velocity is not applied. 0.0"},{"location":"planning/motion_velocity_smoother/README.ja/#extraction-parameters","title":"Extraction parameters","text":"Name Type Description Default value extract_ahead_dist double Forward trajectory distance used for planning [m] 200.0 extract_behind_dist double backward trajectory distance used for planning [m] 5.0 delta_yaw_threshold double Allowed delta yaw between ego pose and trajectory pose [radian] 1.0472"},{"location":"planning/motion_velocity_smoother/README.ja/#resampling-parameters","title":"Resampling parameters","text":"Name Type Description Default value max_trajectory_length double Max trajectory length for resampling [m] 200.0 min_trajectory_length double Min trajectory length for resampling [m] 30.0 resample_time double Resample total time [s] 10.0 dense_resample_dt double resample time interval for dense sampling [s] 0.1 dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 sparse_resample_dt double resample time interval for sparse sampling [s] 0.5 sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 4.0"},{"location":"planning/motion_velocity_smoother/README.ja/#resampling-parameters-for-post-process","title":"Resampling parameters for post process","text":"Name Type Description Default value post_max_trajectory_length double max trajectory length for resampling [m] 300.0 post_min_trajectory_length double min trajectory length for resampling [m] 30.0 post_resample_time double resample total time for dense sampling [s] 10.0 post_dense_resample_dt double resample time interval for dense sampling [s] 0.1 post_dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 post_sparse_resample_dt double resample time interval for sparse sampling [s] 0.1 post_sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 1.0"},{"location":"planning/motion_velocity_smoother/README.ja/#weights-for-optimization","title":"Weights for optimization","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#jerkfiltered_1","title":"JerkFiltered","text":"Name Type Description Default value jerk_weight double Weight for \"smoothness\" cost for jerk 10.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 5000.0 over_j_weight double Weight for \"over jerk limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/README.ja/#l2_1","title":"L2","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 
1000.0"},{"location":"planning/motion_velocity_smoother/README.ja/#linf_1","title":"Linf","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/README.ja/#others","title":"Others","text":"Name Type Description Default value over_stop_velocity_warn_thr double Threshold to judge that the optimized velocity exceeds the input velocity on the stop point [m/s] 1.389"},{"location":"planning/motion_velocity_smoother/README.ja/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] B. Stellato, et al., \"OSQP: an operator splitting solver for quadratic programs\", Mathematical Programming Computation, 2020, 10.1007/s12532-020-00179-2.

[2] Y. Zhang, et al., \"Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization\", Sensors, vol. 18, no. 7, p. 2185, 2018, 10.3390/s18072185

"},{"location":"planning/motion_velocity_smoother/README.ja/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/obstacle_avoidance_planner/","title":"Index","text":""},{"location":"planning/obstacle_avoidance_planner/#purpose","title":"Purpose","text":"

This package generates a trajectory that is kinematically feasible to drive and collision-free, based on the input path and drivable area. Only the position and orientation of the trajectory are updated in this module; the velocity is simply taken over from the input path.

"},{"location":"planning/obstacle_avoidance_planner/#feature","title":"Feature","text":"

This package is able to

Note that the velocity is just taken over from the input path.

"},{"location":"planning/obstacle_avoidance_planner/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/obstacle_avoidance_planner/#input","title":"input","text":"Name Type Description ~/input/path autoware_auto_planning_msgs/msg/Path Reference path and the corresponding drivable area ~/input/odometry nav_msgs/msg/Odometry Current Velocity of ego vehicle"},{"location":"planning/obstacle_avoidance_planner/#output","title":"output","text":"Name Type Description ~/output/trajectory autoware_auto_planning_msgs/msg/Trajectory Optimized trajectory that is feasible to drive and collision-free"},{"location":"planning/obstacle_avoidance_planner/#flowchart","title":"Flowchart","text":"

Flowchart of functions is explained here.

"},{"location":"planning/obstacle_avoidance_planner/#createplannerdata","title":"createPlannerData","text":"

The following data for planning is created.

struct PlannerData\n{\n// input\nHeader header;\nstd::vector<TrajectoryPoint> traj_points; // converted from the input path\nstd::vector<geometry_msgs::msg::Point> left_bound;\nstd::vector<geometry_msgs::msg::Point> right_bound;\n\n// ego\ngeometry_msgs::msg::Pose ego_pose;\ndouble ego_vel;\n};\n
"},{"location":"planning/obstacle_avoidance_planner/#check-replan","title":"check replan","text":"

When one of the following conditions is met, trajectory optimization is executed. Otherwise, the previously optimized trajectory is reused, with its velocity updated from the latest input path.

max_path_shape_around_ego_lat_dist

"},{"location":"planning/obstacle_avoidance_planner/#getebtrajectory","title":"getEBTrajectory","text":"

The latter optimization (model predictive trajectory) assumes that the reference path is smooth enough. This function smooths the input path with an elastic band.

More details about elastic band can be seen here.

"},{"location":"planning/obstacle_avoidance_planner/#getmodelpredictivetrajectory","title":"getModelPredictiveTrajectory","text":"

This module makes the trajectory kinematically feasible and collision-free. We define the vehicle pose in the Frenet coordinate frame and minimize tracking errors by optimization. This optimization considers the vehicle kinematics and collision checking with the road boundary and obstacles. To decrease the computation cost, the optimization is applied to a trajectory shorter than the whole trajectory (default: 50 [m]), and the remaining trajectory is concatenated with the optimized one at the end.

The trajectory just in front of the ego must not change significantly so that the steering stays stable. Therefore, we reuse the previously generated trajectory in front of the ego.

The optimization center on the vehicle, which the optimization tries to place exactly on the trajectory, can be tuned along the vehicle's longitudinal axis. This parameter, mpt.kinematics.optimization_center_offset, is defined as the signed length from the rear-wheel center to the optimization center. Some examples are shown in the following figure; the trajectory swept by the vehicle shape differs according to the optimization center even if the reference trajectory (the green one) is the same.

More details can be seen here.

"},{"location":"planning/obstacle_avoidance_planner/#insertzerovelocityoutsidedrivablearea","title":"insertZeroVelocityOutsideDrivableArea","text":"

The optimized trajectory is too short for velocity planning, so the trajectory is extended by concatenating the optimized trajectory and the behavior path, considering drivability. The generated trajectory is checked for being inside the drivable area; if it goes outside, a trajectory inside the drivable area is output using the behavior path or the previously generated trajectory.

As described above, the behavior path is separated into two parts: one for optimization and the remainder. The first part becomes the optimized trajectory, and the second part is simply converted to a trajectory. Then a trajectory inside the drivable area is calculated as follows.

An optimization failure is handled in the same way as an optimized trajectory that is outside the drivable area. The output trajectory is memorized as the previously generated trajectory for the next cycle.

Rationale: In the current design, since there are some modeling errors, the constraints are treated as soft constraints. Therefore, we have to check after optimization whether the optimized trajectory is actually inside the drivable area.

"},{"location":"planning/obstacle_avoidance_planner/#alignvelocity","title":"alignVelocity","text":"

Velocity is assigned to the resulting trajectory from the velocity in the behavior path. Since the shapes of the trajectory and the path differ, the nearest path point to each trajectory point is searched and its velocity is interpolated linearly.
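A minimal sketch of this nearest-segment search and linear interpolation, assuming a hypothetical PathPoint type (the real implementation works on autoware_auto_planning_msgs points):

#include <cstddef>
#include <limits>
#include <vector>

struct PathPoint
{
  double x;
  double y;
  double vel;
};

// Hypothetical sketch: return the behavior-path velocity at the point on the
// path nearest to (px, py), linearly interpolated along the nearest segment.
double interpolateVelocity(const std::vector<PathPoint> & path, double px, double py)
{
  double best_vel = 0.0;
  double best_dist2 = std::numeric_limits<double>::max();
  for (std::size_t i = 0; i + 1 < path.size(); ++i) {
    const double dx = path[i + 1].x - path[i].x;
    const double dy = path[i + 1].y - path[i].y;
    const double len2 = dx * dx + dy * dy;
    // Projection ratio of (px, py) onto segment i, clamped to [0, 1].
    double t = len2 > 0.0 ? ((px - path[i].x) * dx + (py - path[i].y) * dy) / len2 : 0.0;
    t = t < 0.0 ? 0.0 : (t > 1.0 ? 1.0 : t);
    const double cx = path[i].x + t * dx;
    const double cy = path[i].y + t * dy;
    const double d2 = (px - cx) * (px - cx) + (py - cy) * (py - cy);
    if (d2 < best_dist2) {
      best_dist2 = d2;
      best_vel = (1.0 - t) * path[i].vel + t * path[i + 1].vel;
    }
  }
  return best_vel;
}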

"},{"location":"planning/obstacle_avoidance_planner/#limitation","title":"Limitation","text":""},{"location":"planning/obstacle_avoidance_planner/#comparison-to-other-methods","title":"Comparison to other methods","text":"

The trajectory planning problem that satisfies kinematic feasibility and collision-freeness has two main characteristics that make it hard to solve: it is non-convex and high-dimensional. Based on these characteristics, we investigate the pros and cons of the typical planning methods: optimization-based, sampling-based, and learning-based methods.

"},{"location":"planning/obstacle_avoidance_planner/#optimization-based-method","title":"Optimization-based method","text":""},{"location":"planning/obstacle_avoidance_planner/#sampling-based-method","title":"Sampling-based method","text":""},{"location":"planning/obstacle_avoidance_planner/#learning-based-method","title":"Learning-based method","text":"

Based on these pros and cons, we chose the optimization-based planner first. Although it has the disadvantage of converging to a local minimum, it can obtain a good solution through preprocessing that approximates the problem with a convex one nearly equal to the original non-convex problem.

"},{"location":"planning/obstacle_avoidance_planner/#how-to-tune-parameters","title":"How to Tune Parameters","text":""},{"location":"planning/obstacle_avoidance_planner/#drivability-in-narrow-roads","title":"Drivability in narrow roads","text":""},{"location":"planning/obstacle_avoidance_planner/#computation-time","title":"Computation time","text":" "},{"location":"planning/obstacle_avoidance_planner/#robustness","title":"Robustness","text":""},{"location":"planning/obstacle_avoidance_planner/#other-options","title":"Other options","text":""},{"location":"planning/obstacle_avoidance_planner/#how-to-debug","title":"How To Debug","text":"

How to debug can be seen here.

"},{"location":"planning/obstacle_avoidance_planner/docs/debug/","title":"Debug","text":""},{"location":"planning/obstacle_avoidance_planner/docs/debug/#debug-visualization","title":"Debug visualization","text":"

Topics for debugging will be explained in this section.

"},{"location":"planning/obstacle_avoidance_planner/docs/debug/#calculation-cost","title":"Calculation cost","text":"

The obstacle avoidance planner consists of many functions such as clearance map generation, boundary search, reference path smoothing, trajectory optimization, etc. The calculation time for each function can be inspected as follows.

"},{"location":"planning/obstacle_avoidance_planner/docs/debug/#raw-data","title":"Raw data","text":"
$ ros2 topic echo /planning/scenario_planning/lane_driving/motion_planning/obstacle_avoidance_planner/debug/calculation_time --field data\n\n      getDrivableAreaInCV:= 0.21 [ms]\ngetClearanceMap:= 1.327 [ms]\ndrawObstaclesOnImage:= 0.436 [ms]\ngetAreaWithObjects:= 0.029 [ms]\ngetClearanceMap:= 2.186 [ms]\ngetMaps:= 4.213 [ms]\ncalculateTrajectory:= 2.417 [ms]\ngetOptimizedTrajectory:= 5.203 [ms]\ngetEBTrajectory:= 5.231 [ms]\ncalcBounds:= 0.821 [ms]\ncalcVehicleBounds:= 0.27 [ms]\ngetReferencePoints:= 2.866 [ms]\ngenerateMPTMatrix:= 0.195 [ms]\ngenerateValueMatrix:= 0.019 [ms]\ngetObjectiveMatrix:= 0.559 [ms]\ngetConstraintMatrix:= 1.776 [ms]\ninitOsqp:= 9.074 [ms]\nsolveOsqp:= 3.809 [ms]\nexecuteOptimization:= 15.46 [ms]\ngetMPTPoints:= 0.049 [ms]\ngetModelPredictiveTrajectory:= 18.928 [ms]\noptimizeTrajectory:= 24.234 [ms]\ninsertZeroVelocityOutsideDrivableArea:= 0.023 [ms]\ngetDebugVisualizationMarker:= 0.446 [ms]\npublishDebugVisualizationMarker:= 2.146 [ms]\ngetDebugVisualizationWallMarker:= 0.001 [ms]\npublishDebugVisualizationWallMarker:= 0.013 [ms]\npublishDebugDataInOptimization:= 2.696 [ms]\ngetExtendedTrajectory:= 0.016 [ms]\ngenerateFineTrajectoryPoints:= 0.118 [ms]\nalignVelocity:= 1.227 [ms]\ngeneratePostProcessedTrajectory:= 1.375 [ms]\nmakePrevTrajectories:= 1.411 [ms]\ngenerateOptimizedTrajectory:= 33.284 [ms]\ngetExtendedTrajectory:= 0.018 [ms]\ngenerateFineTrajectoryPoints:= 0.084 [ms]\nalignVelocity:= 1.109 [ms]\ngeneratePostProcessedTrajectory:= 1.217 [ms]\ngetDebugCostMap * 3:= 0 [ms]\npublishDebugDataInMain:= 0.023 [ms]\npathCallback:= 34.614 [ms]\n
"},{"location":"planning/obstacle_avoidance_planner/docs/debug/#plot","title":"Plot","text":"

With a script, we can plot some of the above times as follows.

python3 scripts/calclation_time_analyzer.py -f \"solveOsqp,initOsqp,pathCallback\"\n

"},{"location":"planning/obstacle_avoidance_planner/docs/eb/","title":"Elastic band","text":""},{"location":"planning/obstacle_avoidance_planner/docs/eb/#abstract","title":"Abstract","text":"

Elastic band smooths the input path. Since the latter optimization (model predictive trajectory) is calculated in the Frenet frame, path smoothing is applied here so that the latter optimization will be stable.

Note that this smoothing process does not consider collision checking. Therefore the output path may have a collision with road boundaries or obstacles.

"},{"location":"planning/obstacle_avoidance_planner/docs/eb/#flowchart","title":"Flowchart","text":""},{"location":"planning/obstacle_avoidance_planner/docs/eb/#formulation","title":"Formulation","text":"

We formulate a quadratic problem minimizing the diagonal length of the rhombus on each point generated by the current point and its previous and next points, shown as the red vector's length.

Assuming that the k-th point is \boldsymbol{p}_k = (x_k, y_k), the objective function is as follows.

\\begin{align} \\ J & = \\min \\sum_{k=1}^{n-2} ||(\\boldsymbol{p}_{k+1} - \\boldsymbol{p}_{k}) - (\\boldsymbol{p}_{k} - \\boldsymbol{p}_{k-1})||^2 \\\\ \\ & = \\min \\sum_{k=1}^{n-2} ||\\boldsymbol{p}_{k+1} - 2 \\boldsymbol{p}_{k} + \\boldsymbol{p}_{k-1}||^2 \\\\ \\ & = \\min \\sum_{k=1}^{n-2} \\{(x_{k+1} - 2 x_k + x_{k-1})^2 + (y_{k+1} - 2 y_k + y_{k-1})^2\\} \\\\ \\ & = \\min \\begin{pmatrix} \\ x_0 \\\\ \\ x_1 \\\\ \\ x_2 \\\\ \\vdots \\\\ \\ x_{n-3}\\\\ \\ x_{n-2} \\\\ \\ x_{n-1} \\\\ \\ y_0 \\\\ \\ y_1 \\\\ \\ y_2 \\\\ \\vdots \\\\ \\ y_{n-3}\\\\ \\ y_{n-2} \\\\ \\ y_{n-1} \\\\ \\end{pmatrix}^T \\begin{pmatrix} 1 & -2 & 1 & 0 & \\dots& \\\\ -2 & 5 & -4 & 1 & 0 &\\dots \\\\ 1 & -4 & 6 & -4 & 1 & \\\\ 0 & 1 & -4 & 6 & -4 & \\\\ \\vdots & 0 & \\ddots&\\ddots& \\ddots \\\\ & \\vdots & & & \\\\ & & & 1 & -4 & 6 & -4 & 1 \\\\ & & & & 1 & -4 & 5 & -2 \\\\ & & & & & 1 & -2 & 1& \\\\ & & & & & & & &1 & -2 & 1 & 0 & \\dots& \\\\ & & & & & & & &-2 & 5 & -4 & 1 & 0 &\\dots \\\\ & & & & & & & &1 & -4 & 6 & -4 & 1 & \\\\ & & & & & & & &0 & 1 & -4 & 6 & -4 & \\\\ & & & & & & & &\\vdots & 0 & \\ddots&\\ddots& \\ddots \\\\ & & & & & & & & & \\vdots & & & \\\\ & & & & & & & & & & & 1 & -4 & 6 & -4 & 1 \\\\ & & & & & & & & & & & & 1 & -4 & 5 & -2 \\\\ & & & & & & & & & & & & & 1 & -2 & 1& \\\\ \\end{pmatrix} \\begin{pmatrix} \\ x_0 \\\\ \\ x_1 \\\\ \\ x_2 \\\\ \\vdots \\\\ \\ x_{n-3}\\\\ \\ x_{n-2} \\\\ \\ x_{n-1} \\\\ \\ y_0 \\\\ \\ y_1 \\\\ \\ y_2 \\\\ \\vdots \\\\ \\ y_{n-3}\\\\ \\ y_{n-2} \\\\ \\ y_{n-1} \\\\ \\end{pmatrix} \\end{align}

Regarding the constraints, the distance that each point can move is limited so that the path will not change a lot but will become smoother. In detail, the longitudinal distance that each point can move is zero, and the lateral distance is parameterized as eb.clearance.clearance_for_fix, eb.clearance.clearance_for_joint and eb.clearance.clearance_for_smooth.

The following figure describes how to constrain the lateral distance to move. The red line is where the point can move. The points for the upper and lower bound are described as (x_k^u, y_k^u) and (x_k^l, y_k^l), respectively.

Based on the line equation whose slope angle is \\theta_k and that passes through (x_k, y_k), (x_k^u, y_k^u) and (x_k^l, y_k^l), the lateral constraint is formulated as follows.

C_k^l \\leq C_k \\leq C_k^u

In addition, the beginning point is fixed, and the end point as well if it is considered the goal. These constraints can be applied with the above equation by changing the distance that each point can move.

"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/","title":"Model predictive trajectory","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#abstract","title":"Abstract","text":"

Model Predictive Trajectory (MPT) calculates the trajectory that meets the following conditions.

The collision-free conditions are treated not as hard constraints but as soft constraints. When the optimization fails or the optimized trajectory is not collision-free, the output trajectory will be the previously generated trajectory.

The trajectory near the ego must be stable, so the condition that the trajectory points near the ego are the same as the previously generated trajectory is imposed; this is the only hard constraint in MPT.

"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#flowchart","title":"Flowchart","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#vehicle-kinematics","title":"Vehicle kinematics","text":"

As shown in the following figure, we consider the bicycle kinematics model in the Frenet frame to track the reference path. At time step k, we define the lateral distance to the reference path, the heading angle against the reference path, and the steer angle as y_k, \theta_k, and \delta_k respectively.

Assuming that the commanded steer angle is \delta_{des, k}, the kinematics model in the Frenet frame is formulated as follows. We also assume that the steer angle \delta_k follows the commanded one with a first-order lag.

\\begin{align} y_{k+1} & = y_{k} + v \\sin \\theta_k dt \\\\ \\theta_{k+1} & = \\theta_k + \\frac{v \\tan \\delta_k}{L}dt - \\kappa_k v \\cos \\theta_k dt \\\\ \\delta_{k+1} & = \\delta_k - \\frac{\\delta_k - \\delta_{des,k}}{\\tau}dt \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#linearization","title":"Linearization","text":"

Then we linearize these equations. y_k and \\theta_k are tracking errors, so we assume that those are small enough. Therefore \\sin \\theta_k \\approx \\theta_k.

Since \\delta_k is a steer angle, it is not always small. By using a reference steer angle \\delta_{\\mathrm{ref}, k} calculated by the reference path curvature \\kappa_k, we express \\delta_k with a small value \\Delta \\delta_k.

Note that the steer angle \\delta_k is within the steer angle limitation \\delta_{\\max}. When the reference steer angle \\delta_{\\mathrm{ref}, k} is larger than the steer angle limitation \\delta_{\\max}, and \\delta_{\\mathrm{ref}, k} is used to linearize the steer angle, \\Delta \\delta_k is \\Delta \\delta_k = \\delta - \\delta_{\\mathrm{ref}, k} = \\delta_{\\max} - \\delta_{\\mathrm{ref}, k}, and the absolute \\Delta \\delta_k gets larger. Therefore, we have to apply the steer angle limitation to \\delta_{\\mathrm{ref}, k} as well.

\\begin{align} \\delta_{\\mathrm{ref}, k} & = \\mathrm{clamp}(\\arctan(L \\kappa_k), -\\delta_{\\max}, \\delta_{\\max}) \\\\ \\delta_k & = \\delta_{\\mathrm{ref}, k} + \\Delta \\delta_k, \\ \\Delta \\delta_k \\ll 1 \\\\ \\end{align}

\\mathrm{clamp}(v, v_{\\min}, v_{\\max}) is a function that saturates v so that it is no smaller than v_{\\min} and no larger than v_{\\max}.

Using this \\delta_{\\mathrm{ref}, k}, \\tan \\delta_k is linearized as follows.

\\begin{align} \\tan \\delta_k & \\approx \\tan \\delta_{\\mathrm{ref}, k} + \\left.\\frac{d \\tan \\delta}{d \\delta}\\right|_{\\delta = \\delta_{\\mathrm{ref}, k}} \\Delta \\delta_k \\\\ & = \\tan \\delta_{\\mathrm{ref}, k} + \\left.\\frac{d \\tan \\delta}{d \\delta}\\right|_{\\delta = \\delta_{\\mathrm{ref}, k}} (\\delta_k - \\delta_{\\mathrm{ref}, k}) \\\\ & = \\tan \\delta_{\\mathrm{ref}, k} - \\frac{\\delta_{\\mathrm{ref}, k}}{\\cos^2 \\delta_{\\mathrm{ref}, k}} + \\frac{1}{\\cos^2 \\delta_{\\mathrm{ref}, k}} \\delta_k \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#one-step-state-equation","title":"One-step state equation","text":"

Based on the linearization, the error kinematics is formulated with the following linear equations,

\\begin{align} \\begin{pmatrix} y_{k+1} \\\\ \\theta_{k+1} \\end{pmatrix} = \\begin{pmatrix} 1 & v dt \\\\ 0 & 1 \\\\ \\end{pmatrix} \\begin{pmatrix} y_k \\\\ \\theta_k \\\\ \\end{pmatrix} + \\begin{pmatrix} 0 \\\\ \\frac{v dt}{L \\cos^{2} \\delta_{\\mathrm{ref}, k}} \\\\ \\end{pmatrix} \\delta_{k} + \\begin{pmatrix} 0 \\\\ \\frac{v \\tan(\\delta_{\\mathrm{ref}, k}) dt}{L} - \\frac{v \\delta_{\\mathrm{ref}, k} dt}{L \\cos^{2} \\delta_{\\mathrm{ref}, k}} - \\kappa_k v dt\\\\ \\end{pmatrix} \\end{align}

which can be formulated as follows with the state \\boldsymbol{x}_k = (y_k, \\theta_k)^T, the control input u_k, and some matrices.

\\begin{align} \\boldsymbol{x}_{k+1} = A_k \\boldsymbol{x}_k + \\boldsymbol{b}_k u_k + \\boldsymbol{w}_k \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#time-series-state-equation","title":"Time-series state equation","text":"

Then, we formulate the time-series state equation by concatenating the states, control inputs, and matrices respectively as

\\begin{align} \\boldsymbol{x} = A \\boldsymbol{x}_0 + B \\boldsymbol{u} + \\boldsymbol{w} \\end{align}

where

\\begin{align} \\boldsymbol{x} = (\\boldsymbol{x}^T_1, \\boldsymbol{x}^T_2, \\boldsymbol{x}^T_3, \\dots, \\boldsymbol{x}^T_{n-1})^T \\\\ \\boldsymbol{u} = (u_0, u_1, u_2, \\dots, u_{n-2})^T \\\\ \\boldsymbol{w} = (\\boldsymbol{w}^T_0, \\boldsymbol{w}^T_1, \\boldsymbol{w}^T_2, \\dots, \\boldsymbol{w}^T_{n-2})^T. \\\\ \\end{align}

In detail, each matrix is constructed as follows.

\\begin{align} \\begin{pmatrix} \\boldsymbol{x}_1 \\\\ \\boldsymbol{x}_2 \\\\ \\boldsymbol{x}_3 \\\\ \\vdots \\\\ \\boldsymbol{x}_{n-1} \\end{pmatrix} = \\begin{pmatrix} A_0 \\\\ A_1 A_0 \\\\ A_2 A_1 A_0\\\\ \\vdots \\\\ \\prod\\limits_{k=0}^{n-1} A_{k} \\end{pmatrix} \\boldsymbol{x}_0 + \\begin{pmatrix} B_0 & 0 & & \\dots & 0 \\\\ A_0 B_0 & B_1 & 0 & \\dots & 0 \\\\ A_1 A_0 B_0 & A_0 B_1 & B_2 & \\dots & 0 \\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-3} A_k B_0 & \\prod\\limits_{k=0}^{n-4} A_k B_1 & \\dots & A_0 B_{n-3} & B_{n-2} \\end{pmatrix} \\begin{pmatrix} u_0 \\\\ u_1 \\\\ u_2 \\\\ \\vdots \\\\ u_{n-2} \\end{pmatrix} + \\begin{pmatrix} I & 0 & & \\dots & 0 \\\\ A_0 & I & 0 & \\dots & 0 \\\\ A_1 A_0 & A_0 & I & \\dots & 0 \\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-3} A_k & \\prod\\limits_{k=0}^{n-4} A_k & \\dots & A_0 & I \\end{pmatrix} \\begin{pmatrix} \\boldsymbol{w}_0 \\\\ \\boldsymbol{w}_1 \\\\ \\boldsymbol{w}_2 \\\\ \\vdots \\\\ \\boldsymbol{w}_{n-2} \\end{pmatrix} \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#free-boundary-conditioned-time-series-state-equation","title":"Free-boundary-conditioned time-series state equation","text":"

For path planning that does not start from the current ego pose, \\boldsymbol{x}_0 should be a design variable of the optimization. Therefore, we form \\boldsymbol{u}' by concatenating \\boldsymbol{x}_0 and \\boldsymbol{u}, and redefine \\boldsymbol{x} as follows.

\\begin{align} \\boldsymbol{u}' & = (\\boldsymbol{x}^T_0, \\boldsymbol{u}^T)^T \\\\ \\boldsymbol{x} & = (\\boldsymbol{x}^T_0, \\boldsymbol{x}^T_1, \\boldsymbol{x}^T_2, \\dots, \\boldsymbol{x}^T_{n-1})^T \\end{align}

Then we get the following state equation

\\begin{align} \\boldsymbol{x}' = B \\boldsymbol{u}' + \\boldsymbol{w}, \\end{align}

which is in detail

\\begin{align} \\begin{pmatrix} \\boldsymbol{x}_0 \\\\ \\boldsymbol{x}_1 \\\\ \\boldsymbol{x}_2 \\\\ \\boldsymbol{x}_3 \\\\ \\vdots \\\\ \\boldsymbol{x}_{n-1} \\end{pmatrix} = \\begin{pmatrix} I & 0 & \\dots & & & 0 \\\\ A_0 & B_0 & 0 & & \\dots & 0 \\\\ A_1 A_0 & A_0 B_0 & B_1 & 0 & \\dots & 0 \\\\ A_2 A_1 A_0 & A_1 A_0 B_0 & A_0 B_1 & B_2 & \\dots & 0 \\\\ \\vdots & \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-1} A_k & \\prod\\limits_{k=0}^{n-3} A_k B_0 & \\prod\\limits_{k=0}^{n-4} A_k B_1 & \\dots & A_0 B_{n-3} & B_{n-2} \\end{pmatrix} \\begin{pmatrix} \\boldsymbol{x}_0 \\\\ u_0 \\\\ u_1 \\\\ u_2 \\\\ \\vdots \\\\ u_{n-2} \\end{pmatrix} + \\begin{pmatrix} 0 & \\dots & & & 0 \\\\ I & 0 & & \\dots & 0 \\\\ A_0 & I & 0 & \\dots & 0 \\\\ A_1 A_0 & A_0 & I & \\dots & 0 \\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-3} A_k & \\prod\\limits_{k=0}^{n-4} A_k & \\dots & A_0 & I \\end{pmatrix} \\begin{pmatrix} \\boldsymbol{w}_0 \\\\ \\boldsymbol{w}_1 \\\\ \\boldsymbol{w}_2 \\\\ \\vdots \\\\ \\boldsymbol{w}_{n-2} \\end{pmatrix}. \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#objective-function","title":"Objective function","text":"

The objective function for smoothing and tracking is shown as follows, which can be formulated with value function matrices Q, R.

\\begin{align} J_1 (\\boldsymbol{x}', \\boldsymbol{u}') & = w_y \\sum_{k} y_k^2 + w_{\\theta} \\sum_{k} \\theta_k^2 + w_{\\delta} \\sum_k \\delta_k^2 + w_{\\dot{\\delta}} \\sum_k \\dot{\\delta}_k^2 + w_{\\ddot{\\delta}} \\sum_k \\ddot{\\delta}_k^2 \\\\ & = \\boldsymbol{x}'^T Q \\boldsymbol{x}' + \\boldsymbol{u}'^T R \\boldsymbol{u}' \\\\ & = \\boldsymbol{u}'^T H \\boldsymbol{u}' + \\boldsymbol{u}'^T \\boldsymbol{f} \\end{align}

As mentioned before, the constraints to be collision free with obstacles and road boundaries are formulated to be soft constraints. Assuming that the lateral distance to the road boundaries or obstacles from the back wheel center, front wheel center, and the point between them are y_{\\mathrm{base}, k}, y_{\\mathrm{top}, k}, y_{\\mathrm{mid}, k} respectively, and slack variables for each point are \\lambda_{\\mathrm{base}}, \\lambda_{\\mathrm{top}}, \\lambda_{\\mathrm{mid}}, the soft constraints can be formulated as follows.

y_{\\mathrm{base}, k, \\min} - \\lambda_{\\mathrm{base}, k} \\leq y_{\\mathrm{base}, k} (y_k) \\leq y_{\\mathrm{base}, k, \\max} + \\lambda_{\\mathrm{base}, k}\\\\ y_{\\mathrm{top}, k, \\min} - \\lambda_{\\mathrm{top}, k} \\leq y_{\\mathrm{top}, k} (y_k) \\leq y_{\\mathrm{top}, k, \\max} + \\lambda_{\\mathrm{top}, k}\\\\ y_{\\mathrm{mid}, k, \\min} - \\lambda_{\\mathrm{mid}, k} \\leq y_{\\mathrm{mid}, k} (y_k) \\leq y_{\\mathrm{mid}, k, \\max} + \\lambda_{\\mathrm{mid}, k} \\\\ 0 \\leq \\lambda_{\\mathrm{base}, k} \\\\ 0 \\leq \\lambda_{\\mathrm{top}, k} \\\\ 0 \\leq \\lambda_{\\mathrm{mid}, k}

Since y_{\\mathrm{base}, k}, y_{\\mathrm{top}, k}, y_{\\mathrm{mid}, k} are formulated as linear functions of y_k, the objective function for the soft constraints is formulated as follows.

\\begin{align} J_2 & (\\boldsymbol{\\lambda}_\\mathrm{base}, \\boldsymbol{\\lambda}_\\mathrm{top}, \\boldsymbol {\\lambda}_\\mathrm{mid})\\\\ & = w_{\\mathrm{base}} \\sum_{k} \\lambda_{\\mathrm{base}, k} + w_{\\mathrm{mid}} \\sum_k \\lambda_{\\mathrm{mid}, k} + w_{\\mathrm{top}} \\sum_k \\lambda_{\\mathrm{top}, k} \\end{align}

Slack variables are also design variables for the optimization. We define a vector \\boldsymbol{v} that concatenates all the design variables.

\\begin{align} \\boldsymbol{v} = \\begin{pmatrix} \\boldsymbol{u}'^T & \\boldsymbol{\\lambda}_\\mathrm{base}^T & \\boldsymbol{\\lambda}_\\mathrm{top}^T & \\boldsymbol{\\lambda}_\\mathrm{mid}^T \\end{pmatrix}^T \\end{align}

The summation of these two objective functions is the objective function for the optimization problem.

\\begin{align} \\min_{\\boldsymbol{v}} J (\\boldsymbol{v}) = \\min_{\\boldsymbol{v}} J_1 (\\boldsymbol{u}') + J_2 (\\boldsymbol{\\lambda}_\\mathrm{base}, \\boldsymbol{\\lambda}_\\mathrm{top}, \\boldsymbol{\\lambda}_\\mathrm{mid}) \\end{align}

As mentioned before, we use hard constraints that force some trajectory points in front of the ego to be the same as the previously generated trajectory points. This hard constraint is formulated as follows.

\\begin{align} \\delta_k = \\delta_{k}^{\\mathrm{prev}} (0 \\leq k \\leq N_{\\mathrm{fix}}) \\end{align}

Finally we transform those objective functions to the following QP problem, and solve it.

\\begin{align} \\min_{\\boldsymbol{v}} \\ & \\frac{1}{2} \\boldsymbol{v}^T \\boldsymbol{H} \\boldsymbol{v} + \\boldsymbol{f} \\boldsymbol{v} \\\\ \\mathrm{s.t.} \\ & \\boldsymbol{b}_{lower} \\leq \\boldsymbol{A} \\boldsymbol{v} \\leq \\boldsymbol{b}_{upper} \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#constraints","title":"Constraints","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#steer-angle-limitation","title":"Steer angle limitation","text":"

The steer angle is limited by \\delta_{max} and \\delta_{min}. Therefore we add the following linear inequality constraints.

\\begin{align} \\delta_{min} \\leq \\delta_k \\leq \\delta_{max} \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#collision-free","title":"Collision free","text":"

To realize collision-free trajectory planning, we have to formulate, as linear equations, the constraints that the vehicle is inside the road and does not collide with obstacles. For linearity, we implemented a method to approximate the vehicle shape with a set of circles, which is reliable and easy to implement.

Now we formulate the linear constraints so that a set of circles on each trajectory point is collision-free. Using the drivable area, we calculate the upper and lower boundaries along the reference points, which can be interpolated at any position on the trajectory. Note that the upper and lower boundaries correspond to the left and right sides, respectively.

Assuming that the lower and upper boundaries are b_l and b_u respectively, and r is the radius of a circle, the lateral deviation of the circle center y' has to satisfy

b_l + r \\leq y' \\leq b_u - r.

Based on the following figure, y' can be formulated as follows.

\\begin{align} y' & = L \\sin(\\theta + \\beta) + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\\\ & = L \\sin \\theta \\cos \\beta + L \\cos \\theta \\sin \\beta + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\\\ & \\approx L \\theta \\cos \\beta + L \\sin \\beta + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\end{align} b_l + r - \\lambda \\leq y' \\leq b_u - r + \\lambda. \\begin{align} y' & = C_1 \\boldsymbol{x} + C_2 \\\\ & = C_1 (B \\boldsymbol{v} + \\boldsymbol{w}) + C_2 \\\\ & = C_1 B \\boldsymbol{v} + C_1 \\boldsymbol{w} + C_2 \\end{align}

Note that the longitudinal positions of the circle center and of the trajectory point used to calculate the boundaries are different. However, since each boundary is perpendicular to the trajectory and the road boundaries do not change much, this longitudinal difference causes little distortion. For example, if the boundaries were not perpendicular to the trajectory and there were a certain longitudinal difference between the circle center and the trajectory point, much more distortion would appear when comparing the lateral deviation with the boundaries.

\\begin{align} A_{blk} & = \\begin{pmatrix} C_1 B & O & \\dots & O & I_{N_{ref} \\times N_{ref}} & O \\dots & O\\\\ -C_1 B & O & \\dots & O & I & O \\dots & O\\\\ O & O & \\dots & O & I & O \\dots & O \\end{pmatrix} \\in \\boldsymbol{R}^{3 N_{ref} \\times D_v + N_{circle} N_{ref}} \\\\ \\boldsymbol{b}_{lower, blk} & = \\begin{pmatrix} \\boldsymbol{b}_{lower} - C_1 \\boldsymbol{w} - C_2 \\\\ -\\boldsymbol{b}_{upper} + C_1 \\boldsymbol{w} + C_2 \\\\ O \\end{pmatrix} \\in \\boldsymbol{R}^{3 N_{ref}} \\\\ \\boldsymbol{b}_{upper, blk} & = \\boldsymbol{\\infty} \\in \\boldsymbol{R}^{3 N_{ref}} \\end{align}

We will explain options for optimization.

"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#l-infinity-optimization","title":"L-infinity optimization","text":"

The above formulation uses the L2 norm for the slack variables. Instead, the L-infinity norm, where a single slack variable is shared, can be used by enabling l_inf_norm.

\\begin{align} A_{blk} = \\begin{pmatrix} C_1 B & I_{N_{ref} \\times N_{ref}} \\\\ -C_1 B & I \\\\ O & I \\end{pmatrix} \\in \\boldsymbol{R}^{3N_{ref} \\times D_v + N_{ref}} \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#two-step-soft-constraints","title":"Two-step soft constraints","text":"\\begin{align} \\boldsymbol{v}' = \\begin{pmatrix} \\boldsymbol{v} \\\\ \\boldsymbol{\\lambda}^{soft_1} \\\\ \\boldsymbol{\\lambda}^{soft_2} \\\\ \\end{pmatrix} \\in \\boldsymbol{R}^{D_v + 2N_{slack}} \\end{align}

* depends on whether the L2 norm or the L-infinity optimization is used.

\\begin{align} A_{blk} & = \\begin{pmatrix} A^{soft_1}_{blk} \\\\ A^{soft_2}_{blk} \\\\ \\end{pmatrix}\\\\ & = \\begin{pmatrix} C_1^{soft_1} B & & \\\\ -C_1^{soft_1} B & \\Huge{*} & \\Huge{O} \\\\ O & & \\\\ C_1^{soft_2} B & & \\\\ -C_1^{soft_2} B & \\Huge{O} & \\Huge{*} \\\\ O & & \\end{pmatrix} \\in \\boldsymbol{R}^{6 N_{ref} \\times D_v + 2 N_{slack}} \\end{align}

N_{slack} is N_{circle} for L2 optimization, or 1 for L-infinity optimization. N_{circle} is the number of circles used for collision checking.

"},{"location":"planning/obstacle_cruise_planner/","title":"Obstacle Cruise Planner","text":""},{"location":"planning/obstacle_cruise_planner/#overview","title":"Overview","text":"

The obstacle_cruise_planner package has the following modules.

"},{"location":"planning/obstacle_cruise_planner/#interfaces","title":"Interfaces","text":""},{"location":"planning/obstacle_cruise_planner/#input-topics","title":"Input topics","text":"Name Type Description ~/input/trajectory autoware_auto_planning_msgs::Trajectory input trajectory ~/input/objects autoware_auto_perception_msgs::PredictedObjects dynamic objects ~/input/odometry nav_msgs::msg::Odometry ego odometry"},{"location":"planning/obstacle_cruise_planner/#output-topics","title":"Output topics","text":"Name Type Description ~/output/trajectory autoware_auto_planning_msgs::Trajectory output trajectory ~/output/velocity_limit tier4_planning_msgs::VelocityLimit velocity limit for cruising ~/output/clear_velocity_limit tier4_planning_msgs::VelocityLimitClearCommand clear command for velocity limit ~/output/stop_reasons tier4_planning_msgs::StopReasonArray reasons that make the vehicle to stop"},{"location":"planning/obstacle_cruise_planner/#design","title":"Design","text":"

Design for the following functions is defined here.

A data structure for cruise and stop planning is as follows. This planner data is created first, and then sent to the planning algorithm.

struct ObstacleCruisePlannerData\n{\nrclcpp::Time current_time;\nautoware_auto_planning_msgs::msg::Trajectory traj;\ngeometry_msgs::msg::Pose current_pose;\ndouble current_vel;\ndouble current_acc;\nstd::vector<TargetObstacle> target_obstacles;\n};\n
struct TargetObstacle\n{\nrclcpp::Time time_stamp;\ngeometry_msgs::msg::Pose pose;\nbool orientation_reliable;\ndouble velocity;\nbool velocity_reliable;\nObjectClassification classification;\nstd::string uuid;\nstd::vector<PredictedPath> predicted_paths;\nstd::vector<geometry_msgs::msg::PointStamped> collision_points;\nbool has_stopped;\n};\n
"},{"location":"planning/obstacle_cruise_planner/#obstacle-candidates-selection","title":"Obstacle candidates selection","text":"

In this function, target obstacles for stopping or cruising are selected based on their pose and velocity.

By default, objects that satisfy one of the following conditions are considered the target obstacle candidates. Some terms are defined in the following subsections.

Note that the current obstacle candidate selection algorithm targets autonomous driving. However, the following parameters are provided for cruise and stop respectively so that the selection algorithm can be extended to non-vehicle robots. By default, unknown objects and vehicles are obstacles for both cruising and stopping, and non-vehicle objects are obstacles only for stopping.

Parameter Type Description cruise_obstacle_type.unknown bool flag to consider unknown objects as being cruised cruise_obstacle_type.car bool flag to consider car objects as being cruised cruise_obstacle_type.truck bool flag to consider truck objects as being cruised ... bool ... stop_obstacle_type.unknown bool flag to consider unknown objects as being stopped ... bool ..."},{"location":"planning/obstacle_cruise_planner/#inside-the-detection-area","title":"Inside the detection area","text":"

To find the obstacles inside the detection area, first, obstacles whose distance to the trajectory is less than rough_detection_area_expand_width are selected. Then, the detection area, which is the trajectory with some lateral margin, is calculated as shown in the figure. The detection area width is the vehicle width plus detection_area_expand_width, and it is represented as a polygon resampled longitudinally with decimate_trajectory_step_length. The roughly selected obstacles that lie inside this polygon are finally considered as inside the detection area.

This two-step detection is used for computational efficiency, since collision checking between polygons is heavy. Boost.Geometry is used as the library to check collisions among polygons, as in the sketch below.
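A minimal sketch of this two-step selection, assuming a hypothetical distToTrajectory helper and obstacle type (the real node works on PredictedObjects and the resampled trajectory polygon):

#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <vector>

namespace bg = boost::geometry;
using BoostPoint = bg::model::d2::point_xy<double>;
using BoostPolygon = bg::model::polygon<BoostPoint>;

struct Obstacle
{
  double x;
  double y;
  BoostPolygon footprint;
};

// Hypothetical sketch: a cheap distance filter first, then a precise
// polygon-intersection test with Boost.Geometry only on the survivors.
std::vector<Obstacle> selectObstacles(
  const std::vector<Obstacle> & obstacles, const BoostPolygon & detection_area,
  double rough_detection_area_expand_width,
  double (*distToTrajectory)(double x, double y))  // assumed helper
{
  std::vector<Obstacle> selected;
  for (const auto & obs : obstacles) {
    if (distToTrajectory(obs.x, obs.y) > rough_detection_area_expand_width) {
      continue;  // step 1: rough, cheap filter
    }
    if (bg::intersects(obs.footprint, detection_area)) {
      selected.push_back(obs);  // step 2: precise polygon check
    }
  }
  return selected;
}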

In the obstacle_filtering namespace,

Parameter Type Description rough_detection_area_expand_width double rough lateral margin for rough detection area expansion [m] detection_area_expand_width double lateral margin for precise detection area expansion [m] decimate_trajectory_step_length double longitudinal step length to calculate trajectory polygon for collision checking [m]"},{"location":"planning/obstacle_cruise_planner/#far-crossing-vehicles","title":"Far crossing vehicles","text":"

Near crossing vehicles (i.e., not far crossing vehicles) are defined as vehicle objects satisfying either of the following conditions.

Assuming t_1 is the time for the ego to reach the current crossing obstacle position at constant velocity, and t_2 is the time for the crossing obstacle to leave the detection area, the crossing vehicle is ignored if the following condition holds.

t_1 - t_2 > \\mathrm{margin\\_for\\_collision\\_time}
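A minimal sketch of this check, assuming the two distances are already computed (the margin corresponds to the collision_time_margin parameter below; the small floors on the velocities avoid division by zero and are an assumption):

#include <algorithm>
#include <cmath>

// Hypothetical sketch of the ignore condition for far crossing vehicles.
// t1: time for the ego to reach the obstacle position at constant velocity.
// t2: time for the crossing obstacle to leave the detection area laterally.
bool isFarCrossingObstacle(
  double dist_ego_to_obstacle, double ego_vel,
  double dist_obstacle_to_area_edge, double obstacle_vel,
  double collision_time_margin)
{
  const double t1 = dist_ego_to_obstacle / std::max(ego_vel, 0.1);
  const double t2 = dist_obstacle_to_area_edge / std::max(std::abs(obstacle_vel), 0.1);
  return t1 - t2 > collision_time_margin;  // the obstacle is ignored if true
}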

In the obstacle_filtering namespace,

Parameter Type Description crossing_obstacle_velocity_threshold double velocity threshold to decide crossing obstacle [m/s] crossing_obstacle_traj_angle_threshold double yaw threshold of crossing obstacle against the nearest trajectory point [rad] collision_time_margin double time threshold of collision between obstacle and ego [s]"},{"location":"planning/obstacle_cruise_planner/#near-cut-in-vehicles","title":"Near Cut-in vehicles","text":"

Near cut-in vehicles are defined as vehicle objects that satisfy the following conditions.

In the obstacle_filtering namespace,

Parameter Type Description ego_obstacle_overlap_time_threshold double time threshold to decide cut-in obstacle for cruise or stop [s] max_prediction_time_for_collision_check double prediction time to check collision between obstacle and ego [s] outside_obstacle_min_velocity_threshold double minimum velocity threshold of target obstacle for cut-in detection [m/s]"},{"location":"planning/obstacle_cruise_planner/#stop-planning","title":"Stop planning","text":"Parameter Type Description common.min_strong_accel double ego's minimum acceleration to stop [m/ss] common.safe_distance_margin double distance to obstacles for stop [m] common.terminal_safe_distance_margin double terminal distance to obstacles for stop, which cannot exceed the safe distance margin [m]

The role of stop planning is to keep a safe distance from static vehicle objects and dynamic/static non-vehicle objects.

Stop planning simply inserts a stop point into the trajectory to keep a distance from obstacles inside the detection area. The safe distance is parameterized as common.safe_distance_margin. When the ego stops at the end of the trajectory and an obstacle is at that same point, the safe distance becomes terminal_safe_distance_margin.

When inserting the stop point, the acceleration required for the ego to stop in front of it is calculated. If this acceleration is less than common.min_strong_accel, stop planning is cancelled, since this package does not assume strong sudden braking for emergencies.
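
A minimal sketch of this cancellation check, assuming a constant-deceleration stop (the function name and the std::optional-based interface are illustrative):

#include <optional>\n\nstd::optional<double> calcRequiredStopAccel(\n  const double ego_velocity, const double dist_to_stop_point, const double min_strong_accel)\n{\n  // constant-deceleration stop: 0 = v^2 + 2 a d  =>  a = -v^2 / (2 d)\n  const double required_accel = -(ego_velocity * ego_velocity) / (2.0 * dist_to_stop_point);\n  if (required_accel < min_strong_accel) {\n    return std::nullopt;  // stopping would require an emergency-level brake: cancel stop planning\n  }\n  return required_accel;\n}\n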

"},{"location":"planning/obstacle_cruise_planner/#adaptive-cruise-planning","title":"Adaptive cruise planning","text":"Parameter Type Description common.safe_distance_margin double minimum distance with obstacles for cruise [m]

The role of adaptive cruise planning is to keep a safe distance from dynamic vehicle objects with a smooth velocity transition. This includes not only cruising behind a front vehicle, but also reacting to cut-in and cut-out vehicles.

The safe distance is calculated dynamically based on Responsibility-Sensitive Safety (RSS) using the following equation.

d_{rss} = v_{ego} t_{idling} + \\frac{1}{2} a_{ego} t_{idling}^2 + \\frac{v_{ego}^2}{2 a_{ego}} - \\frac{v_{obstacle}^2}{2 a_{obstacle}},

assuming that d_{rss} is the calculated safe distance, t_{idling} is the idling time for the ego to detect the front vehicle's deceleration, v_{ego} is the ego's current velocity, v_{obstacle} is the front obstacle's current velocity, a_{ego} is the ego's acceleration, and a_{obstacle} is the obstacle's acceleration. These values are parameterized as follows. Other common values, such as the ego's minimum acceleration, are defined in common.param.yaml.

Parameter Type Description common.idling_time double idling time for the ego to detect the front vehicle starting deceleration [s] common.min_ego_accel_for_rss double ego's acceleration for RSS [m/ss] common.min_object_accel_for_rss double front obstacle's acceleration for RSS [m/ss]
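
As a minimal sketch, the RSS equation above translates directly into code as follows, where a_ego and a_obstacle are the signed accelerations given by common.min_ego_accel_for_rss and common.min_object_accel_for_rss:

double calcRssDistance(\n  const double v_ego, const double v_obstacle, const double t_idling,\n  const double a_ego, const double a_obstacle)\n{\n  // direct translation of the d_{rss} equation above\n  return v_ego * t_idling + 0.5 * a_ego * t_idling * t_idling +\n    v_ego * v_ego / (2.0 * a_ego) - v_obstacle * v_obstacle / (2.0 * a_obstacle);\n}\n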

The detailed formulation is as follows.

d_{error} = d - d_{rss} \\\\ d_{normalized} = lpf(d_{error} / d_{obstacle}) \\\\ d_{quad, normalized} = sign(d_{normalized}) \\cdot d_{normalized}^2 \\\\ v_{pid} = pid(d_{quad, normalized}) \\\\ v_{add} = v_{pid} > 0 ? v_{pid} \\cdot w_{acc} : v_{pid} \\\\ v_{target} = max(v_{ego} + v_{add}, v_{min, cruise}) Variable Description d actual distance to obstacle d_{rss} ideal distance to obstacle based on RSS v_{min, cruise} min_cruise_target_vel w_{acc} output_ratio_during_accel lpf(val) apply low-pass filter to val pid(val) apply pid to val"},{"location":"planning/obstacle_cruise_planner/#implementation","title":"Implementation","text":""},{"location":"planning/obstacle_cruise_planner/#flowchart","title":"Flowchart","text":"

obstacle_cruise_planner consists of the following successive functions.

Various algorithms for stop and cruise planning will be implemented, and one of them is selected depending on the use case. The core algorithm implementation generateTrajectory depends on the selected algorithm.

"},{"location":"planning/obstacle_cruise_planner/#algorithm-selection","title":"Algorithm selection","text":"

Currently, only a PID-based planner is supported. Each planner will be explained in the following.

Parameter Type Description common.planning_method string cruise and stop planning algorithm, selected from \"pid_base\""},{"location":"planning/obstacle_cruise_planner/#pid-based-planner","title":"PID-based planner","text":""},{"location":"planning/obstacle_cruise_planner/#stop-planning_1","title":"Stop planning","text":"

In the pid_based_planner namespace,

Parameter Type Description obstacle_velocity_threshold_from_cruise_to_stop double obstacle velocity threshold to be stopped from cruised [m/s]

Only one obstacle is targeted for stop planning. It is the obstacle among the obstacle candidates whose velocity is less than obstacle_velocity_threshold_from_cruise_to_stop and which is nearest to the ego along the trajectory. A stop point is inserted keeping common.safe_distance_margin distance between the ego and the obstacle.

Note that, as explained in the stop planning design, a stop planning which requires a strong acceleration (less than common.min_strong_accel) will be canceled.
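
A conceptual sketch of this selection (TargetObstacle is the struct shown earlier; lengthAlongTrajectory is an assumed helper returning the obstacle's longitudinal position along the trajectory):

#include <cmath>\n#include <limits>\n#include <vector>\n\nconst TargetObstacle * selectStopObstacle(\n  const std::vector<TargetObstacle> & candidates, const double velocity_threshold)\n{\n  const TargetObstacle * stop_target = nullptr;\n  double min_dist = std::numeric_limits<double>::max();\n  for (const auto & obstacle : candidates) {\n    if (std::abs(obstacle.velocity) >= velocity_threshold) {\n      continue;  // fast enough to be cruised behind instead of stopped for\n    }\n    const double dist = lengthAlongTrajectory(obstacle.pose);  // assumed helper\n    if (dist < min_dist) {\n      min_dist = dist;\n      stop_target = &obstacle;  // keep the nearest slow obstacle along the trajectory\n    }\n  }\n  return stop_target;\n}\n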

"},{"location":"planning/obstacle_cruise_planner/#adaptive-cruise-planning_1","title":"Adaptive cruise planning","text":"

In the pid_based_planner namespace,

Parameter Type Description kp double p gain for pid control [-] ki double i gain for pid control [-] kd double d gain for pid control [-] output_ratio_during_accel double The output velocity will be multiplied by the ratio during acceleration to follow the front vehicle. [-] vel_to_acc_weight double target acceleration is target velocity * vel_to_acc_weight [-] min_cruise_target_vel double minimum target velocity during cruise [m/s]

In order to keep the safe distance, the target velocity and acceleration are calculated and sent as an external velocity limit to the velocity smoothing package (motion_velocity_smoother by default). The target velocity and acceleration are calculated with the PID controller according to the error between the reference safe distance and the actual distance.
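
Following the formulation given in the adaptive cruise planning design section, a sketch of the velocity command could look like this (lowpassFilter and pid stand in for the actual filter and controller objects and are assumptions):

#include <algorithm>\n\ndouble calcCruiseTargetVelocity(\n  const double dist_to_obstacle, const double rss_dist, const double ego_velocity,\n  const double output_ratio_during_accel, const double min_cruise_target_vel)\n{\n  const double dist_error = dist_to_obstacle - rss_dist;\n  const double normalized = lowpassFilter(dist_error / dist_to_obstacle);  // assumed LPF\n  const double quad = (normalized >= 0.0 ? 1.0 : -1.0) * normalized * normalized;\n  const double v_pid = pid(quad);  // assumed PID controller\n  const double v_add = v_pid > 0.0 ? v_pid * output_ratio_during_accel : v_pid;\n  return std::max(ego_velocity + v_add, min_cruise_target_vel);\n}\n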

"},{"location":"planning/obstacle_cruise_planner/#optimization-based-planner","title":"Optimization-based planner","text":"

under construction

"},{"location":"planning/obstacle_cruise_planner/#minor-functions","title":"Minor functions","text":""},{"location":"planning/obstacle_cruise_planner/#prioritization-of-behavior-modules-stop-point","title":"Prioritization of behavior module's stop point","text":"

When stopping for a pedestrian walking on a crosswalk, the behavior module inserts zero velocity into the trajectory in front of the crosswalk. obstacle_cruise_planner's stop planning also works, and the ego may not reach the behavior module's stop point, since the safe distance defined in obstacle_cruise_planner may be longer than the behavior module's safe distance. To resolve this misalignment of the stop point between the behavior module and obstacle_cruise_planner, common.min_behavior_stop_margin is defined. In the crosswalk case described above, obstacle_cruise_planner inserts the stop point with a distance of at least common.min_behavior_stop_margin between the ego and the obstacle.

Parameter Type Description common.min_behavior_stop_margin double minimum stop margin when stopping with the behavior module enabled [m]"},{"location":"planning/obstacle_cruise_planner/#a-function-to-keep-the-closest-stop-obstacle-in-target-obstacles","title":"A function to keep the closest stop obstacle in target obstacles","text":"

In order to keep the closest stop obstacle in the target obstacles, we check whether it has disappeared from the target obstacles in the checkConsistency function. If the previous closest stop obstacle is removed from the list, we keep it in the list for stop_obstacle_hold_time_threshold seconds. Note that if a new stop obstacle appears after the previous closest obstacle is removed from the list, we do not add the previous obstacle to the target obstacles again.

Parameter Type Description obstacle_filtering.stop_obstacle_hold_time_threshold double maximum time for holding closest stop obstacle [s]"},{"location":"planning/obstacle_cruise_planner/#visualization-for-debugging","title":"Visualization for debugging","text":""},{"location":"planning/obstacle_cruise_planner/#detection-area","title":"Detection area","text":"

Green polygons representing the detection area are visualized by detection_polygons in the ~/debug/marker topic.

"},{"location":"planning/obstacle_cruise_planner/#collision-point","title":"Collision point","text":"

Red points representing collision points with obstacles are visualized by collision_points in the ~/debug/marker topic.

"},{"location":"planning/obstacle_cruise_planner/#obstacle-for-cruise","title":"Obstacle for cruise","text":"

Yellow spheres representing obstacles to cruise are visualized by obstacles_to_cruise in the ~/debug/marker topic.

"},{"location":"planning/obstacle_cruise_planner/#obstacle-for-stop","title":"Obstacle for stop","text":"

Red spheres representing obstacles to stop are visualized by obstacles_to_stop in the ~/debug/marker topic.

"},{"location":"planning/obstacle_cruise_planner/#obstacle-cruise-wall","title":"Obstacle cruise wall","text":"

A yellow wall, indicating the safe cruise distance when the ego's front reaches it, is visualized in the ~/debug/cruise_wall_marker topic.

"},{"location":"planning/obstacle_cruise_planner/#obstacle-stop-wall","title":"Obstacle stop wall","text":"

A red wall, indicating the safe stop distance when the ego's front reaches it, is visualized in the ~/debug/stop_wall_marker topic.

"},{"location":"planning/obstacle_cruise_planner/#known-limits","title":"Known Limits","text":""},{"location":"planning/obstacle_stop_planner/","title":"Obstacle Stop Planner","text":""},{"location":"planning/obstacle_stop_planner/#overview","title":"Overview","text":"

obstacle_stop_planner has the following modules.

"},{"location":"planning/obstacle_stop_planner/#input-topics","title":"Input topics","text":"Name Type Description ~/input/pointcloud sensor_msgs::PointCloud2 obstacle pointcloud ~/input/trajectory autoware_auto_planning_msgs::Trajectory trajectory ~/input/vector_map autoware_auto_mapping_msgs::HADMapBin vector map ~/input/odometry nav_msgs::Odometry vehicle velocity ~/input/dynamic_objects autoware_auto_perception_msgs::PredictedObjects dynamic objects ~/input/expand_stop_range tier4_planning_msgs::msg::ExpandStopRange expand stop range"},{"location":"planning/obstacle_stop_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/trajectory autoware_auto_planning_msgs::Trajectory trajectory to be followed ~output/stop_reasons tier4_planning_msgs::StopReasonArray reasons that cause the vehicle to stop"},{"location":"planning/obstacle_stop_planner/#common-parameter","title":"Common Parameter","text":"Parameter Type Description enable_slow_down bool enable slow down planner [-] max_velocity double max velocity [m/s] chattering_threshold double even if the obstacle disappears, the stop judgment continues for chattering_threshold [s] enable_z_axis_obstacle_filtering bool filter obstacles in z axis (height) [-] z_axis_filtering_buffer double additional buffer for z axis filtering [m]]"},{"location":"planning/obstacle_stop_planner/#obstacle-stop-planner_1","title":"Obstacle Stop Planner","text":""},{"location":"planning/obstacle_stop_planner/#role","title":"Role","text":"

This module inserts a stop point before the obstacle, with a margin. In the nominal case, the margin is the sum of baselink_to_front and max_longitudinal_margin. baselink_to_front is the distance between the base_link (the center of the rear-wheel axle) and the front of the car. The detection area is generated along the processed trajectory as in the following figure. (This module cuts off the input trajectory behind the ego position and decimates the trajectory points to reduce computational cost.)

parameters for obstacle stop planner

target for obstacle stop planner

If another stop point has already been inserted by other modules within max_longitudinal_margin, the margin is the sum of baselink_to_front and min_longitudinal_margin. This feature exists to avoid stopping at an unnatural position. (For example, without this feature the ego would stop unnaturally far away from the stop line of a crosswalk that pedestrians are crossing.)

minimum longitudinal margin

The module searches for the obstacle pointcloud within the detection area. When a pointcloud is found, the Adaptive Cruise Controller module starts to work. Only when the Adaptive Cruise Controller module does not insert a target velocity is the stop point inserted into the trajectory. The stop point is the point with zero velocity.

"},{"location":"planning/obstacle_stop_planner/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g. 0.5 meters) to stop once it starts moving, due to poor vehicle control performance, it will overshoot a stopping position that should be strictly observed when it starts moving to approach a nearby stop point (e.g. 0.3 meters away).

This module has the parameter hold_stop_margin_distance in order to prevent such redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters from the module's stop point, the module judges that the vehicle has already stopped for that stop point and plans to keep the vehicle stopped at the current position, even if it stopped due to other factors.
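
A hypothetical sketch of this judgment (names are illustrative):

bool shouldHoldCurrentStopPosition(\n  const bool is_vehicle_stopped, const double dist_to_module_stop_point,\n  const double hold_stop_margin_distance)\n{\n  // already stopped close enough to the module's stop point:\n  // keep stopping here instead of creeping forward to the nominal stop point\n  return is_vehicle_stopped && dist_to_module_stop_point < hold_stop_margin_distance;\n}\n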

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/obstacle_stop_planner/#parameters","title":"Parameters","text":""},{"location":"planning/obstacle_stop_planner/#stop-position","title":"Stop position","text":"Parameter Type Description max_longitudinal_margin double margin between obstacle and the ego's front [m] max_longitudinal_margin_behind_goal double margin between obstacle and the ego's front when the stop point is behind the goal [m] min_longitudinal_margin double if any obstacle exists within max_longitudinal_margin, this module sets the stop margin to min_longitudinal_margin [m] hold_stop_margin_distance double parameter for restart prevention (see the section above) [m]"},{"location":"planning/obstacle_stop_planner/#obstacle-detection-area","title":"Obstacle detection area","text":"Parameter Type Description lateral_margin double lateral margin from the vehicle footprint for collision obstacle detection area [m] step_length double step length for pointcloud search range [m] enable_stop_behind_goal_for_obstacle bool enable extending the trajectory beyond the goal lane for obstacle detection"},{"location":"planning/obstacle_stop_planner/#flowchart","title":"Flowchart","text":""},{"location":"planning/obstacle_stop_planner/#slow-down-planner","title":"Slow Down Planner","text":""},{"location":"planning/obstacle_stop_planner/#role_1","title":"Role","text":"

This module inserts a slow down section before the obstacle, with a forward margin and a backward margin. The forward margin is the sum of baselink_to_front and longitudinal_forward_margin, and the backward margin is the sum of baselink_to_front and longitudinal_backward_margin. The ego keeps the slow down velocity within the slow down section. The velocity is calculated by the following equation.

v_{target} = v_{min} + \\frac{l_{ld} - l_{vw}/2}{l_{margin}} (v_{max} - v_{min} )

The above equation means that the smaller the lateral deviation of the pointcloud, the lower the velocity of the slow down section.
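
The equation translates directly into code (l_{ld} is the lateral deviation of the pointcloud, l_{vw} the vehicle width, and l_{margin} the lateral margin; the function name is illustrative):

double calcSlowDownVelocity(\n  const double lateral_deviation, const double vehicle_width, const double lateral_margin,\n  const double min_slow_down_velocity, const double max_slow_down_velocity)\n{\n  const double ratio = (lateral_deviation - vehicle_width / 2.0) / lateral_margin;\n  return min_slow_down_velocity + ratio * (max_slow_down_velocity - min_slow_down_velocity);\n}\n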

parameters for slow down planner

target for slow down planner"},{"location":"planning/obstacle_stop_planner/#parameters_1","title":"Parameters","text":""},{"location":"planning/obstacle_stop_planner/#slow-down-section","title":"Slow down section","text":"Parameter Type Description longitudinal_forward_margin double margin between obstacle and the ego's front [m] longitudinal_backward_margin double margin between obstacle and the ego's rear [m]"},{"location":"planning/obstacle_stop_planner/#obstacle-detection-area_1","title":"Obstacle detection area","text":"Parameter Type Description lateral_margin double lateral margin from the vehicle footprint for slow down obstacle detection area [m]"},{"location":"planning/obstacle_stop_planner/#slow-down-target-velocity","title":"Slow down target velocity","text":"Parameter Type Description max_slow_down_velocity double max slow down velocity [m/s] min_slow_down_velocity double min slow down velocity [m/s]"},{"location":"planning/obstacle_stop_planner/#flowchart_1","title":"Flowchart","text":""},{"location":"planning/obstacle_stop_planner/#adaptive-cruise-controller","title":"Adaptive Cruise Controller","text":""},{"location":"planning/obstacle_stop_planner/#role_2","title":"Role","text":"

The Adaptive Cruise Controller module embeds a maximum velocity in the trajectory when there is a dynamic point cloud on the trajectory. The value of the maximum velocity depends on the ego velocity, the velocity of the point cloud (= the velocity of the front car), and the distance to the point cloud (= the distance to the front car).

Parameter Type Description adaptive_cruise_control.use_object_to_estimate_vel bool use dynamic objects for estimating object velocity or not adaptive_cruise_control.use_pcl_to_estimate_vel bool use raw pointclouds for estimating object velocity or not adaptive_cruise_control.consider_obj_velocity bool consider forward vehicle velocity to calculate target velocity in adaptive cruise or not adaptive_cruise_control.obstacle_velocity_thresh_to_start_acc double start adaptive cruise control when the velocity of the forward obstacle exceeds this value [m/s] adaptive_cruise_control.obstacle_velocity_thresh_to_stop_acc double stop acc when the velocity of the forward obstacle falls below this value [m/s] adaptive_cruise_control.emergency_stop_acceleration double supposed minimum acceleration (deceleration) in emergency stop [m/ss] adaptive_cruise_control.emergency_stop_idling_time double supposed idling time to start emergency stop [s] adaptive_cruise_control.min_dist_stop double minimum distance of emergency stop [m] adaptive_cruise_control.obstacle_emergency_stop_acceleration double supposed minimum acceleration (deceleration) in emergency stop [m/ss] adaptive_cruise_control.max_standard_acceleration double supposed maximum acceleration in active cruise control [m/ss] adaptive_cruise_control.min_standard_acceleration double supposed minimum acceleration (deceleration) in active cruise control [m/ss] adaptive_cruise_control.standard_idling_time double supposed idling time to react to the object in active cruise control [s] adaptive_cruise_control.min_dist_standard double minimum distance in active cruise control [m] adaptive_cruise_control.obstacle_min_standard_acceleration double supposed minimum acceleration of forward obstacle [m/ss] adaptive_cruise_control.margin_rate_to_change_vel double rate of margin distance to insert target velocity [-] adaptive_cruise_control.use_time_compensation_to_calc_distance bool use time-compensation to calculate distance to forward vehicle adaptive_cruise_control.p_coefficient_positive double coefficient P in PID control (used when target_dist - current_dist >= 0) [-] adaptive_cruise_control.p_coefficient_negative double coefficient P in PID control (used when target_dist - current_dist < 0) [-] adaptive_cruise_control.d_coefficient_positive double coefficient D in PID control (used when delta_dist >= 0) [-] adaptive_cruise_control.d_coefficient_negative double coefficient D in PID control (used when delta_dist < 0) [-] adaptive_cruise_control.object_polygon_length_margin double The distance to extend the polygon length of the object in pointcloud-object matching [m] adaptive_cruise_control.object_polygon_width_margin double The distance to extend the polygon width of the object in pointcloud-object matching [m] adaptive_cruise_control.valid_estimated_vel_diff_time double Maximum time difference treated as continuous points in speed estimation using a point cloud [s] adaptive_cruise_control.valid_vel_que_time double Time width of information used for speed estimation using a point cloud [s] adaptive_cruise_control.valid_estimated_vel_max double Maximum value of valid speed estimation results in speed estimation using a point cloud [m/s] adaptive_cruise_control.valid_estimated_vel_min double Minimum value of valid speed estimation results in speed estimation using a point cloud [m/s] adaptive_cruise_control.thresh_vel_to_stop double Embed a stop line if the maximum speed calculated by ACC is lower than this speed [m/s]
adaptive_cruise_control.lowpass_gain_of_upper_velocity double Lowpass gain of the target velocity adaptive_cruise_control.use_rough_velocity_estimation bool Use the rough estimated velocity if velocity estimation fails adaptive_cruise_control.rough_velocity_rate double In the rough velocity estimation, the velocity of the front car is estimated as the ego's current velocity multiplied by this value"},{"location":"planning/obstacle_stop_planner/#flowchart_2","title":"Flowchart","text":"

(*1) The target vehicle point is calculated as the closest obstacle pointcloud point from the ego along the trajectory.

(*2) The sources of velocity estimation can be changed by the following ROS parameters.

This module works only when the target point is found in the detection area of the Obstacle stop planner module.

The first process of this module is to estimate the velocity of the target vehicle point. The velocity estimation uses the velocity information of dynamic objects, or the travel distance of the target vehicle point from the previous step. The dynamic object information is primary, and the travel-distance estimation is used as a backup in case of perception failure. If the target vehicle point is geometrically contained in the bounding box of a dynamic object, the velocity of the dynamic object is used as the target point velocity. Otherwise, the target point velocity is calculated from the travel distance of the target point since the previous step; that is, (current_position - previous_position) / dt. Note that this travel-distance-based estimation fails when the target point is detected for the first time (this mainly happens in cut-in situations). To improve the stability of the estimation, the median of the calculation results over several steps is used.

If the calculated velocity is within the threshold range, it is used as the target point velocity.
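
A minimal sketch of the travel-distance-based estimation with the median over several steps (the buffer size of 5 is an illustrative choice, not the actual value):

#include <algorithm>\n#include <deque>\n#include <vector>\n\ndouble estimateTargetPointVelocity(\n  const double current_position, const double previous_position, const double dt,\n  std::deque<double> & velocity_history)\n{\n  const double raw_velocity = (current_position - previous_position) / dt;\n  velocity_history.push_back(raw_velocity);\n  if (velocity_history.size() > 5) {\n    velocity_history.pop_front();  // keep only the results of the last few steps\n  }\n  std::vector<double> sorted(velocity_history.begin(), velocity_history.end());\n  std::sort(sorted.begin(), sorted.end());\n  return sorted[sorted.size() / 2];  // the median suppresses outliers\n}\n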

Only when the estimation succeeds and the estimated velocity exceeds the value of obstacle_velocity_thresh_to_*_acc is the distance from the ego position to the pointcloud calculated. To prevent chattering in the mode transition, obstacle_velocity_thresh_to_start_acc is used as the threshold to start adaptive cruise, and obstacle_velocity_thresh_to_stop_acc is used as the threshold to stop adaptive cruise. When the calculated distance value exceeds the emergency distance d\\_{emergency} calculated from the emergency_stop parameters, the target velocity to insert is calculated.

The emergency distance d\\_{emergency} is calculated as follows.

d_{emergency} = d_{margin_{emergency}} + t_{idling_{emergency}} \\cdot v_{ego} + (-\\frac{v_{ego}^2}{2 \\cdot a_{ego_{emergency}}}) - (-\\frac{v_{obj}^2}{2 \\cdot a_{obj_{emergency}}})

The target velocity is determined so as to keep the distance from the ego vehicle to the obstacle pointcloud at the standard distance d\\_{standard} calculated as follows. Therefore, if the distance to the obstacle pointcloud is longer than the standard distance, the target velocity becomes higher than the current velocity, and vice versa. A PID controller is used to keep the distance.

d_{standard} = d_{margin_{standard}} + t_{idling_{standard}} \\cdot v_{ego} + (-\\frac{v_{ego}^2}{2 \\cdot a_{ego_{standard}}}) - (-\\frac{v_{obj}^2}{2 \\cdot a_{obj_{standard}}})
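
Both distances share the same form, so a single sketch covers them; pass either the emergency or the standard parameter set from the table above (the accelerations are the signed decelerations):

double calcAccTargetDistance(\n  const double margin, const double idling_time, const double v_ego, const double v_obj,\n  const double a_ego, const double a_obj)\n{\n  // direct translation of the d_{emergency} / d_{standard} equations above\n  return margin + idling_time * v_ego - v_ego * v_ego / (2.0 * a_ego) +\n    v_obj * v_obj / (2.0 * a_obj);\n}\n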

If the target velocity exceeds the value of thresh_vel_to_stop, the target velocity is embedded in the trajectory.

"},{"location":"planning/obstacle_stop_planner/#known-limits","title":"Known Limits","text":" "},{"location":"planning/obstacle_velocity_limiter/","title":"Obstacle Velocity Limiter","text":""},{"location":"planning/obstacle_velocity_limiter/#purpose","title":"Purpose","text":"

This node limits the velocity when driving in the direction of an obstacle. For example, it allows reducing the velocity when driving close to a guard rail in a curve.

Without this node With this node"},{"location":"planning/obstacle_velocity_limiter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Using the parameter min_ttc (minimum time to collision), the node sets velocity limits such that no collision with an obstacle would occur, even without new control inputs for a duration of min_ttc.

To achieve this, the motion of the ego vehicle is simulated forward in time at each point of the trajectory to create a corresponding footprint. If the footprint collides with some obstacle, the velocity at the trajectory point is reduced such that the new simulated footprint does not have any collision.

"},{"location":"planning/obstacle_velocity_limiter/#simulated-motion-footprint-and-collision-distance","title":"Simulated Motion, Footprint, and Collision Distance","text":"

The motion of the ego vehicle is simulated at each trajectory point using the heading, velocity, and steering defined at the point. Footprints are then constructed from these simulations and checked for collision. If a collision is found, the distance from the trajectory point is used to calculate the adjusted velocity that would produce a collision-free footprint. Parameter simulation.distance_method allows switching between an exact distance calculation and a less expensive approximation using a simple Euclidean distance.

Two models can be selected with parameter simulation.model for simulating the motion of the vehicle: a simple particle model and a more complicated bicycle model.

"},{"location":"planning/obstacle_velocity_limiter/#particle-model","title":"Particle Model","text":"

The particle model uses the constant heading and velocity of the vehicle at a trajectory point to simulate the future motion. The simulated forward motion corresponds to a straight line and the footprint to a rectangle.

"},{"location":"planning/obstacle_velocity_limiter/#footprint","title":"Footprint","text":"

The rectangle footprint is built from 2 lines parallel to the simulated forward motion and at a distance of half the vehicle width.
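
A hypothetical sketch of this construction, assuming the simulated duration is min_ttc (Pt is a minimal 2D point type introduced for the example):

#include <array>\n#include <cmath>\n\nstruct Pt { double x, y; };\n\nstd::array<Pt, 4> makeParticleFootprint(\n  const Pt & start, const double heading, const double velocity, const double min_ttc,\n  const double vehicle_width)\n{\n  const double length = velocity * min_ttc;  // straight simulated forward motion\n  const Pt end{start.x + length * std::cos(heading), start.y + length * std::sin(heading)};\n  const double ox = -std::sin(heading) * vehicle_width / 2.0;  // lateral offset vector\n  const double oy = std::cos(heading) * vehicle_width / 2.0;\n  // two lines parallel to the motion, at half the vehicle width on each side\n  return {Pt{start.x + ox, start.y + oy}, Pt{end.x + ox, end.y + oy},\n    Pt{end.x - ox, end.y - oy}, Pt{start.x - ox, start.y - oy}};\n}\n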

"},{"location":"planning/obstacle_velocity_limiter/#distance","title":"Distance","text":"

When a collision point is found within the footprint, the distance is calculated as described in the following figure.

"},{"location":"planning/obstacle_velocity_limiter/#bicycle-model","title":"Bicycle Model","text":"

The bicycle model uses the constant heading, velocity, and steering of the vehicle at a trajectory point to simulate the future motion. The simulated forward motion corresponds to an arc around the circle of curvature associated with the steering. Uncertainty in the steering can be introduced with the simulation.steering_offset parameter which will generate a range of motion from a left-most to a right-most steering. This results in 3 curved lines starting from the same trajectory point. A parameter simulation.nb_points is used to adjust the precision of these lines, with a minimum of 2 resulting in straight lines and higher values increasing the precision of the curves.

By default, the steering values contained in the trajectory message are used. Parameter trajectory_preprocessing.calculate_steering_angles allows recalculating these values when set to true.
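
A sketch of the constant-curvature forward simulation (reusing the Pt type from the particle-model sketch; the use of tan(steering)/wheel_base for the curvature and the exact arc parameterization are assumptions):

#include <cmath>\n#include <vector>\n\nstd::vector<Pt> simulateBicycleMotion(\n  const Pt & start, const double heading, const double velocity, const double steering,\n  const double wheel_base, const double duration, const int nb_points)  // nb_points >= 2\n{\n  const double curvature = std::tan(steering) / wheel_base;  // 1/R of the curvature circle\n  std::vector<Pt> points;\n  for (int i = 0; i < nb_points; ++i) {\n    const double s = velocity * duration * i / (nb_points - 1);  // traveled arc length\n    if (std::abs(curvature) < 1e-6) {\n      points.push_back({start.x + s * std::cos(heading), start.y + s * std::sin(heading)});\n    } else {\n      // exact integration along the arc for constant curvature\n      const double yaw = heading + curvature * s;\n      points.push_back({start.x + (std::sin(yaw) - std::sin(heading)) / curvature,\n        start.y + (std::cos(heading) - std::cos(yaw)) / curvature});\n    }\n  }\n  return points;\n}\n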

"},{"location":"planning/obstacle_velocity_limiter/#footprint_1","title":"Footprint","text":"

The footprint of the bicycle model is created from lines parallel to the left and right simulated motion at a distance of half the vehicle width. In addition, the two points on the left and right of the end point of the central simulated motion are used to complete the polygon.

"},{"location":"planning/obstacle_velocity_limiter/#distance_1","title":"Distance","text":"

The distance to a collision point is calculated by finding the curvature circle passing through the trajectory point and the collision point.

"},{"location":"planning/obstacle_velocity_limiter/#obstacle-detection","title":"Obstacle Detection","text":"

Obstacles are represented as points or linestrings (i.e., sequences of points) around the obstacles and are constructed from an occupancy grid, a pointcloud, or the lanelet map. The lanelet map is always checked for obstacles, but the other source is selected using parameter obstacles.dynamic_source.

To efficiently find obstacles intersecting with a footprint, they are stored in an R-tree. Two trees are used: one for the obstacle points, and one for the obstacle linestrings (which are decomposed into segments to simplify the R-tree).
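
A minimal, self-contained example of this pattern with Boost.Geometry's R-tree (the segment data is illustrative; the actual node code may differ):

#include <iterator>\n#include <vector>\n#include <boost/geometry.hpp>\n#include <boost/geometry/geometries/geometries.hpp>\n#include <boost/geometry/geometries/point_xy.hpp>\n#include <boost/geometry/index/rtree.hpp>\n\nnamespace bg = boost::geometry;\nnamespace bgi = boost::geometry::index;\nusing Point = bg::model::d2::point_xy<double>;\nusing Segment = bg::model::segment<Point>;\nusing Box = bg::model::box<Point>;\n\nint main()\n{\n  // obstacle linestrings decomposed into segments, bulk-loaded into the tree\n  std::vector<Segment> segments{Segment{{0.0, 0.0}, {1.0, 0.0}}, Segment{{1.0, 0.0}, {1.0, 1.0}}};\n  const bgi::rtree<Segment, bgi::quadratic<16>> tree(segments.begin(), segments.end());\n\n  // query all stored segments intersecting a footprint's bounding box\n  const Box footprint_bbox{{0.5, -0.5}, {2.0, 0.5}};\n  std::vector<Segment> hits;\n  tree.query(bgi::intersects(footprint_bbox), std::back_inserter(hits));\n  return static_cast<int>(hits.size());\n}\n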

"},{"location":"planning/obstacle_velocity_limiter/#obstacle-masks","title":"Obstacle masks","text":""},{"location":"planning/obstacle_velocity_limiter/#dynamic-obstacles","title":"Dynamic obstacles","text":"

Moving obstacles such as other cars should not be considered by this module. These obstacles are detected by the perception modules and represented as polygons. Obstacles inside these polygons are ignored.

Only dynamic obstacles with a velocity above parameter obstacles.dynamic_obstacles_min_vel are removed.

To deal with delays and precision errors, the polygons can be enlarged with parameter obstacles.dynamic_obstacles_buffer.

"},{"location":"planning/obstacle_velocity_limiter/#obstacles-outside-of-the-safety-envelope","title":"Obstacles outside of the safety envelope","text":"

Obstacles that are not inside any forward simulated footprint are ignored if parameter obstacles.filter_envelope is set to true. The safety envelope polygon is built from all the footprints and used as a positive mask on the occupancy grid or pointcloud.

This option can reduce the total number of obstacles, which reduces the cost of collision detection. However, the cost of masking the envelope is usually too high to be worthwhile.

"},{"location":"planning/obstacle_velocity_limiter/#obstacles-on-the-ego-path","title":"Obstacles on the ego path","text":"

If parameter obstacles.ignore_obstacles_on_path is set to true, a polygon mask is built from the trajectory and the vehicle dimension. Any obstacle in this polygon is ignored.

The size of the polygon can be increased using parameter obstacles.ignore_extra_distance which is added to the vehicle lateral offset.

This option is a bit expensive and should only be used in case of noisy dynamic obstacles where obstacles are wrongly detected on the ego path, causing unwanted velocity limits.

"},{"location":"planning/obstacle_velocity_limiter/#lanelet-map","title":"Lanelet Map","text":"

Information about static obstacles can be stored in the lanelet map using the value of the type tag of linestrings. If any linestring has a type with one of the values from parameter obstacles.static_map_tags, then it will be used as an obstacle.

Obstacles from the lanelet map are not impacted by the masks.

"},{"location":"planning/obstacle_velocity_limiter/#occupancy-grid","title":"Occupancy Grid","text":"

Masking is performed by iterating through the cells inside each polygon mask using the grid_map_utils::PolygonIterator function. A threshold is then applied to keep only cells with an occupancy value above parameter obstacles.occupancy_grid_threshold. Finally, the grid is converted to an image and obstacle linestrings are extracted using the OpenCV function findContours.

"},{"location":"planning/obstacle_velocity_limiter/#pointcloud","title":"Pointcloud","text":"

Masking is performed using the pcl::CropHull function. Points from the pointcloud are then directly used as obstacles.

"},{"location":"planning/obstacle_velocity_limiter/#velocity-adjustment","title":"Velocity Adjustment","text":"

If a collision is found, the velocity at the trajectory point is adjusted such that the resulting footprint would no longer collide with an obstacle: velocity = \\frac{dist\\_to\\_collision}{min\\_ttc}

To prevent sudden deceleration of the ego vehicle, the parameter max_deceleration limits the deceleration relative to the current ego velocity. For a trajectory point occurring at a duration t in the future (calculated from the original velocity profile), the adjusted velocity cannot be set lower than v_{current} - t * max\\_deceleration.

Furthermore, a parameter min_adjusted_velocity provides a lower bound on the modified velocity.
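
Putting the three rules together, a sketch of the final adjustment could be (names are illustrative; t is the duration at which the trajectory point occurs, from the original velocity profile):

#include <algorithm>\n\ndouble adjustVelocity(\n  const double original_velocity, const double dist_to_collision, const double min_ttc,\n  const double current_velocity, const double t, const double max_deceleration,\n  const double min_adjusted_velocity)\n{\n  const double ttc_limit = dist_to_collision / min_ttc;  // collision-free velocity limit\n  const double decel_bound = current_velocity - t * max_deceleration;  // deceleration bound\n  // never increase the velocity; never go below the deceleration and minimum bounds\n  return std::min(original_velocity, std::max({ttc_limit, decel_bound, min_adjusted_velocity}));\n}\n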

"},{"location":"planning/obstacle_velocity_limiter/#trajectory-preprocessing","title":"Trajectory preprocessing","text":"

The node only modifies part of the input trajectory, starting from the current ego position. Parameter trajectory_preprocessing.start_distance is used to adjust how far ahead of the ego position the velocities will start being modified. Parameters trajectory_preprocessing.max_length and trajectory_preprocessing.max_duration are used to control how much of the trajectory will see its velocity adjusted.

To reduce the computation cost at the expense of precision, the trajectory can be downsampled using parameter trajectory_preprocessing.downsample_factor. For example, a value of 1 means all trajectory points will be evaluated, while a value of 10 means only 1/10th of the points will be evaluated.

"},{"location":"planning/obstacle_velocity_limiter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/obstacle_velocity_limiter/#inputs","title":"Inputs","text":"Name Type Description ~/input/trajectory autoware_auto_planning_msgs/Trajectory Reference trajectory ~/input/occupancy_grid nav_msgs/OccupancyGrid Occupancy grid with obstacle information ~/input/obstacle_pointcloud sensor_msgs/PointCloud2 Pointcloud containing only obstacle points ~/input/dynamic_obstacles autoware_auto_perception_msgs/PredictedObjects Dynamic objects ~/input/odometry nav_msgs/Odometry Odometry used to retrieve the current ego velocity ~/input/map autoware_auto_mapping_msgs/HADMapBin Vector map used to retrieve static obstacles"},{"location":"planning/obstacle_velocity_limiter/#outputs","title":"Outputs","text":"Name Type Description ~/output/trajectory autoware_auto_planning_msgs/Trajectory Trajectory with adjusted velocities ~/output/debug_markers visualization_msgs/MarkerArray Debug markers (envelopes, obstacle polygons) ~/output/runtime_microseconds std_msgs/Int64 Time taken to calculate the trajectory (in microseconds)"},{"location":"planning/obstacle_velocity_limiter/#parameters","title":"Parameters","text":"Name Type Description min_ttc float [s] required minimum time with no collision at each point of the trajectory assuming constant heading and velocity. distance_buffer float [m] required distance buffer with the obstacles. min_adjusted_velocity float [m/s] minimum adjusted velocity this node can set. max_deceleration float [m/s\u00b2] maximum deceleration an adjusted velocity can cause. trajectory_preprocessing.start_distance float [m] controls from which part of the trajectory (relative to the current ego pose) the velocity is adjusted. trajectory_preprocessing.max_length float [m] controls the maximum length (starting from the start_distance) where the velocity is adjusted. trajectory_preprocessing.max_distance float [s] controls the maximum duration (measured from the start_distance) where the velocity is adjusted. trajectory_preprocessing.downsample_factor int trajectory downsampling factor to allow tradeoff between precision and performance. trajectory_preprocessing.calculate_steering_angle bool if true, the steering angles of the trajectory message are not used but are recalculated. simulation.model string model to use for forward simulation. Either \"particle\" or \"bicycle\". simulation.distance_method string method to use for calculating distance to collision. Either \"exact\" or \"approximation\". simulation.steering_offset float offset around the steering used by the bicycle model. simulation.nb_points int number of points used to simulate motion with the bicycle model. obstacles.dynamic_source string source of dynamic obstacle used for collision checking. Can be \"occupancy_grid\", \"point_cloud\", or \"static_only\" (no dynamic obstacle). obstacles.occupancy_grid_threshold int value in the occupancy grid above which a cell is considered an obstacle. obstacles.dynamic_obstacles_buffer float buffer around dynamic obstacles used when masking an obstacle in order to prevent noise. obstacles.dynamic_obstacles_min_vel float velocity above which to mask a dynamic obstacle. obstacles.static_map_tags string list linestring of the lanelet map with this tags are used as obstacles. 
obstacles.filter_envelope bool whether to use the safety envelope to filter the dynamic obstacles source."},{"location":"planning/obstacle_velocity_limiter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The velocity profile produced by this node is not meant to be a realistic velocity profile and can contain sudden jumps of velocity with no regard for acceleration and jerk. This velocity profile is meant to be used as an upper bound on the actual velocity of the vehicle.

"},{"location":"planning/obstacle_velocity_limiter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":"

The critical case for this node is when an obstacle is falsely detected very close to the trajectory, such that the corresponding velocity suddenly becomes very low. This can cause sudden braking, and two mechanisms can be used to mitigate these errors.

Parameter min_adjusted_velocity allows setting a minimum for the adjusted velocity, preventing the node from slowing down the vehicle too much. Parameter max_deceleration allows setting a maximum deceleration (relative to the current ego velocity) that the adjusted velocity can incur.

"},{"location":"planning/obstacle_velocity_limiter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/obstacle_velocity_limiter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"planning/obstacle_velocity_limiter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/planning_debug_tools/Readme/","title":"Planning Debug Tools","text":"

This package contains several planning-related debug tools.

"},{"location":"planning/planning_debug_tools/Readme/#trajectory-analyzer","title":"Trajectory analyzer","text":"

The trajectory_analyzer visualizes information (speed, curvature, yaw, etc.) along the trajectory. This feature is helpful for purposes such as \"investigating the reason why the vehicle decelerates here\". It employs the OSS PlotJuggler.

"},{"location":"planning/planning_debug_tools/Readme/#stop-reason-visualizer","title":"Stop reason visualizer","text":"

This is used to visualize the stop factor and reason. See the details.

"},{"location":"planning/planning_debug_tools/Readme/#how-to-use","title":"How to use","text":"

Please launch the analyzer node:

ros2 launch planning_debug_tools trajectory_analyzer.launch.xml\n

and visualize the analyzed data in PlotJuggler as described below.

"},{"location":"planning/planning_debug_tools/Readme/#setup-plotjuggler","title":"setup PlotJuggler","text":"

For the first time, please add the following code to the reactive script and save it as in the picture below! (We are looking for a way to automatically load the configuration file...)

You can customize what you plot by editing this code.

in Global code

behavior_path = '/planning/scenario_planning/lane_driving/behavior_planning/path_with_lane_id/debug_info'\nbehavior_velocity = '/planning/scenario_planning/lane_driving/behavior_planning/path/debug_info'\nmotion_avoid = '/planning/scenario_planning/lane_driving/motion_planning/obstacle_avoidance_planner/trajectory/debug_info'\nmotion_smoother_latacc = '/planning/scenario_planning/motion_velocity_smoother/debug/trajectory_lateral_acc_filtered/debug_info'\nmotion_smoother = '/planning/scenario_planning/trajectory/debug_info'\n

in function(tracker_time)

PlotCurvatureOverArclength('k_behavior_path', behavior_path, tracker_time)\nPlotCurvatureOverArclength('k_behavior_velocity', behavior_velocity, tracker_time)\nPlotCurvatureOverArclength('k_motion_avoid', motion_avoid, tracker_time)\nPlotCurvatureOverArclength('k_motion_smoother', motion_smoother, tracker_time)\n\nPlotVelocityOverArclength('v_behavior_path', behavior_path, tracker_time)\nPlotVelocityOverArclength('v_behavior_velocity', behavior_velocity, tracker_time)\nPlotVelocityOverArclength('v_motion_avoid', motion_avoid, tracker_time)\nPlotVelocityOverArclength('v_motion_smoother_latacc', motion_smoother_latacc, tracker_time)\nPlotVelocityOverArclength('v_motion_smoother', motion_smoother, tracker_time)\n\nPlotAccelerationOverArclength('a_behavior_path', behavior_path, tracker_time)\nPlotAccelerationOverArclength('a_behavior_velocity', behavior_velocity, tracker_time)\nPlotAccelerationOverArclength('a_motion_avoid', motion_avoid, tracker_time)\nPlotAccelerationOverArclength('a_motion_smoother_latacc', motion_smoother_latacc, tracker_time)\nPlotAccelerationOverArclength('a_motion_smoother', motion_smoother, tracker_time)\n\nPlotYawOverArclength('yaw_behavior_path', behavior_path, tracker_time)\nPlotYawOverArclength('yaw_behavior_velocity', behavior_velocity, tracker_time)\nPlotYawOverArclength('yaw_motion_avoid', motion_avoid, tracker_time)\nPlotYawOverArclength('yaw_motion_smoother_latacc', motion_smoother_latacc, tracker_time)\nPlotYawOverArclength('yaw_motion_smoother', motion_smoother, tracker_time)\n\nPlotCurrentVelocity('localization_kinematic_state', '/localization/kinematic_state', tracker_time)\n

in Function Library

function PlotValue(name, path, timestamp, value)\n  new_series = ScatterXY.new(name)\n  index = 0\n  while(true) do\n    series_k = TimeseriesView.find( string.format( \"%s/\"..value..\".%d\", path, index) )\n    series_s = TimeseriesView.find( string.format( \"%s/arclength.%d\", path, index) )\n    series_size = TimeseriesView.find( string.format( \"%s/size\", path) )\n\n    if series_k == nil or series_s == nil then break end\n\n    k = series_k:atTime(timestamp)\n    s = series_s:atTime(timestamp)\n    size = series_size:atTime(timestamp)\n\n    if index >= size then break end\n\n    new_series:push_back(s,k)\n    index = index+1\n  end\nend\n\nfunction PlotCurvatureOverArclength(name, path, timestamp)\n  PlotValue(name, path, timestamp,\"curvature\")\nend\n\nfunction PlotVelocityOverArclength(name, path, timestamp)\n  PlotValue(name, path, timestamp,\"velocity\")\nend\n\nfunction PlotAccelerationOverArclength(name, path, timestamp)\n  PlotValue(name, path, timestamp,\"acceleration\")\nend\n\nfunction PlotYawOverArclength(name, path, timestamp)\n  PlotValue(name, path, timestamp,\"yaw\")\nend\n\nfunction PlotCurrentVelocity(name, kinematics_name, timestamp)\n  new_series = ScatterXY.new(name)\n  series_v = TimeseriesView.find( string.format( \"%s/twist/twist/linear/x\", kinematics_name))\n  if series_v == nil then\n    print(\"error\")\n    return\n  end\n  v = series_v:atTime(timestamp)\n  new_series:push_back(0.0, v)\nend\n

Then, run PlotJuggler.

"},{"location":"planning/planning_debug_tools/Readme/#how-to-customize-the-plot","title":"How to customize the plot","text":"

Add the Path/PathWithLaneIds/Trajectory topics you want to plot in trajectory_analyzer.launch.xml; the analyzed topics for these messages will then be published with the TrajectoryDebugInfo.msg type. You can then visualize the data by editing the reactive script in PlotJuggler.

"},{"location":"planning/planning_debug_tools/Readme/#requirements","title":"Requirements","text":"

The version of PlotJuggler must be > 3.5.0.

"},{"location":"planning/planning_debug_tools/Readme/#closest-velocity-checker","title":"Closest velocity checker","text":"

This node prints the velocity information produced by the planning/control modules to a terminal. For trajectories calculated by planning modules, the target velocity of the trajectory point closest to the ego vehicle is printed. For control commands calculated by control modules, the target velocity and acceleration are printed directly. This feature is helpful for purposes such as \"investigating the reason why the vehicle does not move\".

You can launch it by:

ros2 run planning_debug_tools closest_velocity_checker.py\n

"},{"location":"planning/planning_debug_tools/Readme/#trajectory-visualizer","title":"Trajectory visualizer","text":"

This is the old version of the trajectory analyzer. It is written in Python and is more flexible, but very slow.

"},{"location":"planning/planning_debug_tools/Readme/#for-other-use-case-experimental","title":"For other use case (experimental)","text":"

To see the behavior velocity planner's internal path with lane id, add the example values below to the behavior velocity analyzer and set is_publish_debug_path: true.

crosswalk ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/crosswalk/debug_info'\nintersection ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/intersection/debug_info'\ntraffic_light ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/traffic_light/debug_info'\nmerge_from_private ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/merge_from_private/debug_info'\nocclusion_spot ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/occlusion_spot/debug_info'\n
PlotVelocityOverArclength('v_crosswalk', crosswalk, tracker_time)\nPlotVelocityOverArclength('v_intersection', intersection, tracker_time)\nPlotVelocityOverArclength('v_merge_from_private', merge_from_private, tracker_time)\nPlotVelocityOverArclength('v_traffic_light', traffic_light, tracker_time)\nPlotVelocityOverArclength('v_occlusion', occlusion_spot, tracker_time)\n\nPlotYawOverArclength('yaw_crosswalk', crosswalk, tracker_time)\nPlotYawOverArclength('yaw_intersection', intersection, tracker_time)\nPlotYawOverArclength('yaw_merge_from_private', merge_from_private, tracker_time)\nPlotYawOverArclength('yaw_traffic_light', traffic_light, tracker_time)\nPlotYawOverArclength('yaw_occlusion', occlusion_spot, tracker_time)\n\nPlotCurrentVelocity('localization_kinematic_state', '/localization/kinematic_state', tracker_time)\n
"},{"location":"planning/planning_debug_tools/doc-stop-reason-visualizer/","title":"Doc stop reason visualizer","text":""},{"location":"planning/planning_debug_tools/doc-stop-reason-visualizer/#stop_reason_visualizer","title":"stop_reason_visualizer","text":"

This module visualizes the stop factor quickly without selecting the correct debug markers. It is supposed to be used with virtual wall markers like below.

"},{"location":"planning/planning_debug_tools/doc-stop-reason-visualizer/#how-to-use","title":"How to use","text":"

Run this node.

ros2 run planning_debug_tools stop_reason_visualizer_exe\n

Add stop reason debug marker from rviz.

Note: the ros2 process can sometimes only be terminated with killall stop_reason_visualizer_exe.

Reference

"},{"location":"planning/planning_validator/","title":"Planning Validator","text":"

The planning_validator is a module that checks the validity of a trajectory before it is published. The status of the validation can be viewed in the /diagnostics and /validation_status topics. When an invalid trajectory is detected, the planning_validator will process the trajectory following the selected option: \"0. publish the trajectory as it is\", \"1. stop publishing the trajectory\", \"2. publish the last validated trajectory\".
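
A hypothetical sketch of the three handling options (Trajectory stands in for autoware_auto_planning_msgs/Trajectory; the function and its std::optional-based interface are illustrative):

#include <optional>\n\ntemplate <class Trajectory>\nstd::optional<Trajectory> selectTrajectoryToPublish(\n  const int invalid_trajectory_handling_type, const bool is_valid,\n  const Trajectory & current, const Trajectory & last_valid)\n{\n  if (is_valid) {\n    return current;\n  }\n  switch (invalid_trajectory_handling_type) {\n    case 0: return current;       // 0: publish the trajectory as it is\n    case 1: return std::nullopt;  // 1: stop publishing the trajectory\n    case 2: return last_valid;    // 2: publish the last validated trajectory\n    default: return std::nullopt;\n  }\n}\n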

"},{"location":"planning/planning_validator/#supported-features","title":"Supported features","text":"

The following features are supported for trajectory validation and can have thresholds set by parameters:

The following features are to be implemented.

"},{"location":"planning/planning_validator/#inputsoutputs","title":"Inputs/Outputs","text":""},{"location":"planning/planning_validator/#inputs","title":"Inputs","text":"

The planning_validator takes in the following inputs:

Name Type Description ~/input/kinematics nav_msgs/Odometry ego pose and twist ~/input/trajectory autoware_auto_planning_msgs/Trajectory target trajectory to be validated in this node"},{"location":"planning/planning_validator/#outputs","title":"Outputs","text":"

It outputs the following:

Name Type Description ~/output/trajectory autoware_auto_planning_msgs/Trajectory validated trajectory ~/output/validation_status planning_validator/PlanningValidatorStatus validator status to inform the reason why the trajectory is valid/invalid /diagnostics diagnostic_msgs/DiagnosticStatus diagnostics to report errors"},{"location":"planning/planning_validator/#parameters","title":"Parameters","text":"

The following parameters can be set for the planning_validator:

"},{"location":"planning/planning_validator/#system-parameters","title":"System parameters","text":"Name Type Description Default value invalid_trajectory_handling_type int set the operation when the invalid trajectory is detected. 0: publish the trajectory even if it is invalid, 1: stop publishing the trajectory, 2: publish the last validated trajectory. 0 publish_diag bool the Diag will be set to ERROR when the number of consecutive invalid trajectory exceeds this threshold. (For example, threshold = 1 means, even if the trajectory is invalid, the Diag will not be ERROR if the next trajectory is valid.) true diag_error_count_threshold int if true, diagnostics msg is published. true display_on_terminal bool show error msg on terminal true"},{"location":"planning/planning_validator/#algorithm-parameters","title":"Algorithm parameters","text":""},{"location":"planning/planning_validator/#thresholds","title":"Thresholds","text":"

The input trajectory is detected as invalid if any of the following metrics exceeds its threshold.

Name Type Description Default value thresholds.interval double invalid threshold of the distance of two neighboring trajectory points 100.0 thresholds.relative_angle double invalid threshold of the relative angle of two neighboring trajectory points 2.0 thresholds.curvature double invalid threshold of the curvature in each trajectory point 1.0 thresholds.lateral_acc double invalid threshold of the lateral acceleration in each trajectory point 9.8 thresholds.longitudinal_max_acc double invalid threshold of the maximum longitudinal acceleration in each trajectory point 9.8 thresholds.longitudinal_min_acc double invalid threshold of the minimum longitudinal deceleration in each trajectory point -9.8 thresholds.steering double invalid threshold of the steering angle in each trajectory point 1.414 thresholds.steering_rate double invalid threshold of the steering angle rate in each trajectory point 10.0 thresholds.velocity_deviation double invalid threshold of the velocity deviation between the ego velocity and the trajectory point closest to ego. 100.0 thresholds.distance_deviation double invalid threshold of the distance deviation between the ego position and the trajectory point closest to ego. 100.0"},{"location":"planning/route_handler/","title":"route handler","text":"

route_handler is a library for calculating the driving route on the lanelet map.

"},{"location":"planning/rtc_auto_mode_manager/","title":"RTC Auto Mode Manager","text":""},{"location":"planning/rtc_auto_mode_manager/#purpose","title":"Purpose","text":"

RTC Auto Mode Manager is a node that automatically approves cooperation requests from behavior planning modules.

"},{"location":"planning/rtc_auto_mode_manager/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/rtc_auto_mode_manager/#input","title":"Input","text":"Name Type Description /planning/enable_auto_mode/** tier4_rtc_msgs/srv/AutoMode Service to enable auto mode for the module"},{"location":"planning/rtc_auto_mode_manager/#output","title":"Output","text":"Name Type Description /planning/enable_auto_mode/internal/** tier4_rtc_msgs/srv/AutoMode Service to enable auto mode for the module"},{"location":"planning/rtc_auto_mode_manager/#parameters","title":"Parameters","text":"Name Type Description module_list List of string Module names managing in rtc_auto_mode_manager default_enable_list List of string Module names enabled auto mode at initialization"},{"location":"planning/rtc_auto_mode_manager/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/rtc_auto_mode_manager/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/rtc_auto_mode_manager/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/rtc_interface/","title":"RTC Interface","text":""},{"location":"planning/rtc_interface/#purpose","title":"Purpose","text":"

RTC Interface is an interface used to publish the decision status of behavior planning modules and to receive execution commands from outside the autonomous driving system.

"},{"location":"planning/rtc_interface/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/rtc_interface/#usage-example","title":"Usage example","text":"
// Generate instance (in this example, \"intersection\" is selected)\nrtc_interface::RTCInterface rtc_interface(node, \"intersection\");\n\n// Generate UUID\nconst unique_identifier_msgs::msg::UUID uuid = generateUUID(getModuleId());\n\n// Repeat while module is running\nwhile (...) {\n// Get safety status of the module corresponding to the module id\nconst bool safe = ...\n\n// Get distance to the object corresponding to the module id\nconst double start_distance = ...\nconst double finish_distance = ...\n\n// Get time stamp\nconst rclcpp::Time stamp = ...\n\n// Update status\nrtc_interface.updateCooperateStatus(uuid, safe, start_distance, finish_distance, stamp);\n\nif (rtc_interface.isActivated(uuid)) {\n// Execute planning\n} else {\n// Stop planning\n}\n// Get time stamp\nconst rclcpp::Time stamp = ...\n\n// Publish status topic\nrtc_interface.publishCooperateStatus(stamp);\n}\n\n// Remove the status from array\nrtc_interface.removeCooperateStatus(uuid);\n
"},{"location":"planning/rtc_interface/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/rtc_interface/#rtcinterface-constructor","title":"RTCInterface (Constructor)","text":"
rtc_interface::RTCInterface(rclcpp::Node & node, const std::string & name);\n
"},{"location":"planning/rtc_interface/#description","title":"Description","text":"

A constructor for rtc_interface::RTCInterface.

"},{"location":"planning/rtc_interface/#input","title":"Input","text":""},{"location":"planning/rtc_interface/#output","title":"Output","text":"

An instance of RTCInterface

"},{"location":"planning/rtc_interface/#publishcooperatestatus","title":"publishCooperateStatus","text":"
rtc_interface::publishCooperateStatus(const rclcpp::Time & stamp)\n
"},{"location":"planning/rtc_interface/#description_1","title":"Description","text":"

Publish registered cooperate status.

"},{"location":"planning/rtc_interface/#input_1","title":"Input","text":""},{"location":"planning/rtc_interface/#output_1","title":"Output","text":"

Nothing

"},{"location":"planning/rtc_interface/#updatecooperatestatus","title":"updateCooperateStatus","text":"
rtc_interface::updateCooperateStatus(const unique_identifier_msgs::msg::UUID & uuid, const bool safe, const double start_distance, const double finish_distance, const rclcpp::Time & stamp)\n
"},{"location":"planning/rtc_interface/#description_2","title":"Description","text":"

Update the cooperate status corresponding to uuid. If the cooperate status corresponding to uuid is not registered yet, a new cooperate status is added.

"},{"location":"planning/rtc_interface/#input_2","title":"Input","text":""},{"location":"planning/rtc_interface/#output_2","title":"Output","text":"

Nothing

"},{"location":"planning/rtc_interface/#removecooperatestatus","title":"removeCooperateStatus","text":"
rtc_interface::removeCooperateStatus(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/rtc_interface/#description_3","title":"Description","text":"

Remove the cooperate status corresponding to uuid from the registered statuses.

"},{"location":"planning/rtc_interface/#input_3","title":"Input","text":""},{"location":"planning/rtc_interface/#output_3","title":"Output","text":"

Nothing

"},{"location":"planning/rtc_interface/#clearcooperatestatus","title":"clearCooperateStatus","text":"
rtc_interface::clearCooperateStatus()\n
"},{"location":"planning/rtc_interface/#description_4","title":"Description","text":"

Remove all cooperate statuses.

"},{"location":"planning/rtc_interface/#input_4","title":"Input","text":"

Nothing

"},{"location":"planning/rtc_interface/#output_4","title":"Output","text":"

Nothing

"},{"location":"planning/rtc_interface/#isactivated","title":"isActivated","text":"
rtc_interface::isActivated(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/rtc_interface/#description_5","title":"Description","text":"

Return the received command status corresponding to uuid.

"},{"location":"planning/rtc_interface/#input_5","title":"Input","text":""},{"location":"planning/rtc_interface/#output_5","title":"Output","text":"

If auto mode is enabled, the return value is based on the safety status. Otherwise, return true if the received command is ACTIVATED, and false if not.
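
For illustration, here is a minimal sketch of this decision logic. This is a hypothetical implementation: findCooperateStatus() and the member names are assumptions, not the actual code.

// Hedged sketch of the isActivated() decision described above;\n// findCooperateStatus() and the member names are assumptions.\nbool RTCInterface::isActivated(const unique_identifier_msgs::msg::UUID & uuid)\n{\n  const auto & status = findCooperateStatus(uuid);  // hypothetical lookup helper\n  if (status.auto_mode) {\n    // Auto mode: follow the module's own safety judgement\n    return status.safe;\n  }\n  // Manual mode: follow the externally received command\n  return status.command_status.type == tier4_rtc_msgs::msg::Command::ACTIVATE;\n}\n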

"},{"location":"planning/rtc_interface/#isregistered","title":"isRegistered","text":"
rtc_interface::isRegistered(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/rtc_interface/#description_6","title":"Description","text":"

Return true if uuid is registered.

"},{"location":"planning/rtc_interface/#input_6","title":"Input","text":""},{"location":"planning/rtc_interface/#output_6","title":"Output","text":"

If uuid is registered, return true. If not, return false.

"},{"location":"planning/rtc_interface/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/rtc_interface/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/rtc_replayer/","title":"rtc_replayer","text":""},{"location":"planning/rtc_replayer/#purpose","title":"Purpose","text":"

The current issue with RTC commands is that the service calls are not recorded in rosbags, which makes it very hard to analyze exactly what happened. This package resolves that issue by replaying the RTC command services from the RTC status topic recorded in a rosbag.
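
A minimal sketch of this replay flow, assuming a node that converts each recorded CooperateStatusArray message into a CooperateCommands service request (the callback name and client_ member are assumptions):

// Hedged sketch: forward each recorded status as a cooperate command.\nvoid RTCReplayer::onCooperateStatus(\n  const tier4_rtc_msgs::msg::CooperateStatusArray::ConstSharedPtr msg)\n{\n  auto request = std::make_shared<tier4_rtc_msgs::srv::CooperateCommands::Request>();\n  request->stamp = msg->stamp;\n  for (const auto & status : msg->statuses) {\n    tier4_rtc_msgs::msg::CooperateCommand command;\n    command.uuid = status.uuid;\n    command.module = status.module;\n    command.command = status.command_status;  // replay the recorded command\n    request->commands.push_back(command);\n  }\n  client_->async_send_request(request);  // hence always one step behind the original\n}\n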

"},{"location":"planning/rtc_replayer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/rtc_replayer/#input","title":"Input","text":"Name Type Description /debug/rtc_status tier4_rtc_msgs::msg::CooperateStatusArray CooperateStatusArray that is recorded in rosbag"},{"location":"planning/rtc_replayer/#output","title":"Output","text":"Name Type Description /api/external/set/rtc_commands tier4_rtc_msgs::msg::CooperateCommands CooperateCommands that is replayed by this package"},{"location":"planning/rtc_replayer/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/rtc_replayer/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This package cannot replay CooperateCommands correctly if the CooperateStatusArray is not stable. Also, the replayed command always lags one step behind the actual one, although this has little effect on the behavior.

"},{"location":"planning/rtc_replayer/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"

TBD.

"},{"location":"planning/scenario_selector/","title":"scenario_selector","text":""},{"location":"planning/scenario_selector/#scenario_selector_node","title":"scenario_selector_node","text":"

scenario_selector_node is a node that switches between the trajectories of each scenario.
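
Conceptually, the switching can be sketched as below. This is a simplified illustration, not the actual implementation; the member names are assumptions.

// Simplified sketch: forward the trajectory of the currently active scenario.\nvoid ScenarioSelectorNode::publishTrajectory()\n{\n  if (current_scenario_ == tier4_planning_msgs::msg::Scenario::LANEDRIVING) {\n    pub_trajectory_->publish(*lane_driving_trajectory_);\n  } else if (current_scenario_ == tier4_planning_msgs::msg::Scenario::PARKING) {\n    pub_trajectory_->publish(*parking_trajectory_);\n  }\n}\n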

"},{"location":"planning/scenario_selector/#input-topics","title":"Input topics","text":"Name Type Description ~input/lane_driving/trajectory autoware_auto_planning_msgs::Trajectory trajectory of LaneDriving scenario ~input/parking/trajectory autoware_auto_planning_msgs::Trajectory trajectory of Parking scenario ~input/lanelet_map autoware_auto_mapping_msgs::HADMapBin ~input/route autoware_planning_msgs::LaneletRoute route and goal pose ~input/odometry nav_msgs::Odometry for checking whether vehicle is stopped is_parking_completed bool (implemented as rosparam) whether all split trajectory of Parking are published"},{"location":"planning/scenario_selector/#output-topics","title":"Output topics","text":"Name Type Description ~output/scenario tier4_planning_msgs::Scenario current scenario and scenarios to be activated ~output/trajectory autoware_auto_planning_msgs::Trajectory trajectory to be followed"},{"location":"planning/scenario_selector/#output-tfs","title":"Output TFs","text":"

None

"},{"location":"planning/scenario_selector/#how-to-launch","title":"How to launch","text":"
  1. Write your remapping info in scenario_selector.launch or add args when executing roslaunch
  2. roslaunch scenario_selector scenario_selector.launch
    • If you would like to use only a single scenario, roslaunch scenario_selector dummy_scenario_selector_{scenario_name}.launch
"},{"location":"planning/scenario_selector/#parameters","title":"Parameters","text":"Parameter Type Description update_rate double timer's update rate th_max_message_delay_sec double threshold time of input messages' maximum delay th_arrived_distance_m double threshold distance to check if vehicle has arrived at the trajectory's endpoint th_stopped_time_sec double threshold time to check if vehicle is stopped th_stopped_velocity_mps double threshold velocity to check if vehicle is stopped"},{"location":"planning/scenario_selector/#flowchart","title":"Flowchart","text":""},{"location":"planning/static_centerline_optimizer/","title":"Static Centerline Optimizer","text":""},{"location":"planning/static_centerline_optimizer/#purpose","title":"Purpose","text":"

This package statically calculates a centerline that keeps the path footprints inside the drivable area.

When driving on narrow roads, the default centerline, which is the middle line between a lanelet's right and left boundaries, often puts the path footprints outside the drivable area. To keep the path footprints inside the drivable area, we use the online path shape optimization of the obstacle_avoidance_planner package.

Instead of online path shape optimization, we introduce static centerline optimization. This static centerline optimization has the following advantages.

"},{"location":"planning/static_centerline_optimizer/#use-cases","title":"Use cases","text":"

There are two interfaces to communicate with the centerline optimizer.

"},{"location":"planning/static_centerline_optimizer/#vector-map-builder-interface","title":"Vector Map Builder Interface","text":"

Note: This function of Vector Map Builder has not been released yet. Please wait for a while. Currently there is no documentation about Vector Map Builder's operation for this function.

The optimized centerline can be generated from Vector Map Builder's operation.

We can run the path planning server with the following command by designating <vehicle_model>.

ros2 launch static_centerline_optimizer run_planning_server.launch.xml vehicle_model:=<vehicle-model>\n

FYI, the port ID of the HTTP server is 4010 by default.

"},{"location":"planning/static_centerline_optimizer/#command-line-interface","title":"Command Line Interface","text":"

The optimized centerline can be generated from the command line interface by designating the input and output osm map paths, the start and end lanelet ids, and the vehicle model as follows.

ros2 launch static_centerline_optimizer static_centerline_optimizer.launch.xml run_backgrond:=false lanelet2_input_file_path:=<input-osm-path> lanelet2_output_file_path:=<output-osm-path> start_lanelet_id:=<start-lane-id> end_lanelet_id:=<end-lane-id> vehicle_model:=<vehicle-model>\n

The default output map path containing the optimized centerline is /tmp/lanelet2_map.osm. If you want to change the output map path, you can remap the path by designating <output-osm-path>.

"},{"location":"planning/static_centerline_optimizer/#visualization","title":"Visualization","text":"

When the path planning server is launched, rviz is launched as well, as shown below.

"},{"location":"planning/static_centerline_optimizer/#unsafe-footprints","title":"Unsafe footprints","text":"

Sometimes the optimized centerline footprints are close to the lanes' boundaries. We can check how close they are with the unsafe footprints marker as follows.

Each footprint's color depends on its distance to the boundaries, and the text shows that distance.

By default, the footprints are colored as follows.

"},{"location":"planning/surround_obstacle_checker/","title":"Surround Obstacle Checker","text":""},{"location":"planning/surround_obstacle_checker/#purpose","title":"Purpose","text":"

This module subscribes to the required data (ego pose, obstacles, etc.) and publishes a zero velocity limit to keep the vehicle stopped if any of the stop conditions are satisfied.

"},{"location":"planning/surround_obstacle_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/surround_obstacle_checker/#flow-chart","title":"Flow chart","text":""},{"location":"planning/surround_obstacle_checker/#algorithms","title":"Algorithms","text":""},{"location":"planning/surround_obstacle_checker/#check-data","title":"Check data","text":"

Check that surround_obstacle_checker has received the no-ground pointcloud, dynamic objects, and current velocity data.

"},{"location":"planning/surround_obstacle_checker/#get-distance-to-nearest-object","title":"Get distance to nearest object","text":"

Calculate the distance between the ego vehicle and the nearest object. In this function, the minimum distance is calculated between the polygon of the ego vehicle and all points in the pointcloud as well as the polygons of the dynamic objects.
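
A minimal sketch of this calculation, assuming boost::geometry polygon and point types (Polygon2d and Point2d here are assumed aliases, not necessarily the actual ones):

// Sketch: minimum distance from the ego polygon to all pointcloud points\n// and all dynamic object polygons.\ndouble getDistanceToNearestObject(\n  const Polygon2d & ego_polygon, const std::vector<Point2d> & points,\n  const std::vector<Polygon2d> & object_polygons)\n{\n  double min_distance = std::numeric_limits<double>::max();\n  for (const auto & point : points) {\n    min_distance = std::min(min_distance, boost::geometry::distance(ego_polygon, point));\n  }\n  for (const auto & polygon : object_polygons) {\n    min_distance = std::min(min_distance, boost::geometry::distance(ego_polygon, polygon));\n  }\n  return min_distance;\n}\n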

"},{"location":"planning/surround_obstacle_checker/#stop-requirement","title":"Stop requirement","text":"

If all of the following conditions are satisfied, the module plans to stop.

"},{"location":"planning/surround_obstacle_checker/#states","title":"States","text":"

To prevent chattering, surround_obstacle_checker manages two states. As mentioned in the stop condition section, it prevents chattering by changing the threshold used to find surrounding obstacles depending on the state.
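
A sketch of this hysteresis, using the surround_check_distance and surround_check_recover_distance parameters from the table below (the state names are assumptions):

// Sketch of the two-state hysteresis: the recover distance is larger than\n// the check distance, so the state does not toggle rapidly at the boundary.\nif (state_ == State::PASS) {\n  if (distance_to_nearest_object < surround_check_distance) {\n    state_ = State::STOP;  // an obstacle entered the check range\n  }\n} else {  // state_ == State::STOP\n  if (distance_to_nearest_object > surround_check_recover_distance) {\n    state_ = State::PASS;  // the obstacle left the (larger) recover range\n  }\n}\n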

"},{"location":"planning/surround_obstacle_checker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/surround_obstacle_checker/#input","title":"Input","text":"Name Type Description /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects autoware_auto_perception_msgs::msg::PredictedObjects Dynamic objects /localization/kinematic_state nav_msgs::msg::Odometry Current twist /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"planning/surround_obstacle_checker/#output","title":"Output","text":"Name Type Description ~/output/velocity_limit_clear_command tier4_planning_msgs::msg::VelocityLimitClearCommand Velocity limit clear command ~/output/max_velocity tier4_planning_msgs::msg::VelocityLimit Velocity limit command ~/output/no_start_reason diagnostic_msgs::msg::DiagnosticStatus No start reason ~/output/stop_reasons tier4_planning_msgs::msg::StopReasonArray Stop reasons ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization ~/debug/footprint geometry_msgs::msg::PolygonStamped Ego vehicle base footprint for visualization ~/debug/footprint_offset geometry_msgs::msg::PolygonStamped Ego vehicle footprint with surround_check_distance offset for visualization ~/debug/footprint_recover_offset geometry_msgs::msg::PolygonStamped Ego vehicle footprint with surround_check_recover_distance offset for visualization"},{"location":"planning/surround_obstacle_checker/#parameters","title":"Parameters","text":"Name Type Description Default value use_pointcloud bool Use pointcloud as obstacle check true use_dynamic_object bool Use dynamic object as obstacle check true surround_check_distance double If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status [m] 0.5 surround_check_recover_distance double If no object exists in this distance, transit to \"non-surrounding-obstacle\" status [m] 0.8 state_clear_time double Threshold to clear stop state [s] 2.0 stop_state_ego_speed double Threshold to check ego vehicle stopped [m/s] 0.1 stop_state_entry_duration_time double Threshold to check ego vehicle stopped [s] 0.1 publish_debug_footprints bool Publish vehicle footprint with/without offsets true"},{"location":"planning/surround_obstacle_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

To perform stop planning, obstacle pointcloud data must be available. Hence, the module does not plan a stop if the obstacle is in a blind spot.

"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/","title":"Surround Obstacle Checker","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#purpose","title":"Purpose","text":"

surround_obstacle_checker is a module that performs stop planning so that the ego vehicle does not start moving when an obstacle exists around it while it is stopped.

"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#flow-chart","title":"Flow chart","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#algorithms","title":"Algorithms","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#check-data","title":"Check data","text":"

Check whether the pointcloud, dynamic object, and ego vehicle velocity data have been acquired.

"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#get-distance-to-nearest-object","title":"Get distance to nearest object","text":"

Calculate the distance between the ego vehicle and the nearest obstacle. Here, the polygon of the ego vehicle is computed, and the distances to each point in the pointcloud and to each dynamic object's polygon are calculated to obtain the distance to the nearest obstacle.

"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#stop-condition","title":"Stop condition","text":"

When all of the following conditions are satisfied, the ego vehicle plans to stop.

"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#states","title":"States","text":"

To prevent chattering, surround_obstacle_checker manages its state. As described in the Stop condition section, chattering is prevented by changing the obstacle detection threshold depending on the state.

"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#input","title":"Input","text":"Name Type Description /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects autoware_auto_perception_msgs::msg::PredictedObjects Dynamic objects /localization/kinematic_state nav_msgs::msg::Odometry Current twist /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#output","title":"Output","text":"Name Type Description ~/output/velocity_limit_clear_command tier4_planning_msgs::msg::VelocityLimitClearCommand Velocity limit clear command ~/output/max_velocity tier4_planning_msgs::msg::VelocityLimit Velocity limit command ~/output/no_start_reason diagnostic_msgs::msg::DiagnosticStatus No start reason ~/output/stop_reasons tier4_planning_msgs::msg::StopReasonArray Stop reasons ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#parameters","title":"Parameters","text":"Name Type Description Default value use_pointcloud bool Use pointcloud as obstacle check true use_dynamic_object bool Use dynamic object as obstacle check true surround_check_distance double If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status [m] 0.5 surround_check_recover_distance double If no object exists in this distance, transit to \"non-surrounding-obstacle\" status [m] 0.8 state_clear_time double Threshold to clear stop state [s] 2.0 stop_state_ego_speed double Threshold to check ego vehicle stopped [m/s] 0.1 stop_state_entry_duration_time double Threshold to check ego vehicle stopped [s] 0.1"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Since this function requires observation of the obstacle pointcloud in order to work, it does not plan a stop when the obstacle is in a blind spot.

"},{"location":"sensing/geo_pos_conv/","title":"geo_pos_conv","text":""},{"location":"sensing/geo_pos_conv/#purpose","title":"Purpose","text":"

geo_pos_conv is a library that converts between (x, y) positions in a plane rectangular coordinate system and latitude/longitude on the earth.
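
A usage sketch, assuming the library exposes a geo_pos_conv class with set_plane() / llh_to_xyz() style methods and x() / y() accessors (the header path and signatures are assumptions):

#include <geo_pos_conv/geo_pos_conv.hpp>  // assumed header path\n\ngeo_pos_conv geo;\ngeo.set_plane(9);  // plane rectangular coordinate system number IX\ngeo.llh_to_xyz(35.681236, 139.767125, 0.0);  // latitude [deg], longitude [deg], height [m]\nconst double x = geo.x();  // converted plane rectangular position\nconst double y = geo.y();\n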

"},{"location":"sensing/geo_pos_conv/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/geo_pos_conv/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/geo_pos_conv/#parameters","title":"Parameters","text":""},{"location":"sensing/geo_pos_conv/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/geo_pos_conv/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/geo_pos_conv/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/geo_pos_conv/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/geo_pos_conv/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/gnss_poser/","title":"gnss_poser","text":""},{"location":"sensing/gnss_poser/#purpose","title":"Purpose","text":"

The gnss_poser is a node that subscribes to GNSS sensing messages and calculates the vehicle pose with covariance.

"},{"location":"sensing/gnss_poser/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/gnss_poser/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/gnss_poser/#input","title":"Input","text":"Name Type Description ~/input/fix sensor_msgs::msg::NavSatFix gnss status message ~/input/autoware_orientation autoware_sensing_msgs::msg::GnssInsOrientationStamped orientation click here for more details"},{"location":"sensing/gnss_poser/#output","title":"Output","text":"Name Type Description ~/output/pose geometry_msgs::msg::PoseStamped vehicle pose calculated from gnss sensing data ~/output/gnss_pose_cov geometry_msgs::msg::PoseWithCovarianceStamped vehicle pose with covariance calculated from gnss sensing data ~/output/gnss_fixed tier4_debug_msgs::msg::BoolStamped gnss fix status"},{"location":"sensing/gnss_poser/#parameters","title":"Parameters","text":""},{"location":"sensing/gnss_poser/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description base_frame string \"base_link\" frame id gnss_frame string \"gnss\" frame id gnss_base_frame string \"gnss_base_link\" frame id map_frame string \"map\" frame id coordinate_system int \"4\" coordinate system enumeration; 0: UTM, 1: MGRS, 2: Plane, 3: WGS84 Local Coordinate System, 4: UTM Local Coordinate System plane_zone int 9 identification number of the plane rectangular coordinate systems."},{"location":"sensing/gnss_poser/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/gnss_poser/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/gnss_poser/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/gnss_poser/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/gnss_poser/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/image_diagnostics/","title":"image_diagnostics","text":""},{"location":"sensing/image_diagnostics/#purpose","title":"Purpose","text":"

The image_diagnostics is a node that checks the status of the input raw image.

"},{"location":"sensing/image_diagnostics/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The figure below shows the flowchart of the image diagnostics node. Each image is divided into small blocks for block state assessment.

The state of each small image block is assessed as shown in the figure below.

After the states of all image blocks are evaluated, the status of the whole image is summarized as below.

"},{"location":"sensing/image_diagnostics/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/image_diagnostics/#input","title":"Input","text":"Name Type Description input/raw_image sensor_msgs::msg::Image raw image"},{"location":"sensing/image_diagnostics/#output","title":"Output","text":"Name Type Description image_diag/debug/gray_image sensor_msgs::msg::Image gray image image_diag/debug/dft_image sensor_msgs::msg::Image discrete Fourier transformation image image_diag/debug/diag_block_image sensor_msgs::msg::Image each block state colorization image_diag/image_state_diag tier4_debug_msgs::msg::Int32Stamped image diagnostics status value /diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics"},{"location":"sensing/image_diagnostics/#parameters","title":"Parameters","text":""},{"location":"sensing/image_diagnostics/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/image_diagnostics/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/image_diagnostics/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/image_diagnostics/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/image_diagnostics/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":" "},{"location":"sensing/image_transport_decompressor/","title":"image_transport_decompressor","text":""},{"location":"sensing/image_transport_decompressor/#purpose","title":"Purpose","text":"

The image_transport_decompressor is a node that decompresses images.

"},{"location":"sensing/image_transport_decompressor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/image_transport_decompressor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/image_transport_decompressor/#input","title":"Input","text":"Name Type Description ~/input/compressed_image sensor_msgs::msg::CompressedImage compressed image"},{"location":"sensing/image_transport_decompressor/#output","title":"Output","text":"Name Type Description ~/output/raw_image sensor_msgs::msg::Image decompressed image"},{"location":"sensing/image_transport_decompressor/#parameters","title":"Parameters","text":""},{"location":"sensing/image_transport_decompressor/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/image_transport_decompressor/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/image_transport_decompressor/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/image_transport_decompressor/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/image_transport_decompressor/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/imu_corrector/","title":"imu_corrector","text":""},{"location":"sensing/imu_corrector/#purpose","title":"Purpose","text":"

imu_corrector_node is a node that corrects IMU data.

  1. Correct the yaw rate offset b by reading the parameter.
  2. Correct the yaw rate standard deviation \sigma by reading the parameter.

Mathematically, we assume the following equation:

\\tilde{\\omega}(t) = \\omega(t) + b(t) + n(t)

where \\tilde{\\omega} denotes observed angular velocity, \\omega denotes true angular velocity, b denotes an offset, and n denotes a gaussian noise. We also assume that n\\sim\\mathcal{N}(0, \\sigma^2).

"},{"location":"sensing/imu_corrector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/imu_corrector/#input","title":"Input","text":"Name Type Description ~input sensor_msgs::msg::Imu raw imu data"},{"location":"sensing/imu_corrector/#output","title":"Output","text":"Name Type Description ~output sensor_msgs::msg::Imu corrected imu data"},{"location":"sensing/imu_corrector/#parameters","title":"Parameters","text":""},{"location":"sensing/imu_corrector/#core-parameters","title":"Core Parameters","text":"Name Type Description angular_velocity_offset_x double roll rate offset in imu_link [rad/s] angular_velocity_offset_y double pitch rate offset imu_link [rad/s] angular_velocity_offset_z double yaw rate offset imu_link [rad/s] angular_velocity_stddev_xx double roll rate standard deviation imu_link [rad/s] angular_velocity_stddev_yy double pitch rate standard deviation imu_link [rad/s] angular_velocity_stddev_zz double yaw rate standard deviation imu_link [rad/s] acceleration_stddev double acceleration standard deviation imu_link [m/s^2]"},{"location":"sensing/imu_corrector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/imu_corrector/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/imu_corrector/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/imu_corrector/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/imu_corrector/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/livox/livox_tag_filter/","title":"livox_tag_filter","text":""},{"location":"sensing/livox/livox_tag_filter/#purpose","title":"Purpose","text":"

The livox_tag_filter is a node that removes noise from the pointcloud by using the following tags:

"},{"location":"sensing/livox/livox_tag_filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/livox/livox_tag_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/livox/livox_tag_filter/#input","title":"Input","text":"Name Type Description ~/input sensor_msgs::msg::PointCloud2 reference points"},{"location":"sensing/livox/livox_tag_filter/#output","title":"Output","text":"Name Type Description ~/output sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/livox/livox_tag_filter/#parameters","title":"Parameters","text":""},{"location":"sensing/livox/livox_tag_filter/#node-parameters","title":"Node Parameters","text":"Name Type Description ignore_tags vector ignored tags (See the following table)"},{"location":"sensing/livox/livox_tag_filter/#tag-parameters","title":"Tag Parameters","text":"Bit Description Options 0~1 Point property based on spatial position 00: Normal 01: High confidence level of the noise 10: Moderate confidence level of the noise 11: Low confidence level of the noise 2~3 Point property based on intensity 00: Normal 01: High confidence level of the noise 10: Moderate confidence level of the noise 11: Reserved 4~5 Return number 00: return 0 01: return 1 10: return 2 11: return 3 6~7 Reserved

You can download a more detailed description of the Livox sensor from external link [1].
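
Per the bit layout above, each property can be extracted from the tag byte with simple shifts and masks:

// Extract the tag fields described in the table above.\nconst uint8_t spatial_property = tag & 0x03;           // bits 0-1\nconst uint8_t intensity_property = (tag >> 2) & 0x03;  // bits 2-3\nconst uint8_t return_number = (tag >> 4) & 0x03;       // bits 4-5\n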

"},{"location":"sensing/livox/livox_tag_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/livox/livox_tag_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/livox/livox_tag_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/livox/livox_tag_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] https://www.livoxtech.com/downloads

"},{"location":"sensing/livox/livox_tag_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/","title":"pointcloud_preprocessor","text":""},{"location":"sensing/pointcloud_preprocessor/#purpose","title":"Purpose","text":"

The pointcloud_preprocessor is a package that includes the following filters:

"},{"location":"sensing/pointcloud_preprocessor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Detailed descriptions of each filter's algorithm are given in the following links.

Filter Name Description Detail concatenate_data subscribe multiple pointclouds and concatenate them into a pointcloud link crop_box_filter remove points within a given box link distortion_corrector compensate pointcloud distortion caused by ego vehicle's movement during 1 scan link downsample_filter downsampling input pointcloud link outlier_filter remove points caused by hardware problems, rain drops and small insects as a noise link passthrough_filter remove points on the outside of a range in given field (e.g. x, y, z, intensity) link pointcloud_accumulator accumulate pointclouds for a given amount of time link vector_map_filter remove points on the outside of lane by using vector map link vector_map_inside_area_filter remove points inside of vector map area that has given type by parameter link"},{"location":"sensing/pointcloud_preprocessor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/indices pcl_msgs::msg::Indices reference indices"},{"location":"sensing/pointcloud_preprocessor/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/pointcloud_preprocessor/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description input_frame string \" \" input frame id output_frame string \" \" output frame id max_queue_size int 5 max queue size of input/output topics use_indices bool false flag to use pointcloud indices latched_indices bool false flag to latch pointcloud indices approximate_sync bool false flag to use approximate sync option"},{"location":"sensing/pointcloud_preprocessor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

pointcloud_preprocessor::Filter is implemented based on pcl_perception [1] because of this issue.

"},{"location":"sensing/pointcloud_preprocessor/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/#referencesexternal-links","title":"References/External links","text":"

[1] https://github.com/ros-perception/perception_pcl/blob/ros2/pcl_ros/src/pcl_ros/filters/filter.cpp

"},{"location":"sensing/pointcloud_preprocessor/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/","title":"blockage_diag","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#purpose","title":"Purpose","text":"

To ensure LiDAR performance and safety for autonomous driving, a diagnostics feature for abnormal conditions is needed. LiDAR blockage is an abnormal condition of the LiDAR in which unwanted objects stick to the sensor and block the light pulses and return signals. This node's purpose is to detect the existence of blockage on the LiDAR as well as its size and location.

"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This node uses the no-return region and its location to decide whether it is a blockage.

The logic is shown below.

"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#input","title":"Input","text":"Name Type Description ~/input/pointcloud_raw_ex sensor_msgs::msg::PointCloud2 The raw point cloud data is used to detect the no-return region"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#output","title":"Output","text":"Name Type Description ~/output/blockage_diag/debug/blockage_mask_image sensor_msgs::msg::Image The mask image of detected blockage ~/output/blockage_diag/debug/ground_blockage_ratio tier4_debug_msgs::msg::Float32Stamped The area ratio of blockage region in ground region ~/output/blockage_diag/debug/sky_blockage_ratio tier4_debug_msgs::msg::Float32Stamped The area ratio of blockage region in sky region ~/output/blockage_diag/debug/lidar_depth_map sensor_msgs::msg::Image The depth map image of input point cloud"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#parameters","title":"Parameters","text":"Name Type Description blockage_ratio_threshold float The threshold of blockage area ratio blockage_count_threshold float The threshold of number continuous blockage frames horizontal_ring_id int The id of horizontal ring of the LiDAR angle_range vector The effective range of LiDAR vertical_bins int The LiDAR channel number model string The LiDAR model buffering_frames uint The number of buffering [range:1-200] buffering_interval uint The interval of buffering"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#assumptions-known-limits","title":"Assumptions / Known limits","text":"
  1. Only the Hesai Pandar40P and Hesai PandarQT were tested. For a new LiDAR, it is necessary to check the order of the channel ids in the vertical distribution manually and modify the code.
  2. The current method is still limited for the dust type of blockage, when dust particles are sparsely distributed.
"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#referencesexternal-links","title":"References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/","title":"concatenate_data","text":""},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#purpose","title":"Purpose","text":"

Many self-driving cars combine multiple LiDARs to expand the sensing range. Therefore, a function to combine multiple point clouds is required.

To combine multiple sensor data with similar timestamps, message_filters is often used in ROS-based systems, but this requires the assumption that all inputs can be received. Since safety must be strongly considered in autonomous driving, the point cloud concatenation node must be designed so that even if one sensor fails, the information of the remaining sensors can still be output.
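
A sketch of this timeout-based design, assuming a simple map from topic name to the latest received cloud (not the actual implementation; the helper names are assumptions):

// Sketch: publish whatever clouds have arrived once timeout_sec elapses,\n// so a single failed sensor does not block the output.\nvoid onTimer()  // assumed to fire timeout_sec after the first cloud of a cycle\n{\n  pcl::PointCloud<pcl::PointXYZ> concatenated;\n  for (const auto & [topic, cloud] : received_clouds_) {\n    concatenated += *cloud;  // clouds from failed sensors are simply absent\n  }\n  publishCloud(concatenated);  // hypothetical publish helper\n  received_clouds_.clear();\n}\n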

"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The figure below represents the reception time of each sensor data and how the data are combined in each case.

"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#input","title":"Input","text":"Name Type Description ~/input/twist autoware_auto_vehicle_msgs::msg::VelocityReport The vehicle odometry is used to interpolate the timestamp of each sensor data"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::Pointcloud2 concatenated point clouds"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#parameters","title":"Parameters","text":"Name Type Default Value Description input/points vector of string [] input topic names that type must be sensor_msgs::msg::Pointcloud2 input_frame string \"\" input frame id output_frame string \"\" output frame id max_queue_size int 5 max queue size of input/output topics"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description timeout_sec double 0.1 tolerance of time to publish next pointcloud [s]When this time limit is exceeded, the filter concatenates and publishes pointcloud, even if not all the point clouds are subscribed."},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

It is assumed that the vehicle odometry is available, that the sensor data and odometry timestamps are correct, and that the TF from base_link to sensor_frame is also correct.

"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/","title":"crop_box_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#purpose","title":"Purpose","text":"

The crop_box_filter is a node that removes points within a given box region. This filter is used to remove the points that hit the vehicle itself.

"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

pcl::CropBox is used, which filters all points inside a given box.
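
A minimal pcl::CropBox sketch for removing the points that hit the vehicle body; the box bounds correspond to the core parameters below:

#include <pcl/filters/crop_box.h>\n#include <pcl/point_types.h>\n\npcl::CropBox<pcl::PointXYZ> crop_box;\ncrop_box.setInputCloud(input_cloud);\ncrop_box.setMin(Eigen::Vector4f(min_x, min_y, min_z, 1.0f));\ncrop_box.setMax(Eigen::Vector4f(max_x, max_y, max_z, 1.0f));\ncrop_box.setNegative(true);  // true: remove the points inside the box\ncrop_box.filter(*output_cloud);\n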

"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description min_x double -1.0 x-coordinate minimum value for crop range max_x double 1.0 x-coordinate maximum value for crop range min_y double -1.0 y-coordinate minimum value for crop range max_y double 1.0 y-coordinate maximum value for crop range min_z double -1.0 z-coordinate minimum value for crop range max_z double 1.0 z-coordinate maximum value for crop range"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/","title":"distortion_corrector","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#purpose","title":"Purpose","text":"

The distortion_corrector is a node that compensates for pointcloud distortion caused by the ego vehicle's movement during one scan.

Since the LiDAR sensor scans by rotating an internal laser, the resulting point cloud will be distorted if the ego vehicle moves during a single scan (as shown in the figure below). The node corrects this by interpolating sensor data using the odometry of the ego vehicle.

"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The offset equation is given by $ TimeOffset = (55.296 \mu s \times SequenceIndex) + (2.304 \mu s \times DataPointIndex) $

To calculate the exact point time, add the TimeOffset to the timestamp. $ ExactPointTime = TimeStamp + TimeOffset $
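
For example, for $ SequenceIndex = 2 $ and $ DataPointIndex = 10 $, $ TimeOffset = 55.296 \mu s \times 2 + 2.304 \mu s \times 10 = 133.632 \mu s $, which is added to the packet timestamp to obtain the exact point time.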

"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/twist geometry_msgs::msg::TwistWithCovarianceStamped twist ~/input/imu sensor_msgs::msg::Imu imu data"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description timestamp_field_name string \"time_stamp\" time stamp field name use_imu bool true use gyroscope for yaw rate if true, else use vehicle status"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/","title":"downsample_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#purpose","title":"Purpose","text":"

The downsample_filter is a node that reduces the number of points.

"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#approximate-downsample-filter","title":"Approximate Downsample Filter","text":"

pcl::VoxelGridNearestCentroid is used. The algorithm is described in tier4_pcl_extensions.

"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#random-downsample-filter","title":"Random Downsample Filter","text":"

pcl::RandomSample is used, in which points are sampled with uniform probability.

"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#voxel-grid-downsample-filter","title":"Voxel Grid Downsample Filter","text":"

pcl::VoxelGrid is used, in which the points in each voxel are approximated by their centroid.
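
A minimal pcl::VoxelGrid sketch using the default leaf sizes from the table below:

#include <pcl/filters/voxel_grid.h>\n#include <pcl/point_types.h>\n\n// Points in each 0.3 x 0.3 x 0.1 m voxel are replaced by their centroid.\npcl::VoxelGrid<pcl::PointXYZ> voxel_grid;\nvoxel_grid.setInputCloud(input_cloud);\nvoxel_grid.setLeafSize(0.3f, 0.3f, 0.1f);  // voxel_size_x, voxel_size_y, voxel_size_z\nvoxel_grid.filter(*output_cloud);\n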

"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

These implementations inherit the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#note-parameters","title":"Note Parameters","text":"

These implementations inherit the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#core-parameters","title":"Core Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#approximate-downsample-filter_1","title":"Approximate Downsample Filter","text":"Name Type Default Value Description voxel_size_x double 0.3 voxel size x [m] voxel_size_y double 0.3 voxel size y [m] voxel_size_z double 0.1 voxel size z [m]"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#random-downsample-filter_1","title":"Random Downsample Filter","text":"Name Type Default Value Description sample_num int 1500 number of indices to be sampled"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#voxel-grid-downsample-filter_1","title":"Voxel Grid Downsample Filter","text":"Name Type Default Value Description voxel_size_x double 0.3 voxel size x [m] voxel_size_y double 0.3 voxel size y [m] voxel_size_z double 0.1 voxel size z [m]"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/","title":"dual_return_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as fog and rain and publish visibility as a diagnostic topic.

"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This node can remove rain and fog by considering, in two stages, the light reflected from objects according to the attenuation factor. The dual_return_outlier_filter is so named because it removes noise using data that contains two types of return values separated by the attenuation factor, as shown in the figure below.

Therefore, in order to use this node, the sensor driver must publish custom data including return_type. Please refer to the PointXYZIRADRT data structure.

Another feature of this node is that it publishes visibility as a diagnostic topic. With this function, for example in heavy rain, the sensing module can notify that its processing performance has reached its limit, which can help ensure the safety of the vehicle.

In some complicated road scenes where normal objects also reflect light in two stages, for instance plants, leaves, some plastic nets, etc., the visibility value drops somewhat even in fine weather. To deal with this, optional settings for a region of interest (ROI) are added.

  1. Fixed_xyz_ROI mode: Visibility estimation based on the weak points in a fixed cuboid region surrounding the ego vehicle, defined by x, y, z in the base_link perspective.
  2. Fixed_azimuth_ROI mode: Visibility estimation based on the weak points in a fixed region surrounding the ego vehicle, defined by azimuth and distance in the LiDAR perspective.

When selecting either of the two fixed ROI modes, the range of weak points shrinks and the sensitivity of the visibility estimate decreases, so a trade-off between weak_first_local_noise_threshold and visibility_threshold is needed.

The figure below describes how the node works.

The picture below shows the ROI options.

"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#output","title":"Output","text":"Name Type Description /dual_return_outlier_filter/frequency_image sensor_msgs::msg::Image The histogram image that represent visibility /dual_return_outlier_filter/visibility tier4_debug_msgs::msg::Float32Stamped A representation of visibility with a value from 0 to 1 /dual_return_outlier_filter/pointcloud_noise sensor_msgs::msg::Pointcloud2 The pointcloud removed as noise"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description vertical_bins int The number of vertical bin for visibility histogram max_azimuth_diff float Threshold for ring_outlier_filter weak_first_distance_ratio double Threshold for ring_outlier_filter general_distance_ratio double Threshold for ring_outlier_filter weak_first_local_noise_threshold int The parameter for determining whether it is noise visibility_error_threshold float When the percentage of white pixels in the binary histogram falls below this parameter the diagnostic status becomes ERR visibility_warn_threshold float When the percentage of white pixels in the binary histogram falls below this parameter the diagnostic status becomes WARN roi_mode string The name of ROI mode for switching min_azimuth_deg float The left limit of azimuth for Fixed_azimuth_ROI mode max_azimuth_deg float The right limit of azimuth for Fixed_azimuth_ROI mode max_distance float The limit distance for for Fixed_azimuth_ROI mode x_max float Maximum of x for Fixed_xyz_ROI mode x_min float Minimum of x for Fixed_xyz_ROI mode y_max float Maximum of y for Fixed_xyz_ROI mode y_min float Minimum of y for Fixed_xyz_ROI mode z_max float Maximum of z for Fixed_xyz_ROI mode z_min float Minimum of z for Fixed_xyz_ROI mode"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This filter is not recommended for use yet, as it is still under development. Input data must be of the PointXYZIRADRT type, including return_type.

"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#referencesexternal-links","title":"References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/","title":"outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#purpose","title":"Purpose","text":"

The outlier_filter is a package for filtering outlier points.

"},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Filter Name Description Detail radius search 2d outlier filter A method of removing point cloud noise based on the number of points existing within a certain radius link ring outlier filter A method of operating scan in chronological order and removing noise based on the rate of change in the distance between points link voxel grid outlier filter A method of removing point cloud noise based on the number of points existing within a voxel link dual return outlier filter (under development) A method of removing rain and fog by considering the light reflected from the object in two stages according to the attenuation factor. link"},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/","title":"passthrough_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#purpose","title":"Purpose","text":"

The passthrough_filter is a node that removes points on the outside of a range in a given field (e.g. x, y, z, intensity, ring, etc.).
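
A minimal pcl::PassThrough sketch using the default parameters from the table below; the pcl::PCLPointCloud2 specialization is used here because it can filter on arbitrary fields such as ring:

#include <pcl/PCLPointCloud2.h>\n#include <pcl/filters/passthrough.h>\n\n// Keep only points whose \"ring\" field lies within [0, 127].\npcl::PassThrough<pcl::PCLPointCloud2> passthrough;\npassthrough.setInputCloud(input_cloud);  // pcl::PCLPointCloud2::ConstPtr\npassthrough.setFilterFieldName(\"ring\");  // filter_field_name\npassthrough.setFilterLimits(0.0, 127.0);  // filter_limit_min, filter_limit_max\npassthrough.filter(*output_cloud);\n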

"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/indices pcl_msgs::msg::Indices reference indices"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_limit_min int 0 minimum allowed field value filter_limit_max int 127 maximum allowed field value filter_field_name string \"ring\" filtering field name keep_organized bool false flag to keep indices structure filter_limit_negative bool false flag to return whether the data is inside limit or not"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/","title":"pointcloud_accumulator","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#purpose","title":"Purpose","text":"

The pointcloud_accumulator is a node that accumulates pointclouds for a given amount of time.

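A minimal sketch of the accumulation logic, assuming a simple time-windowed buffer; the struct and member names are illustrative, not the actual implementation.

#include <deque>
#include <rclcpp/time.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>

// Minimal sketch of time-windowed accumulation (illustrative names).
struct CloudAccumulator
{
  double accumulation_time_sec{2.0};  // accumulation_time_sec parameter
  std::size_t max_buffer_size{50};    // pointcloud_buffer_size parameter
  std::deque<sensor_msgs::msg::PointCloud2> buffer;

  void add(const sensor_msgs::msg::PointCloud2 & cloud)
  {
    buffer.push_back(cloud);
    const rclcpp::Time newest(buffer.back().header.stamp);
    // Drop clouds that fall outside the accumulation period or the buffer size.
    while (buffer.size() > max_buffer_size ||
           (newest - rclcpp::Time(buffer.front().header.stamp)).seconds() > accumulation_time_sec) {
      buffer.pop_front();
    }
  }
};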
"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description accumulation_time_sec double 2.0 accumulation period [s] pointcloud_buffer_size int 50 buffer size"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/","title":"radius_search_2d_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as insects and rain.

"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

RadiusOutlierRemoval filter which removes all indices in its input cloud that don\u2019t have at least some number of neighbors within a certain range.

The description above is quoted from [1]. pcl::search::KdTree [2] is used to implement this package.

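For reference, a minimal sketch using the plain PCL filter is shown below; judging from the assumptions section, the 2D variant in this package performs the neighbor search as if on the x-y plane (hence the cylinder described there), which this sketch omits. The function name is hypothetical.

#include <pcl/filters/radius_outlier_removal.h>
#include <pcl/point_types.h>

// Minimal sketch of radius-based outlier removal with PCL.
void radius_outlier_sketch(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & input,
                           pcl::PointCloud<pcl::PointXYZ> & output)
{
  pcl::RadiusOutlierRemoval<pcl::PointXYZ> filter;
  filter.setInputCloud(input);
  filter.setRadiusSearch(0.5);        // search_radius (value illustrative)
  filter.setMinNeighborsInRadius(5);  // min_neighbors (value illustrative)
  filter.filter(output);
}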
"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description min_neighbors int If points in the circle centered on reference point is less than min_neighbors, a reference point is judged as outlier search_radius double Searching number of points included in search_radius"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Since the method counts the number of points contained in a cylinder whose axis is aligned with the direction of gravity, it is a prerequisite that the ground has already been removed.

"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#referencesexternal-links","title":"References/External links","text":"

[1] https://pcl.readthedocs.io/projects/tutorials/en/latest/remove_outliers.html

[2] https://pcl.readthedocs.io/projects/tutorials/en/latest/kdtree_search.html#kdtree-search

"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/","title":"ring_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as insects and rain.

"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This filter processes the scan in chronological order and removes noise based on the rate of change in the distance between consecutive points.

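A minimal sketch of the continuity test at the core of this method, assuming the distance-ratio parameterization from the Core Parameters table; the helper function is hypothetical.

#include <algorithm>

// Minimal sketch of the per-pair continuity test (illustrative helper).
bool is_continuous(const double dist_prev, const double dist_curr, const double distance_ratio = 1.03)
{
  // Two consecutive returns on the same ring belong to the same object while
  // the range changes by less than the allowed ratio.
  const double ratio = std::max(dist_prev, dist_curr) /
                       std::max(std::min(dist_prev, dist_curr), 1e-6);
  return ratio < distance_ratio;
}

Under the same assumption, runs of continuous points are then kept or discarded as a whole; a run that is too short (see object_length_threshold and num_points_threshold below) is removed as noise.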
"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description distance_ratio double 1.03 object_length_threshold double 0.1 num_points_threshold int 4"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

It is a prerequisite that the input scan point cloud is ordered chronologically. In this repository it is defined by the structure below (please refer to PointXYZIRADT).

"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/","title":"vector_map_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#purpose","title":"Purpose","text":"

The vector_map_filter is a node that removes points outside the lanes by using the vector map.

"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin vector map"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description voxel_size_x double 0.04 voxel size voxel_size_y double 0.04 voxel size"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/","title":"vector_map_inside_area_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#purpose","title":"Purpose","text":"

The vector_map_inside_area_filter is a node that removes points inside vector map areas whose type is given by a parameter.

"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class, so please also see the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#input","title":"Input","text":"Name Type Description ~/input sensor_msgs::msg::PointCloud2 input points ~/input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin vector map used for filtering points"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#output","title":"Output","text":"Name Type Description ~/output sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description polygon_type string polygon type to be filtered"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/","title":"voxel_grid_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as insects and rain.

"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This filter removes point cloud noise based on the number of points within each voxel. The radius_search_2d_outlier_filter is more accurate, but this method has the advantage of a lower calculation cost.

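A minimal sketch of this idea using PCL's voxel grid, assuming the approach of downsampling with a minimum-points threshold and then keeping only the points whose voxel survived; the function and variable names are illustrative.

#include <pcl/filters/voxel_grid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Minimal sketch: a point survives if its voxel met the point-count threshold.
void voxel_outlier_sketch(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & input,
                          pcl::PointCloud<pcl::PointXYZ> & output)
{
  pcl::VoxelGrid<pcl::PointXYZ> grid;
  grid.setInputCloud(input);
  grid.setLeafSize(0.3f, 0.3f, 0.1f);      // voxel_size_x / voxel_size_y / voxel_size_z
  grid.setMinimumPointsNumberPerVoxel(2);  // voxel_points_threshold
  grid.setSaveLeafLayout(true);
  pcl::PointCloud<pcl::PointXYZ> voxels;
  grid.filter(voxels);

  output.clear();
  for (const auto & p : input->points) {
    // getCentroidIndexAt returns -1 for voxels that did not meet the threshold.
    if (grid.getCentroidIndexAt(grid.getGridCoordinates(p.x, p.y, p.z)) != -1) {
      output.push_back(p);
    }
  }
}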
"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description voxel_size_x double 0.3 the voxel size along x-axis [m] voxel_size_y double 0.3 the voxel size along y-axis [m] voxel_size_z double 0.1 the voxel size along z-axis [m] voxel_points_threshold int 2 the minimum number of points in each voxel"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/radar_scan_to_pointcloud2/","title":"radar_scan_to_pointcloud2","text":""},{"location":"sensing/radar_scan_to_pointcloud2/#radar_scan_to_pointcloud2_node","title":"radar_scan_to_pointcloud2_node","text":""},{"location":"sensing/radar_scan_to_pointcloud2/#input-topics","title":"Input topics","text":"Name Type Description input/radar radar_msgs::msg::RadarScan RadarScan"},{"location":"sensing/radar_scan_to_pointcloud2/#output-topics","title":"Output topics","text":"Name Type Description output/amplitude_pointcloud sensor_msgs::msg::PointCloud2 PointCloud2 radar pointcloud whose intensity is amplitude. output/doppler_pointcloud sensor_msgs::msg::PointCloud2 PointCloud2 radar pointcloud whose intensity is doppler velocity."},{"location":"sensing/radar_scan_to_pointcloud2/#parameters","title":"Parameters","text":"Name Type Description publish_amplitude_pointcloud bool Whether publish radar pointcloud whose intensity is amplitude. Default is true. publish_doppler_pointcloud bool Whether publish radar pointcloud whose intensity is doppler velocity. Default is false."},{"location":"sensing/radar_scan_to_pointcloud2/#how-to-launch","title":"How to launch","text":"
ros2 launch radar_scan_to_pointcloud2 radar_scan_to_pointcloud2.launch.xml\n
"},{"location":"sensing/radar_static_pointcloud_filter/","title":"radar_static_pointcloud_filter","text":""},{"location":"sensing/radar_static_pointcloud_filter/#radar_static_pointcloud_filter_node","title":"radar_static_pointcloud_filter_node","text":"

This node extracts static and dynamic radar pointclouds by using the doppler velocity and the ego motion. The calculation cost is O(n), where n is the number of radar points.

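The classification can be sketched as follows: for each return, compare the measured doppler velocity with the value a static target would produce given the ego motion. This is a simplified sketch that ignores the sensor mounting offset and the ego yaw rate; the function and the 2-sigma gate are illustrative.

#include <cmath>

// Minimal sketch of the static/dynamic test for one radar return.
bool is_static(const double doppler_velocity, const double ego_velocity,
               const double azimuth, const double doppler_velocity_sd)
{
  // A static target appears to close at the ego speed projected onto the ray.
  const double expected = -ego_velocity * std::cos(azimuth);
  return std::abs(doppler_velocity - expected) < 2.0 * doppler_velocity_sd;
}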
"},{"location":"sensing/radar_static_pointcloud_filter/#input-topics","title":"Input topics","text":"Name Type Description input/radar radar_msgs::msg::RadarScan RadarScan input/odometry nav_msgs::msg::Odometry Ego vehicle odometry topic"},{"location":"sensing/radar_static_pointcloud_filter/#output-topics","title":"Output topics","text":"Name Type Description output/static_radar_scan radar_msgs::msg::RadarScan static radar pointcloud output/dynamic_radar_scan radar_msgs::msg::RadarScan dynamic radar pointcloud"},{"location":"sensing/radar_static_pointcloud_filter/#parameters","title":"Parameters","text":"Name Type Description doppler_velocity_sd double Standard deviation for radar doppler velocity. [m/s]"},{"location":"sensing/radar_static_pointcloud_filter/#how-to-launch","title":"How to launch","text":"
ros2 launch radar_static_pointcloud_filter radar_static_pointcloud_filter.launch.xml\n
"},{"location":"sensing/radar_static_pointcloud_filter/#algorithm","title":"Algorithm","text":""},{"location":"sensing/radar_threshold_filter/","title":"radar_threshold_filter","text":""},{"location":"sensing/radar_threshold_filter/#radar_threshold_filter_node","title":"radar_threshold_filter_node","text":"

This node removes noise from radar returns by thresholding.

The calculation cost is O(n), where n is the number of radar returns.

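A minimal sketch of the per-return test, with parameter names following the table below; the struct and function themselves are illustrative.

// Minimal sketch of the per-return threshold test (illustrative names).
struct ThresholdParams
{
  bool is_amplitude_filter; double amplitude_min; double amplitude_max;
  bool is_range_filter;     double range_min;     double range_max;
  bool is_azimuth_filter;   double azimuth_min;   double azimuth_max;
  bool is_z_filter;         double z_min;         double z_max;
};

// A return is published only if it passes every enabled filter.
bool keep_return(const double amplitude, const double range, const double azimuth,
                 const double z, const ThresholdParams & p)
{
  if (p.is_amplitude_filter && !(p.amplitude_min < amplitude && amplitude < p.amplitude_max)) {
    return false;
  }
  if (p.is_range_filter && !(p.range_min < range && range < p.range_max)) {
    return false;
  }
  if (p.is_azimuth_filter && !(p.azimuth_min < azimuth && azimuth < p.azimuth_max)) {
    return false;
  }
  if (p.is_z_filter && !(p.z_min < z && z < p.z_max)) {
    return false;
  }
  return true;
}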
"},{"location":"sensing/radar_threshold_filter/#input-topics","title":"Input topics","text":"Name Type Description input/radar radar_msgs/msg/RadarScan.msg Radar pointcloud data"},{"location":"sensing/radar_threshold_filter/#output-topics","title":"Output topics","text":"Name Type Description output/radar radar_msgs/msg/RadarScan.msg Filtered radar pointcloud"},{"location":"sensing/radar_threshold_filter/#parameters","title":"Parameters","text":" Name Type Description is_amplitude_filter bool if this parameter is true, apply amplitude filter (publish amplitude_min < amplitude < amplitude_max) amplitude_min double [dBm^2] amplitude_max double [dBm^2] is_range_filter bool if this parameter is true, apply range filter (publish range_min < range < range_max) range_min double [m] range_max double [m] is_azimuth_filter bool if this parameter is true, apply angle filter (publish azimuth_min < range < azimuth_max) azimuth_min double [rad] azimuth_max double [rad] is_z_filter bool if this parameter is true, apply z position filter (publish z_min < z < z_max) z_min double [m] z_max double [m]"},{"location":"sensing/radar_threshold_filter/#how-to-launch","title":"How to launch","text":"
ros2 launch radar_threshold_filter radar_threshold_filter.launch.xml\n
"},{"location":"sensing/tier4_pcl_extensions/","title":"tier4_pcl_extensions","text":""},{"location":"sensing/tier4_pcl_extensions/#purpose","title":"Purpose","text":"

tier4_pcl_extensions is a PCL extension library. The voxel grid filter in this package uses a different algorithm from the original one.

"},{"location":"sensing/tier4_pcl_extensions/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/tier4_pcl_extensions/#original-algorithm-1","title":"Original Algorithm [1]","text":"
  1. create a 3D voxel grid over the input pointcloud data
  2. calculate centroid in each voxel
  3. all the points are approximated with their centroid
"},{"location":"sensing/tier4_pcl_extensions/#extended-algorithm","title":"Extended Algorithm","text":"
  1. create a 3D voxel grid over the input pointcloud data
  2. calculate centroid in each voxel
  3. all the points are approximated with the point closest to their centroid (see the sketch below)
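A minimal sketch of step 3 of the extended algorithm: emit the measured point nearest to the voxel centroid instead of the synthetic centroid itself. The container and function names are hypothetical.

#include <limits>
#include <vector>
#include <pcl/point_types.h>

// Minimal sketch: pick the input point in one voxel that is nearest to the
// voxel's centroid ("voxel_points" is a hypothetical per-voxel container).
pcl::PointXYZ closest_to_centroid(const std::vector<pcl::PointXYZ> & voxel_points,
                                  const pcl::PointXYZ & centroid)
{
  pcl::PointXYZ best = voxel_points.front();
  double best_d2 = std::numeric_limits<double>::max();
  for (const auto & p : voxel_points) {
    const double dx = p.x - centroid.x;
    const double dy = p.y - centroid.y;
    const double dz = p.z - centroid.z;
    const double d2 = dx * dx + dy * dy + dz * dz;
    if (d2 < best_d2) {
      best_d2 = d2;
      best = p;
    }
  }
  return best;  // an actual measurement, unlike the synthetic centroid
}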
"},{"location":"sensing/tier4_pcl_extensions/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/tier4_pcl_extensions/#parameters","title":"Parameters","text":""},{"location":"sensing/tier4_pcl_extensions/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/tier4_pcl_extensions/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/tier4_pcl_extensions/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/tier4_pcl_extensions/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] https://pointclouds.org/documentation/tutorials/voxel_grid.html

"},{"location":"sensing/tier4_pcl_extensions/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/vehicle_velocity_converter/","title":"vehicle_velocity_converter","text":""},{"location":"sensing/vehicle_velocity_converter/#purpose","title":"Purpose","text":"

This package converts the autoware_auto_vehicle_msgs::msg::VehicleReport message to geometry_msgs::msg::TwistWithCovarianceStamped for the gyro odometer node.

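A minimal sketch of the conversion, assuming the vehicle report provides a longitudinal velocity and a heading rate; apart from the output message type, the function and field names are illustrative.

#include <string>
#include <geometry_msgs/msg/twist_with_covariance_stamped.hpp>

// Minimal sketch of the conversion. Covariance indices 0 and 35 are the (x,x)
// linear and (z,z) angular diagonal entries of the 6x6 twist covariance.
geometry_msgs::msg::TwistWithCovarianceStamped make_twist(
  const double longitudinal_velocity, const double heading_rate,
  const double speed_scale_factor, const std::string & frame_id,
  const double velocity_stddev_xx, const double angular_velocity_stddev_zz)
{
  geometry_msgs::msg::TwistWithCovarianceStamped msg;
  msg.header.frame_id = frame_id;                                          // frame_id
  msg.twist.twist.linear.x = longitudinal_velocity * speed_scale_factor;   // speed_scale_factor
  msg.twist.twist.angular.z = heading_rate;
  msg.twist.covariance[0] = velocity_stddev_xx * velocity_stddev_xx;
  msg.twist.covariance[35] = angular_velocity_stddev_zz * angular_velocity_stddev_zz;
  return msg;
}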
"},{"location":"sensing/vehicle_velocity_converter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/vehicle_velocity_converter/#input","title":"Input","text":"Name Type Description velocity_status autoware_auto_vehicle_msgs::msg::VehicleReport vehicle velocity"},{"location":"sensing/vehicle_velocity_converter/#output","title":"Output","text":"Name Type Description twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped twist with covariance converted from VehicleReport"},{"location":"sensing/vehicle_velocity_converter/#parameters","title":"Parameters","text":"Name Type Description speed_scale_factor double speed scale factor (ideal value is 1.0) frame_id string frame id for output message velocity_stddev_xx double standard deviation for vx angular_velocity_stddev_zz double standard deviation for yaw rate"},{"location":"simulator/dummy_perception_publisher/","title":"dummy_perception_publisher","text":""},{"location":"simulator/dummy_perception_publisher/#purpose","title":"Purpose","text":"

This node publishes the result of dummy detection in the form of perception messages.

"},{"location":"simulator/dummy_perception_publisher/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/dummy_perception_publisher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/dummy_perception_publisher/#input","title":"Input","text":"Name Type Description /tf tf2_msgs/TFMessage TF (self-pose) input/object dummy_perception_publisher::msg::Object dummy detection objects"},{"location":"simulator/dummy_perception_publisher/#output","title":"Output","text":"Name Type Description output/dynamic_object tier4_perception_msgs::msg::DetectedObjectsWithFeature Publishes objects output/points_raw sensor_msgs::msg::PointCloud2 point cloud of objects"},{"location":"simulator/dummy_perception_publisher/#parameters","title":"Parameters","text":"Name Type Default Value Explanation visible_range double 100.0 sensor visible range [m] detection_successful_rate double 0.8 sensor detection rate. (min) 0.0 - 1.0(max) enable_ray_tracing bool true if True, use ray tracking use_object_recognition bool true if True, publish objects topic"},{"location":"simulator/dummy_perception_publisher/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"simulator/dummy_perception_publisher/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"simulator/dummy_perception_publisher/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"simulator/fault_injection/","title":"fault_injection","text":""},{"location":"simulator/fault_injection/#purpose","title":"Purpose","text":"

This package is used to convert pseudo system faults from PSim to Diagnostics and notify Autoware. The component diagram is as follows:

"},{"location":"simulator/fault_injection/#test","title":"Test","text":"
source install/setup.bash\ncd fault_injection\nlaunch_test test/test_fault_injection_node.test.py\n
"},{"location":"simulator/fault_injection/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/fault_injection/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/fault_injection/#input","title":"Input","text":"Name Type Description ~/input/simulation_events tier4_simulation_msgs::msg::SimulationEvents simulation events"},{"location":"simulator/fault_injection/#output","title":"Output","text":"

None.

"},{"location":"simulator/fault_injection/#parameters","title":"Parameters","text":"

None.

"},{"location":"simulator/fault_injection/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"simulator/fault_injection/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"simulator/fault_injection/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/","title":"simple_planning_simulator","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#purpose-use-cases","title":"Purpose / Use cases","text":"

This node simulates the vehicle motion for a vehicle command in 2D using a simple vehicle model.

"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#design","title":"Design","text":"

The purpose of this simulator is the integration testing of the planning and control modules. It does not simulate sensing or perception; it is implemented in pure C++ only and works without a GPU.

"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#input","title":"input","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#output","title":"output","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#common-parameters","title":"Common Parameters","text":"Name Type Description Default value simulated_frame_id string set to the child_frame_id in output tf \"base_link\" origin_frame_id string set to the frame_id in output tf \"odom\" initialize_source string If \"ORIGIN\", the initial pose is set at (0,0,0). If \"INITIAL_POSE_TOPIC\", node will wait until the input/initialpose topic is published. \"INITIAL_POSE_TOPIC\" add_measurement_noise bool If true, the Gaussian noise is added to the simulated results. true pos_noise_stddev double Standard deviation for position noise 0.01 rpy_noise_stddev double Standard deviation for Euler angle noise 0.0001 vel_noise_stddev double Standard deviation for longitudinal velocity noise 0.0 angvel_noise_stddev double Standard deviation for angular velocity noise 0.0 steer_noise_stddev double Standard deviation for steering angle noise 0.0001"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#vehicle-model-parameters","title":"Vehicle Model Parameters","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#vehicle_model_type-options","title":"vehicle_model_type options","text":"

The IDEAL model moves ideally as commanded, while the DELAY model moves based on a first-order model with a time delay. STEER means the model receives a steering command. VEL means the model receives a target velocity command, while ACC means it receives a target acceleration command. The GEARED suffix means that the motion considers the gear command: the vehicle moves in only one direction, following the gear.

The table below shows which models correspond to what parameters. The model names are written in abbreviated form (e.g. IDEAL_STEER_VEL = I_ST_V).

Name Type Description I_ST_V I_ST_A I_ST_A_G D_ST_V D_ST_A D_ST_A_G Default value unit acc_time_delay double dead time for the acceleration input x x x x o o 0.1 [s] steer_time_delay double dead time for the steering input x x x o o o 0.24 [s] vel_time_delay double dead time for the velocity input x x x o x x 0.25 [s] acc_time_constant double time constant of the 1st-order acceleration dynamics x x x x o o 0.1 [s] steer_time_constant double time constant of the 1st-order steering dynamics x x x o o o 0.27 [s] vel_time_constant double time constant of the 1st-order velocity dynamics x x x o x x 0.5 [s] vel_lim double limit of velocity x x x o o o 50.0 [m/s] vel_rate_lim double limit of acceleration x x x o o o 7.0 [m/ss] steer_lim double limit of steering angle x x x o o o 1.0 [rad] steer_rate_lim double limit of steering angle change rate x x x o o o 5.0 [rad/s]

Note: The steering/velocity/acceleration dynamics are modeled by a first-order system with a dead time in the delay models. The time constant is defined as the time it takes for the step response to rise to 63% of its final value. The dead time is a delay in the response to a control input.

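The delay models can be sketched as a dead-time buffer followed by a first-order lag. The class below is a minimal illustration of one such state (e.g. acceleration), not the actual implementation; names and the buffering scheme are illustrative.

#include <deque>
#include <utility>

// Minimal sketch of a delayed first-order state for the DELAY models.
class DelayedFirstOrder
{
public:
  DelayedFirstOrder(const double dead_time, const double time_constant)
  : dead_time_(dead_time), tau_(time_constant) {}

  double update(const double input, const double t, const double dt)
  {
    inputs_.emplace_back(t, input);
    // Keep the newest input that is at least dead_time_ old (before the
    // buffer fills, the oldest input is used, which is a simplification).
    while (inputs_.size() > 1 && t - inputs_[1].first >= dead_time_) {
      inputs_.pop_front();
    }
    const double delayed = inputs_.front().second;
    // First-order lag: the step response reaches ~63% of its final value
    // after tau_ seconds.
    state_ += (delayed - state_) * dt / tau_;
    return state_;
  }

private:
  double dead_time_;
  double tau_;
  double state_{0.0};
  std::deque<std::pair<double, double>> inputs_;
};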
"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#default-tf-configuration","title":"Default TF configuration","text":"

Since the vehicle outputs the odom->base_link tf, this simulator outputs the tf with the same frame_id configuration. In simple_planning_simulator.launch.py, the node that outputs the map->odom tf, which is usually estimated by the localization module (e.g. NDT), is launched as well. Since the tf output by this simulator module is an ideal value, odom->map will always be 0.

"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#error-detection-and-handling","title":"Error detection and handling","text":"

The only validation performed on the inputs is a check for a valid vehicle model type.

"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#security-considerations","title":"Security considerations","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#references-external-links","title":"References / External links","text":"

This was originally developed in Autoware.AI. See the link below.

https://github.com/Autoware-AI/simulation/tree/master/wf_simulator

"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"system/bluetooth_monitor/","title":"bluetooth_monitor","text":""},{"location":"system/bluetooth_monitor/#description","title":"Description","text":"

This node monitors a Bluetooth connection to a wireless device by using L2ping. L2ping generates a PING echo command on the Bluetooth L2CAP layer, and it can receive and check the echo response from a wireless device.

"},{"location":"system/bluetooth_monitor/#block-diagram","title":"Block diagram","text":"

L2ping is only allowed for root by default, so this package provides the following approach to minimize security risks as much as possible:

"},{"location":"system/bluetooth_monitor/#output","title":"Output","text":""},{"location":"system/bluetooth_monitor/#bluetooth_monitor-bluetooth_connection","title":"bluetooth_monitor: bluetooth_connection","text":"

[summary]

level message OK OK WARN RTT warning ERROR Lost Function error

[values]

key value (example) Device [0-9]: Status OK / RTT warning / Verify error / Lost / Ping rejected / Function error Device [0-9]: Name Wireless Controller Device [0-9]: Manufacturer MediaTek, Inc. Device [0-9]: Address AA:BB:CC:DD:EE:FF Device [0-9]: RTT 0.00ms key (example) value (example) Device [0-9]: connect No such file or directory"},{"location":"system/bluetooth_monitor/#parameters","title":"Parameters","text":"Name Type Default Value Explanation port int 7640 Port number to connect to L2ping service. timeout int 5 Wait timeout seconds for the response. rtt_warn float 0.00 RTT(Round-Trip Time) to generate warn. addresses string * List of bluetooth address of wireless devices to monitor. "},{"location":"system/bluetooth_monitor/#instructions-before-starting","title":"Instructions before starting","text":"
  1. Assign capability to l2ping_service since L2ping requires cap_net_raw+eip capability.

    sudo setcap 'cap_net_raw+eip' ./build/bluetooth_monitor/l2ping_service\n
  2. Run l2ping_service and bluetooth_monitor.

    ./build/bluetooth_monitor/l2ping_service\nros2 launch bluetooth_monitor bluetooth_monitor.launch.xml\n
"},{"location":"system/bluetooth_monitor/#known-limitations-and-issues","title":"Known limitations and issues","text":"

None.

"},{"location":"system/default_ad_api/","title":"default_ad_api","text":"

This package is a default implementation of the AD API.

"},{"location":"system/default_ad_api/document/autoware-state/","title":"Autoware state compatibility","text":""},{"location":"system/default_ad_api/document/autoware-state/#overview","title":"Overview","text":"

Since /autoware/state was so widely used, default_ad_api creates it from the states of AD API for backwards compatibility. The diagnostic checks that ad_service_state_monitor used to perform have been replaced by component_state_monitor. The service /autoware/shutdown to change autoware state to finalizing is also supported for compatibility.

"},{"location":"system/default_ad_api/document/autoware-state/#conversion","title":"Conversion","text":"

This is the correspondence between the AD API states and the autoware states. The launch state is data that the default_ad_api node holds internally.

"},{"location":"system/default_ad_api/document/fail-safe/","title":"Fail-safe API","text":""},{"location":"system/default_ad_api/document/fail-safe/#overview","title":"Overview","text":"

The fail-safe API simply relays the MRM state. See the autoware-documentation for AD API specifications.

"},{"location":"system/default_ad_api/document/interface/","title":"Interface API","text":""},{"location":"system/default_ad_api/document/interface/#overview","title":"Overview","text":"

The interface API simply returns a version number. See the autoware-documentation for AD API specifications.

"},{"location":"system/default_ad_api/document/localization/","title":"Localization API","text":""},{"location":"system/default_ad_api/document/localization/#overview","title":"Overview","text":"

This API unifies the localization initialization method into a service. The topic /initialpose from rviz is now subscribed to only by the adapter node and converted to an API call. This API call is forwarded to the pose initializer node so that it can centralize the state of pose initialization. For other nodes that require the initial pose, the pose initializer node publishes it as /initialpose3d. See the autoware-documentation for AD API specifications.

"},{"location":"system/default_ad_api/document/motion/","title":"Motion API","text":""},{"location":"system/default_ad_api/document/motion/#overview","title":"Overview","text":"

This API provides a hook for when the vehicle starts. It is typically used for announcements that call attention to the surroundings. A pause function is added to the vehicle_cmd_gate, and the API controls it based on whether the vehicle is stopped and whether a start has been requested. See the autoware-documentation for AD API specifications.

"},{"location":"system/default_ad_api/document/motion/#states","title":"States","text":"

The implementation has more detailed state transitions to manage pause state synchronization. The correspondence with the AD API state is as follows.

"},{"location":"system/default_ad_api/document/operation-mode/","title":"Operation mode API","text":""},{"location":"system/default_ad_api/document/operation-mode/#overview","title":"Overview","text":"

This API introduces the operation mode. It abstractly handles autoware engage, gate_mode, external_cmd_selector, and control_mode. When the mode is changed, the system enters an in-transition state, and if the transition completion condition for the target mode is not satisfied, it returns to the previous mode. Currently, the only condition for a mode change is WaitingForEngage in /autoware/state, and the engage state is shared between modes. After the operation mode is introduced, each mode will have its own transition-available flag. See the autoware-documentation for AD API specifications.

"},{"location":"system/default_ad_api/document/operation-mode/#states","title":"States","text":"

The operation mode has the following state transitions. Disabling autoware control, and changing the operation mode while autoware control is disabled, can be done immediately. Otherwise, enabling autoware control, or changing the operation mode while autoware control is enabled, causes the state to become a transition state. If the mode change completion condition is not satisfied within the timeout while in the transition state, the mode returns to the previous mode.

"},{"location":"system/default_ad_api/document/operation-mode/#compatibility","title":"Compatibility","text":"

Ideally, vehicle_cmd_gate and external_cmd_selector should be merged so that the operation mode can be handled directly. However, currently the operation mode transition manager performs the following conversions to match the implementation. The transition manager monitors each topic in the previous interface and synchronizes the operation mode when it changes. When the operation mode is changed with the new interface, the transition manager disables synchronization and changes the operation mode using the previous interface.

"},{"location":"system/default_ad_api/document/routing/","title":"Routing API","text":""},{"location":"system/default_ad_api/document/routing/#overview","title":"Overview","text":"

This API unifies the route setting method into a service. It supports two waypoint formats, poses and lanelet segments. The goal and checkpoint topics from rviz are subscribed to only by the adapter node and converted to API calls. These API calls are forwarded to the mission planner node so that it can centralize the state of routing. For other nodes that require the route, the mission planner node publishes it as /planning/mission_planning/route. See the autoware-documentation for AD API specifications.

"},{"location":"system/default_ad_api_helpers/ad_api_adaptors/","title":"ad_api_adaptors","text":""},{"location":"system/default_ad_api_helpers/ad_api_adaptors/#initial_pose_adaptor","title":"initial_pose_adaptor","text":"

This node makes it easy to use the localization AD API from RViz. When an initial pose topic is received, it calls the localization initialize API. This node depends on the map height fitter library. See here for more details.

Interface Local Name Global Name Description Subscription initialpose /initialpose The pose for localization initialization. Client fit_map_height /localization/util/fit_map_height To fix initial pose to map height Client - /api/localization/initialize The localization initialize API."},{"location":"system/default_ad_api_helpers/ad_api_adaptors/#routing_adaptor","title":"routing_adaptor","text":"

This node makes it easy to use the routing AD API from RViz. When a goal pose topic is received, the waypoints are reset and the API is called. When a waypoint pose topic is received, it is appended to the end of the waypoints and the API is called. The clear API is called automatically before setting the route.

Interface Local Name Global Name Description Subscription ~/input/goal /planning/mission_planning/goal The goal pose of route. Subscription ~/input/waypoint /planning/mission_planning/checkpoint The waypoint pose of route. Client - /api/routing/clear_route The route clear API. Client - /api/routing/set_route_points The route points set API."},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/","title":"automatic_pose_initializer","text":""},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/#automatic_pose_initializer_1","title":"automatic_pose_initializer","text":"

This node calls the localization initialize API when the localization initialization state is uninitialized. Since the API uses the GNSS pose when no pose is specified, initialization using GNSS can be performed automatically.

Interface Local Name Global Name Description Subscription - /api/localization/initialization_state The localization initialization state API. Client - /api/localization/initialize The localization initialize API."},{"location":"system/dummy_diag_publisher/","title":"dummy_diag_publisher","text":""},{"location":"system/dummy_diag_publisher/#purpose","title":"Purpose","text":"

This package outputs dummy diagnostic data for debugging and development.

"},{"location":"system/dummy_diag_publisher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/dummy_diag_publisher/#outputs","title":"Outputs","text":"Name Type Description /diagnostics diagnostic_msgs::msgs::DiagnosticArray Diagnostics outputs"},{"location":"system/dummy_diag_publisher/#parameters","title":"Parameters","text":""},{"location":"system/dummy_diag_publisher/#node-parameters","title":"Node Parameters","text":"

The parameter DIAGNOSTIC_NAME must be a name that exists in the parameter YAML file. If the parameter status is given from the command line, the parameter is_active is automatically set to true.

Name Type Default Value Explanation Reconfigurable update_rate int 10 Timer callback period [Hz] false DIAGNOSTIC_NAME.is_active bool true Force update or not true DIAGNOSTIC_NAME.status string \"OK\" diag status set by dummy diag publisher true"},{"location":"system/dummy_diag_publisher/#yaml-format-for-dummy_diag_publisher","title":"YAML format for dummy_diag_publisher","text":"

If the value is default, the default value will be set.

Key Type Default Value Explanation required_diags.DIAGNOSTIC_NAME.is_active bool true Force update or not required_diags.DIAGNOSTIC_NAME.status string \"OK\" diag status set by dummy diag publisher"},{"location":"system/dummy_diag_publisher/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/dummy_diag_publisher/#usage","title":"Usage","text":""},{"location":"system/dummy_diag_publisher/#launch","title":"launch","text":"
ros2 launch dummy_diag_publisher dummy_diag_publisher.launch.xml\n
"},{"location":"system/dummy_diag_publisher/#reconfigure","title":"reconfigure","text":"
ros2 param set /dummy_diag_publisher velodyne_connection.status \"Warn\"\nros2 param set /dummy_diag_publisher velodyne_connection.is_active true\n
"},{"location":"system/dummy_infrastructure/","title":"dummy_infrastructure","text":"

This is a debug node for infrastructure communication.

"},{"location":"system/dummy_infrastructure/#usage","title":"Usage","text":"
ros2 launch dummy_infrastructure dummy_infrastructure.launch.xml\nros2 run rqt_reconfigure rqt_reconfigure\n
"},{"location":"system/dummy_infrastructure/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/dummy_infrastructure/#inputs","title":"Inputs","text":"Name Type Description ~/input/command_array tier4_v2x_msgs::msg::InfrastructureCommandArray Infrastructure command"},{"location":"system/dummy_infrastructure/#outputs","title":"Outputs","text":"Name Type Description ~/output/state_array tier4_v2x_msgs::msg::VirtualTrafficLightStateArray Virtual traffic light array"},{"location":"system/dummy_infrastructure/#parameters","title":"Parameters","text":""},{"location":"system/dummy_infrastructure/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate int 10 Timer callback period [Hz] use_first_command bool true Consider instrument id or not instrument_id string `` Used as command id approval bool false set approval filed to ros param is_finalized bool false Stop at stop_line if finalization isn't completed"},{"location":"system/dummy_infrastructure/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/emergency_handler/","title":"emergency_handler","text":""},{"location":"system/emergency_handler/#purpose","title":"Purpose","text":"

Emergency Handler is a node that selects a proper MRM based on the system failure state contained in HazardStatus.

"},{"location":"system/emergency_handler/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/emergency_handler/#state-transitions","title":"State Transitions","text":""},{"location":"system/emergency_handler/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/emergency_handler/#input","title":"Input","text":"Name Type Description /system/emergency/hazard_status autoware_auto_system_msgs::msg::HazardStatusStamped Used to select proper MRM from system failure state contained in HazardStatus /control/vehicle_cmd autoware_auto_control_msgs::msg::AckermannControlCommand Used as reference when generate Emergency Control Command /localization/kinematic_state nav_msgs::msg::Odometry Used to decide whether vehicle is stopped or not /vehicle/status/control_mode autoware_auto_vehicle_msgs::msg::ControlModeReport Used to check vehicle mode: autonomous or manual /system/api/mrm/comfortable_stop/status tier4_system_msgs::msg::MrmBehaviorStatus Used to check if MRM comfortable stop operation is available /system/api/mrm/emergency_stop/status tier4_system_msgs::msg::MrmBehaviorStatus Used to check if MRM emergency stop operation is available"},{"location":"system/emergency_handler/#output","title":"Output","text":"Name Type Description /system/emergency/shift_cmd autoware_auto_vehicle_msgs::msg::GearCommand Required to execute proper MRM (send gear cmd) /system/emergency/hazard_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand Required to execute proper MRM (send turn signal cmd) /api/fail_safe/mrm_state autoware_adapi_v1_msgs::msg::MrmState Inform MRM execution state and selected MRM behavior /system/api/mrm/comfortable_stop/operate tier4_system_msgs::srv::OperateMrm Execution order for MRM comfortable stop /system/api/mrm/emergency_stop/operate tier4_system_msgs::srv::OperateMrm Execution order for MRM emergency stop"},{"location":"system/emergency_handler/#parameters","title":"Parameters","text":""},{"location":"system/emergency_handler/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate int 10 Timer callback period."},{"location":"system/emergency_handler/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation timeout_hazard_status double 0.5 If the input hazard_status topic cannot be received for more than timeout_hazard_status, vehicle will make an emergency stop. use_parking_after_stopped bool false If this parameter is true, it will publish PARKING shift command. turning_hazard_on.emergency bool true If this parameter is true, hazard lamps will be turned on during emergency state. use_comfortable_stop bool false If this parameter is true, operate comfortable stop when latent faults occur."},{"location":"system/emergency_handler/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/mrm_comfortable_stop_operator/","title":"mrm_comfortable_stop_operator","text":""},{"location":"system/mrm_comfortable_stop_operator/#purpose","title":"Purpose","text":"

MRM comfortable stop operator is a node that generates comfortable stop commands according to the comfortable stop MRM order.

"},{"location":"system/mrm_comfortable_stop_operator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_comfortable_stop_operator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_comfortable_stop_operator/#input","title":"Input","text":"Name Type Description ~/input/mrm/comfortable_stop/operate tier4_system_msgs::srv::OperateMrm MRM execution order"},{"location":"system/mrm_comfortable_stop_operator/#output","title":"Output","text":"Name Type Description ~/output/mrm/comfortable_stop/status tier4_system_msgs::msg::MrmBehaviorStatus MRM execution status ~/output/velocity_limit tier4_planning_msgs::msg::VelocityLimit Velocity limit command ~/output/velocity_limit/clear tier4_planning_msgs::msg::VelocityLimitClearCommand Velocity limit clear command"},{"location":"system/mrm_comfortable_stop_operator/#parameters","title":"Parameters","text":""},{"location":"system/mrm_comfortable_stop_operator/#node-parameters","title":"Node Parameters","text":"Name Type Default value Explanation update_rate int 10 Timer callback frequency [Hz]"},{"location":"system/mrm_comfortable_stop_operator/#core-parameters","title":"Core Parameters","text":"Name Type Default value Explanation min_acceleration double -1.0 Minimum acceleration for comfortable stop [m/s^2] max_jerk double 0.3 Maximum jerk for comfortable stop [m/s^3] min_jerk double -0.3 Minimum jerk for comfortable stop [m/s^3]"},{"location":"system/mrm_comfortable_stop_operator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/mrm_emergency_stop_operator/READEME/","title":"mrm_emergency_stop_operator","text":""},{"location":"system/mrm_emergency_stop_operator/READEME/#purpose","title":"Purpose","text":"

MRM emergency stop operator is a node that generates emergency stop commands according to the emergency stop MRM order.

"},{"location":"system/mrm_emergency_stop_operator/READEME/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_emergency_stop_operator/READEME/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_emergency_stop_operator/READEME/#input","title":"Input","text":"Name Type Description ~/input/mrm/emergency_stop/operate tier4_system_msgs::srv::OperateMrm MRM execution order ~/input/control/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand Control command output from the last node of the control component. Used for the initial value of the emergency stop command."},{"location":"system/mrm_emergency_stop_operator/READEME/#output","title":"Output","text":"Name Type Description ~/output/mrm/emergency_stop/status tier4_system_msgs::msg::MrmBehaviorStatus MRM execution status ~/output/mrm/emergency_stop/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand Emergency stop command"},{"location":"system/mrm_emergency_stop_operator/READEME/#parameters","title":"Parameters","text":""},{"location":"system/mrm_emergency_stop_operator/READEME/#node-parameters","title":"Node Parameters","text":"Name Type Default value Explanation update_rate int 30 Timer callback frequency [Hz]"},{"location":"system/mrm_emergency_stop_operator/READEME/#core-parameters","title":"Core Parameters","text":"Name Type Default value Explanation target_acceleration double -2.5 Target acceleration for emergency stop [m/s^2] target_jerk double -1.5 Target jerk for emergency stop [m/s^3]"},{"location":"system/mrm_emergency_stop_operator/READEME/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/system_error_monitor/","title":"system_error_monitor","text":""},{"location":"system/system_error_monitor/#purpose","title":"Purpose","text":"

Autoware Error Monitor has two main functions.

  1. It judges the system hazard level from the aggregated diagnostic information of each module of Autoware.
  2. It enables automatic recovery from the emergency state.
"},{"location":"system/system_error_monitor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/system_error_monitor/#state-transition","title":"State Transition","text":""},{"location":"system/system_error_monitor/#updateemergencyholdingcondition-flow-chart","title":"updateEmergencyHoldingCondition Flow Chart","text":""},{"location":"system/system_error_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/system_error_monitor/#input","title":"Input","text":"Name Type Description /diagnostics_agg diagnostic_msgs::msg::DiagnosticArray Diagnostic information aggregated based diagnostic_aggregator setting is used to /autoware/state autoware_auto_system_msgs::msg::AutowareState Required to ignore error during Route, Planning and Finalizing. /control/current_gate_mode tier4_control_msgs::msg::GateMode Required to select the appropriate module from autonomous_driving or external_control /vehicle/control_mode autoware_auto_vehicle_msgs::msg::ControlModeReport Required to not hold emergency during manual driving"},{"location":"system/system_error_monitor/#output","title":"Output","text":"Name Type Description /system/emergency/hazard_status autoware_auto_system_msgs::msg::HazardStatusStamped HazardStatus contains system hazard level, emergency hold status and failure details /diagnostics_err diagnostic_msgs::msg::DiagnosticArray This has the same contents as HazardStatus. This is used for visualization"},{"location":"system/system_error_monitor/#parameters","title":"Parameters","text":""},{"location":"system/system_error_monitor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation ignore_missing_diagnostics bool false If this parameter is turned off, it will be ignored if required modules have not been received. add_leaf_diagnostics bool true Required to use children diagnostics. diag_timeout_sec double 1.0 (sec) If required diagnostic is not received for a diag_timeout_sec, the diagnostic state become STALE state. data_ready_timeout double 30.0 If input topics required for system_error_monitor are not available for data_ready_timeout seconds, autoware_state will translate to emergency state. data_heartbeat_timeout double 1.0 If input topics required for system_error_monitor are not no longer subscribed for data_heartbeat_timeout seconds, autoware_state will translate to emergency state."},{"location":"system/system_error_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation hazard_recovery_timeout double 5.0 The vehicle can recovery to normal driving if emergencies disappear during hazard_recovery_timeout. use_emergency_hold bool false If it is false, the vehicle will return to normal as soon as emergencies disappear. use_emergency_hold_in_manual_driving bool false If this parameter is turned off, emergencies will be ignored during manual driving. emergency_hazard_level int 2 If hazard_level is more than emergency_hazard_level, autoware state will translate to emergency state"},{"location":"system/system_error_monitor/#yaml-format-for-system_error_monitor","title":"YAML format for system_error_monitor","text":"

The parameter key should be filled with the hierarchical diagnostics output by diagnostic_aggregator. Parameters prefixed with required_modules.autonomous_driving are for autonomous driving. Parameters with the required_modules.remote_control prefix are for remote control. If the value is default, the default value will be set.

Key Type Default Value Explanation required_modules.autonomous_driving.DIAGNOSTIC_NAME.sf_at string \"none\" Diagnostic level where it becomes Safe Fault. Available options are \"none\", \"warn\", \"error\". required_modules.autonomous_driving.DIAGNOSTIC_NAME.lf_at string \"warn\" Diagnostic level where it becomes Latent Fault. Available options are \"none\", \"warn\", \"error\". required_modules.autonomous_driving.DIAGNOSTIC_NAME.spf_at string \"error\" Diagnostic level where it becomes Single Point Fault. Available options are \"none\", \"warn\", \"error\". required_modules.autonomous_driving.DIAGNOSTIC_NAME.auto_recovery string \"true\" Determines whether the system will automatically recover when it recovers from an error. required_modules.remote_control.DIAGNOSTIC_NAME.sf_at string \"none\" Diagnostic level where it becomes Safe Fault. Available options are \"none\", \"warn\", \"error\". required_modules.remote_control.DIAGNOSTIC_NAME.lf_at string \"warn\" Diagnostic level where it becomes Latent Fault. Available options are \"none\", \"warn\", \"error\". required_modules.remote_control.DIAGNOSTIC_NAME.spf_at string \"error\" Diagnostic level where it becomes Single Point Fault. Available options are \"none\", \"warn\", \"error\". required_modules.remote_control.DIAGNOSTIC_NAME.auto_recovery string \"true\" Determines whether the system will automatically recover when it recovers from an error."},{"location":"system/system_error_monitor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/system_monitor/","title":"System Monitor for Autoware","text":"

Further improvement of system monitor functionality for Autoware.

"},{"location":"system/system_monitor/#description","title":"Description","text":"

This package provides the following nodes for monitoring the system:

"},{"location":"system/system_monitor/#supported-architecture","title":"Supported architecture","text":""},{"location":"system/system_monitor/#operation-confirmed-platform","title":"Operation confirmed platform","text":""},{"location":"system/system_monitor/#how-to-use","title":"How to use","text":"

Use colcon build and launch in the same way as other packages.

colcon build\nsource install/setup.bash\nros2 launch system_monitor system_monitor.launch.xml\n

The CPU and GPU monitoring methods differ depending on the platform. CMake automatically chooses the sources to be built according to the build environment. For example, if you build this package on an Intel platform, the CPU monitor and GPU monitor that run on the Intel platform are built.

"},{"location":"system/system_monitor/#ros-topics-published-by-system-monitor","title":"ROS topics published by system monitor","text":"

Every topic is published at a 1-minute interval.

[Usage] \u2713\uff1aSupported, -\uff1aNot supported

Node Message Intel arm64(tegra) arm64(raspi) Notes CPU Monitor CPU Temperature \u2713 \u2713 \u2713 CPU Usage \u2713 \u2713 \u2713 CPU Load Average \u2713 \u2713 \u2713 CPU Thermal Throttling \u2713 - \u2713 CPU Frequency \u2713 \u2713 \u2713 Notification of frequency only; an error is not normally generated. HDD Monitor HDD Temperature \u2713 \u2713 \u2713 HDD PowerOnHours \u2713 \u2713 \u2713 HDD TotalDataWritten \u2713 \u2713 \u2713 HDD RecoveredError \u2713 \u2713 \u2713 HDD Usage \u2713 \u2713 \u2713 HDD ReadDataRate \u2713 \u2713 \u2713 HDD WriteDataRate \u2713 \u2713 \u2713 HDD ReadIOPS \u2713 \u2713 \u2713 HDD WriteIOPS \u2713 \u2713 \u2713 HDD Connection \u2713 \u2713 \u2713 Memory Monitor Memory Usage \u2713 \u2713 \u2713 Net Monitor Network Connection \u2713 \u2713 \u2713 Network Usage \u2713 \u2713 \u2713 Notification of usage only; an error is not normally generated. Network CRC Error \u2713 \u2713 \u2713 A warning occurs when the number of CRC errors in the period reaches the threshold value. The number of CRC errors is the same as the value that can be confirmed with the ip command. IP Packet Reassembles Failed \u2713 \u2713 \u2713 NTP Monitor NTP Offset \u2713 \u2713 \u2713 Process Monitor Tasks Summary \u2713 \u2713 \u2713 High-load Proc[0-9] \u2713 \u2713 \u2713 High-mem Proc[0-9] \u2713 \u2713 \u2713 GPU Monitor GPU Temperature \u2713 \u2713 - GPU Usage \u2713 \u2713 - GPU Memory Usage \u2713 - - GPU Thermal Throttling \u2713 - - GPU Frequency \u2713 \u2713 - For the Intel platform, monitors whether the current GPU clock is supported by the GPU. Voltage Monitor CMOS Battery Status \u2713 - - Battery Health for RTC and BIOS -"},{"location":"system/system_monitor/#ros-parameters","title":"ROS parameters","text":"

See ROS parameters.

"},{"location":"system/system_monitor/#notes","title":"Notes","text":""},{"location":"system/system_monitor/#cpu-monitor-for-intel-platform","title":"CPU monitor for intel platform","text":"

Thermal throttling events can be monitored by reading the contents of the MSR (Model Specific Register). Since accessing the MSR is allowed only for root by default, this package provides the following approach to minimize security risks as much as possible:

"},{"location":"system/system_monitor/#instructions-before-starting","title":"Instructions before starting","text":"
  1. Create a user to run 'msr_reader'.

    sudo adduser <username>\n
  2. Load kernel module 'msr' into your target system. The path '/dev/cpu/CPUNUM/msr' appears.

    sudo modprobe msr\n
  3. Allow user to access MSR with read-only privilege using the Access Control List (ACL).

    sudo setfacl -m u:<username>:r /dev/cpu/*/msr\n
  4. Assign capability to 'msr_reader' since msr kernel module requires rawio capability.

    sudo setcap cap_sys_rawio=ep install/system_monitor/lib/system_monitor/msr_reader\n
  5. Run 'msr_reader' as the user you created, and run system_monitor as a generic user.

    su <username>\ninstall/system_monitor/lib/system_monitor/msr_reader\n
"},{"location":"system/system_monitor/#see-also","title":"See also","text":"

msr_reader

"},{"location":"system/system_monitor/#hdd-monitor","title":"HDD Monitor","text":"

Generally, S.M.A.R.T. information is used to monitor the temperature and remaining life of an HDD, and accessing the disk device node is normally allowed only for the root user or the disk group. As with the CPU monitor, this package provides an approach to minimize security risks as much as possible:

"},{"location":"system/system_monitor/#instructions-before-starting_1","title":"Instructions before starting","text":"
  1. Create a user to run 'hdd_reader'.

    sudo adduser <username>\n
  2. Add user to the disk group.

    sudo usermod -a -G disk <username>\n
  3. Assign capabilities to 'hdd_reader' since SCSI kernel module requires rawio capability to send ATA PASS-THROUGH (12) command and NVMe kernel module requires admin capability to send Admin Command.

    sudo setcap 'cap_sys_rawio=ep cap_sys_admin=ep' install/system_monitor/lib/system_monitor/hdd_reader\n
  4. Run 'hdd_reader' as the user you created, and run system_monitor as a generic user.

    su <username>\ninstall/system_monitor/lib/system_monitor/hdd_reader\n
"},{"location":"system/system_monitor/#see-also_1","title":"See also","text":"

hdd_reader

"},{"location":"system/system_monitor/#gpu-monitor-for-intel-platform","title":"GPU Monitor for intel platform","text":"

Currently, the GPU monitor for the Intel platform supports only NVIDIA GPUs whose information can be accessed through the NVML API.

You also need to install the CUDA libraries. For installation instructions for CUDA 10.0, see the NVIDIA CUDA Installation Guide for Linux.

"},{"location":"system/system_monitor/#voltage-monitor-for-cmos-battery","title":"Voltage monitor for CMOS Battery","text":"

Some platforms have built-in batteries for the RTC and CMOS. This node determines the battery status from the result of executing cat /proc/driver/rtc. If lm-sensors is installed, its results can also be used. However, the output of sensors varies depending on the chipset, so it is necessary to set a string for extracting the corresponding voltage, as well as the voltage thresholds for warning and error. For example, suppose you want a warning when the voltage is less than 2.9 V and an error when it is less than 2.7 V. The execution result of sensors on the nct6106 chipset is as follows, where \"in7:\" is the voltage of the CMOS battery.

$ sensors\npch_cannonlake-virtual-0\nAdapter: Virtual device\ntemp1:        +42.0\u00b0C\n\nnct6106-isa-0a10\nAdapter: ISA adapter\nin0:           728.00 mV (min =  +0.00 V, max =  +1.74 V)\nin1:             1.01 V  (min =  +0.00 V, max =  +2.04 V)\nin2:             3.34 V  (min =  +0.00 V, max =  +4.08 V)\nin3:             3.34 V  (min =  +0.00 V, max =  +4.08 V)\nin4:             1.07 V  (min =  +0.00 V, max =  +2.04 V)\nin5:             1.05 V  (min =  +0.00 V, max =  +2.04 V)\nin6:             1.67 V  (min =  +0.00 V, max =  +2.04 V)\nin7:             3.06 V  (min =  +0.00 V, max =  +4.08 V)\nin8:             2.10 V  (min =  +0.00 V, max =  +4.08 V)\nfan1:          2789 RPM  (min =    0 RPM)\nfan2:             0 RPM  (min =    0 RPM)\n

The corresponding settings in voltage_monitor.param.yaml are as follows.

/**:\nros__parameters:\ncmos_battery_warn: 2.90\ncmos_battery_error: 2.70\ncmos_battery_label: \"in7:\"\n

The above values of 2.7 V and 2.9 V are examples; the appropriate values vary depending on the motherboard and chipset. However, if the voltage of the lithium battery drops below 2.7 V, it is recommended to replace it. In the above example, the messages output to the topic /diagnostics are as follows. If the voltage is below 2.9 V:

  name: /autoware/system/resource_monitoring/voltage/cmos_battery\n  message: Warning\n  hardware_id: ''\n  values:\n  - key: 'voltage_monitor: CMOS Battery Status'\n    value: Low Battery\n

If the voltage is below 2.7 V:

  name: /autoware/system/resource_monitoring/voltage/cmos_battery\n  message: Warning\n  hardware_id: ''\n  values:\n  - key: 'voltage_monitor: CMOS Battery Status'\n    value: Battery Died\n

If neither, then:

  name: /autoware/system/resource_monitoring/voltage/cmos_battery\n  message: OK\n  hardware_id: ''\n  values:\n  - key: 'voltage_monitor: CMOS Battery Status'\n    value: OK\n

If the CMOS battery voltage drops below cmos_battery_warn or cmos_battery_error, a warning is reported. If the battery dies, the RTC stops working when the power is turned off; however, since the vehicle can still run, this is not treated as an error. The vehicle must stop when an error occurs, but there is no need to stop immediately in this case. The two conditions can be distinguished by the value \"Low Battery\" or \"Battery Died\".

"},{"location":"system/system_monitor/#uml-diagrams","title":"UML diagrams","text":"

See Class diagrams. See Sequence diagrams.

"},{"location":"system/system_monitor/docs/class_diagrams/","title":"Class diagrams","text":""},{"location":"system/system_monitor/docs/class_diagrams/#cpu-monitor","title":"CPU Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#hdd-monitor","title":"HDD Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#memory-monitor","title":"Memory Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#net-monitor","title":"Net Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#ntp-monitor","title":"NTP Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#process-monitor","title":"Process Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#gpu-monitor","title":"GPU Monitor","text":""},{"location":"system/system_monitor/docs/hdd_reader/","title":"hdd_reader","text":""},{"location":"system/system_monitor/docs/hdd_reader/#name","title":"Name","text":"

hdd_reader - Read S.M.A.R.T. information for monitoring HDD temperature and life of HDD

"},{"location":"system/system_monitor/docs/hdd_reader/#synopsis","title":"Synopsis","text":"

hdd_reader [OPTION]

"},{"location":"system/system_monitor/docs/hdd_reader/#description","title":"Description","text":"

Read S.M.A.R.T. information for monitoring HDD temperature and life of HDD. This runs as a daemon process and listens to a TCP/IP port (7635 by default).

Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to

Exit status: Returns 0 if OK; non-zero otherwise.

"},{"location":"system/system_monitor/docs/hdd_reader/#notes","title":"Notes","text":"

The 'hdd_reader' accesses only the minimal data needed to get the model number, serial number, HDD temperature, and HDD life. This limits its functionality by design; however, it can be expanded for further improvements and considerations in the future.

"},{"location":"system/system_monitor/docs/hdd_reader/#ata","title":"[ATA]","text":"Purpose Name Length Model number, Serial number IDENTIFY DEVICE data 256 words(512 bytes) HDD temperature, life of HDD SMART READ DATA 256 words(512 bytes)

For details please see the documents below.

"},{"location":"system/system_monitor/docs/hdd_reader/#nvme","title":"[NVMe]","text":"Purpose Name Length Model number, Serial number Identify Controller data structure 4096 bytes HDD temperature, life of HDD SMART / Health Information 36 Dword(144 bytes)

For details please see the documents below.

"},{"location":"system/system_monitor/docs/hdd_reader/#operation-confirmed-drives","title":"Operation confirmed drives","text":""},{"location":"system/system_monitor/docs/msr_reader/","title":"msr_reader","text":""},{"location":"system/system_monitor/docs/msr_reader/#name","title":"Name","text":"

msr_reader - Read MSR register for monitoring thermal throttling event

"},{"location":"system/system_monitor/docs/msr_reader/#synopsis","title":"Synopsis","text":"

msr_reader [OPTION]

"},{"location":"system/system_monitor/docs/msr_reader/#description","title":"Description","text":"

Read MSR register for monitoring thermal throttling event. This runs as a daemon process and listens to a TCP/IP port (7634 by default).

Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to

Exit status: Returns 0 if OK; non-zero otherwise.

"},{"location":"system/system_monitor/docs/msr_reader/#notes","title":"Notes","text":"

The 'msr_reader' accesses only the minimal data needed to detect thermal throttling events. This limits its functionality by design; however, it can be expanded for further improvements and considerations in the future.

Register Address Name Length 1B1H IA32_PACKAGE_THERM_STATUS 64bit

For details please see the documents below.

"},{"location":"system/system_monitor/docs/msr_reader/#operation-confirmed-platform","title":"Operation confirmed platform","text":""},{"location":"system/system_monitor/docs/ros_parameters/","title":"ROS parameters","text":""},{"location":"system/system_monitor/docs/ros_parameters/#cpu-monitor","title":"CPU Monitor","text":"

cpu_monitor:

Name Type Unit Default Notes temp_warn float DegC 90.0 Generates warning when CPU temperature reaches a specified value or higher. temp_error float DegC 95.0 Generates error when CPU temperature reaches a specified value or higher. usage_warn float %(1e-2) 0.90 Generates warning when CPU usage reaches a specified value or higher and lasts for usage_warn_count counts. usage_error float %(1e-2) 1.00 Generates error when CPU usage reaches a specified value or higher and lasts for usage_error_count counts. usage_warn_count int n/a 2 Generates warning when CPU usage reaches usage_warn value or higher and lasts for a specified number of counts. usage_error_count int n/a 2 Generates error when CPU usage reaches usage_error value or higher and lasts for a specified number of counts. load1_warn float %(1e-2) 0.90 Generates warning when load average 1min reaches a specified value or higher. load5_warn float %(1e-2) 0.80 Generates warning when load average 5min reaches a specified value or higher. msr_reader_port int n/a 7634 Port number to connect to msr_reader."},{"location":"system/system_monitor/docs/ros_parameters/#hdd-monitor","title":"HDD Monitor","text":"

hdd_monitor:

\u00a0\u00a0disks:

Name Type Unit Default Notes name string n/a none The disk name to monitor temperature. (e.g. /dev/sda) temp_attribute_id int n/a 0xC2 S.M.A.R.T attribute ID of temperature. temp_warn float DegC 55.0 Generates warning when HDD temperature reaches a specified value or higher. temp_error float DegC 70.0 Generates error when HDD temperature reaches a specified value or higher. power_on_hours_attribute_id int n/a 0x09 S.M.A.R.T attribute ID of power-on hours. power_on_hours_warn int Hour 3000000 Generates warning when HDD power-on hours reaches a specified value or higher. total_data_written_attribute_id int n/a 0xF1 S.M.A.R.T attribute ID of total data written. total_data_written_warn int depends on device 4915200 Generates warning when HDD total data written reaches a specified value or higher. total_data_written_safety_factor int %(1e-2) 0.05 Safety factor of HDD total data written. recovered_error_attribute_id int n/a 0xC3 S.M.A.R.T attribute ID of recovered error. recovered_error_warn int n/a 1 Generates warning when HDD recovered error reaches a specified value or higher. read_data_rate_warn float MB/s 360.0 Generates warning when HDD read data rate reaches a specified value or higher. write_data_rate_warn float MB/s 103.5 Generates warning when HDD write data rate reaches a specified value or higher. read_iops_warn float IOPS 63360.0 Generates warning when HDD read IOPS reaches a specified value or higher. write_iops_warn float IOPS 24120.0 Generates warning when HDD write IOPS reaches a specified value or higher.
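
As a minimal sketch of how these per-disk parameters might be laid out in a param file, following the nesting shown above (the device name and values below are illustrative, not shipped defaults):

# Illustrative sketch - device name and thresholds are examples only\nhdd_monitor:\n  disks:\n    - name: /dev/sda\n      temp_warn: 55.0\n      temp_error: 70.0\n      power_on_hours_warn: 3000000\n      recovered_error_warn: 1\n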

hdd_monitor:

Name Type Unit Default Notes hdd_reader_port int n/a 7635 Port number to connect to hdd_reader. usage_warn float %(1e-2) 0.95 Generates warning when disk usage reaches a specified value or higher. usage_error float %(1e-2) 0.99 Generates error when disk usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#memory-monitor","title":"Memory Monitor","text":"

mem_monitor:

Name Type Unit Default Notes usage_warn float %(1e-2) 0.95 Generates warning when physical memory usage reaches a specified value or higher. usage_error float %(1e-2) 0.99 Generates error when physical memory usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#net-monitor","title":"Net Monitor","text":"

net_monitor:

Name Type Unit Default Notes devices list[string] n/a none The name of the network interface to monitor (e.g. eth0; * for all network interfaces). monitor_program string n/a greengrass Name of the program to be monitored by nethogs. crc_error_check_duration int sec 1 CRC error check duration. crc_error_count_threshold int n/a 1 Generates warning when the count of CRC errors during the CRC error check duration reaches a specified value or higher. reassembles_failed_check_duration int sec 1 IP packet reassembles failed check duration. reassembles_failed_check_count int n/a 1 Generates warning when the count of failed IP packet reassembles during the check duration reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#ntp-monitor","title":"NTP Monitor","text":"

ntp_monitor:

Name Type Unit Default Notes server string n/a ntp.ubuntu.com The name of the NTP server used to synchronize date and time. (e.g. ntp.nict.jp for Japan) offset_warn float sec 0.1 Generates warning when NTP offset reaches a specified value or higher. (default is 100 ms) offset_error float sec 5.0 Generates error when NTP offset reaches a specified value or higher. (default is 5 sec)"},{"location":"system/system_monitor/docs/ros_parameters/#process-monitor","title":"Process Monitor","text":"

process_monitor:

Name Type Unit Default Notes num_of_procs int n/a 5 The number of processes to generate High-load Proc[0-9] and High-mem Proc[0-9]."},{"location":"system/system_monitor/docs/ros_parameters/#gpu-monitor","title":"GPU Monitor","text":"

gpu_monitor:

Name Type Unit Default Notes temp_warn float DegC 90.0 Generates warning when GPU temperature reaches a specified value or higher. temp_error float DegC 95.0 Generates error when GPU temperature reaches a specified value or higher. gpu_usage_warn float %(1e-2) 0.90 Generates warning when GPU usage reaches a specified value or higher. gpu_usage_error float %(1e-2) 1.00 Generates error when GPU usage reaches a specified value or higher. memory_usage_warn float %(1e-2) 0.90 Generates warning when GPU memory usage reaches a specified value or higher. memory_usage_error float %(1e-2) 1.00 Generates error when GPU memory usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#voltage-monitor","title":"Voltage Monitor","text":"

voltage_monitor:

Name Type Unit Default Notes cmos_battery_warn float volt 2.9 Generates warning when the voltage of the CMOS battery is lower than this value. cmos_battery_error float volt 2.7 Generates error when the voltage of the CMOS battery is lower than this value. cmos_battery_label string n/a \"\" Voltage label string in the sensors command output. If empty, no voltage will be checked."},{"location":"system/system_monitor/docs/seq_diagrams/","title":"Sequence diagrams","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#cpu-monitor","title":"CPU Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#hdd-monitor","title":"HDD Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#memory-monitor","title":"Memory Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#net-monitor","title":"Net Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#ntp-monitor","title":"NTP Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#process-monitor","title":"Process Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#gpu-monitor","title":"GPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/","title":"ROS topics: CPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-temperature","title":"CPU Temperature","text":"

/diagnostics/cpu_monitor: CPU Temperature

[summary]

level message OK OK

[values]

key (example) value (example) Package id 0, Core [0-9], thermal_zone[0-9] 50.0 DegC

*key: thermal_zone[0-9] for ARM architecture.

"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-usage","title":"CPU Usage","text":"

/diagnostics/cpu_monitor: CPU Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) CPU [all,0-9]: status OK / high load / very high load CPU [all,0-9]: usr 2.00% CPU [all,0-9]: nice 0.00% CPU [all,0-9]: sys 1.00% CPU [all,0-9]: idle 97.00%"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-load-average","title":"CPU Load Average","text":"

/diagnostics/cpu_monitor: CPU Load Average

[summary]

level message OK OK WARN high load

[values]

key value (example) 1min 14.50% 5min 14.55% 15min 9.67%"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-thermal-throttling","title":"CPU Thermal Throttling","text":"

Intel and raspi platform only. Tegra platform not supported.

/diagnostics/cpu_monitor: CPU Thermal Throttling

[summary]

level message OK OK ERROR throttling

[values for intel platform]

key value (example) CPU [0-9]: Pkg Thermal Status OK / throttling

[values for raspi platform]

key value (example) status All clear / Currently throttled / Soft temperature limit active"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-frequency","title":"CPU Frequency","text":"

/diagnostics/cpu_monitor: CPU Frequency

[summary]

level message OK OK

[values]

key value (example) CPU [0-9]: clock 2879MHz"},{"location":"system/system_monitor/docs/topics_gpu_monitor/","title":"ROS topics: GPU Monitor","text":"

Intel and tegra platform only. Raspi platform not supported.

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-temperature","title":"GPU Temperature","text":"

/diagnostics/gpu_monitor: GPU Temperature

[summary]

level message OK OK WARN warm ERROR hot

[values]

key (example) value (example) GeForce GTX 1650, thermal_zone[0-9] 46.0 DegC

*key: thermal_zone[0-9] for ARM architecture.

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-usage","title":"GPU Usage","text":"

/diagnostics/gpu_monitor: GPU Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) GPU [0-9]: status OK / high load / very high load GPU [0-9]: name GeForce GTX 1650, gpu.[0-9] GPU [0-9]: usage 19.0%

*key: gpu.[0-9] for ARM architecture.

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-memory-usage","title":"GPU Memory Usage","text":"

Intel platform only. There is no separate GPU memory on Tegra; both the CPU and GPU use CPU memory.

/diagnostics/gpu_monitor: GPU Memory Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) GPU [0-9]: status OK / high load / very high load GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: usage 13.0% GPU [0-9]: total 3G GPU [0-9]: used 1G GPU [0-9]: free 2G"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-thermal-throttling","title":"GPU Thermal Throttling","text":"

Intel platform only. Tegra platform not supported.

/diagnostics/gpu_monitor: GPU Thermal Throttling

[summary]

level message OK OK ERROR throttling

[values]

key value (example) GPU [0-9]: status OK / throttling GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: graphics clock 1020 MHz GPU [0-9]: reasons GpuIdle / SwThermalSlowdown etc."},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-frequency","title":"GPU Frequency","text":"

/diagnostics/gpu_monitor: GPU Frequency

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#intel-platform","title":"Intel platform","text":"

[summary]

level message OK OK WARN unsupported clock

[values]

key value (example) GPU [0-9]: status OK / unsupported clock GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: graphics clock 1020 MHz"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#tegra-platform","title":"Tegra platform","text":"

[summary]

level message OK OK

[values]

key (example) value (example) GPU 17000000.gv11b: clock 318 MHz"},{"location":"system/system_monitor/docs/topics_hdd_monitor/","title":"ROS topics: HDD Monitor","text":""},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-temperature","title":"HDD Temperature","text":"

/diagnostics/hdd_monitor: HDD Temperature

[summary]

level message OK OK WARN hot ERROR critical hot

[values]

key value (example) HDD [0-9]: status OK / hot / critical hot HDD [0-9]: name /dev/nvme0 HDD [0-9]: model SAMSUNG MZVLB1T0HBLR-000L7 HDD [0-9]: serial S4EMNF0M820682 HDD [0-9]: temperature 37.0 DegC not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-poweronhours","title":"HDD PowerOnHours","text":"

/diagnostics/hdd_monitor: HDD PowerOnHours

[summary]

level message OK OK WARN lifetime limit

[values]

key value (example) HDD [0-9]: status OK / lifetime limit HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: power on hours 4834 Hours not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-totaldatawritten","title":"HDD TotalDataWritten","text":"

/diagnostics/hdd_monitor: HDD TotalDataWritten

[summary]

level message OK OK WARN warranty period

[values]

key value (example) HDD [0-9]: status OK / warranty period HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: total data written 146295330 not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-recoverederror","title":"HDD RecoveredError","text":"

/diagnostics/hdd_monitor: HDD RecoveredError

[summary]

level message OK OK WARN high soft error rate

[values]

key value (example) HDD [0-9]: status OK / high soft error rate HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: recovered error 0 not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-usage","title":"HDD Usage","text":"

/diagnostics/hdd_monitor: HDD Usage

[summary]

level message OK OK WARN low disk space ERROR very low disk space

[values]

key value (example) HDD [0-9]: status OK / low disk space / very low disk space HDD [0-9]: filesystem /dev/nvme0n1p4 HDD [0-9]: size 264G HDD [0-9]: used 172G HDD [0-9]: avail 749G HDD [0-9]: use 69% HDD [0-9]: mounted on /"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-readdatarate","title":"HDD ReadDataRate","text":"

/diagnostics/hdd_monitor: HDD ReadDataRate

[summary]

level message OK OK WARN high data rate of read

[values]

key value (example) HDD [0-9]: status OK / high data rate of read HDD [0-9]: name /dev/nvme0 HDD [0-9]: data rate of read 0.00 MB/s"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-writedatarate","title":"HDD WriteDataRate","text":"

/diagnostics/hdd_monitor: HDD WriteDataRate

[summary]

level message OK OK WARN high data rate of write

[values]

key value (example) HDD [0-9]: status OK / high data rate of write HDD [0-9]: name /dev/nvme0 HDD [0-9]: data rate of write 0.00 MB/s"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-readiops","title":"HDD ReadIOPS","text":"

/diagnostics/hdd_monitor: HDD ReadIOPS

[summary]

level message OK OK WARN high IOPS of read

[values]

key value (example) HDD [0-9]: status OK / high IOPS of read HDD [0-9]: name /dev/nvme0 HDD [0-9]: IOPS of read 0.00 IOPS"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-writeiops","title":"HDD WriteIOPS","text":"

/diagnostics/hdd_monitor: HDD WriteIOPS

[summary]

level message OK OK WARN high IOPS of write

[values]

key value (example) HDD [0-9]: status OK / high IOPS of write HDD [0-9]: name /dev/nvme0 HDD [0-9]: IOPS of write 0.00 IOPS"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-connection","title":"HDD Connection","text":"

/diagnostics/hdd_monitor: HDD Connection

[summary]

level message OK OK WARN not connected

[values]

key value (example) HDD [0-9]: status OK / not connected HDD [0-9]: name /dev/nvme0 HDD [0-9]: mount point /"},{"location":"system/system_monitor/docs/topics_mem_monitor/","title":"ROS topics: Memory Monitor","text":""},{"location":"system/system_monitor/docs/topics_mem_monitor/#memory-usage","title":"Memory Usage","text":"

/diagnostics/mem_monitor: Memory Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) Mem: usage 29.72% Mem: total 31.2G Mem: used 6.0G Mem: free 20.7G Mem: shared 2.9G Mem: buff/cache 4.5G Mem: available 21.9G Swap: total 2.0G Swap: used 218M Swap: free 1.8G Total: total 33.2G Total: used 6.2G Total: free 22.5G Total: used+ 9.1G"},{"location":"system/system_monitor/docs/topics_net_monitor/","title":"ROS topics: Net Monitor","text":""},{"location":"system/system_monitor/docs/topics_net_monitor/#network-connection","title":"Network Connection","text":"

/diagnostics/net_monitor: Network Connection

[summary]

level message OK OK WARN no such device

[values]

key value (example) Network [0-9]: status OK / no such device Network [0-9]: name wlp82s0"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-usage","title":"Network Usage","text":"

/diagnostics/net_monitor: Network Usage

[summary]

level message OK OK

[values]

key value (example) Network [0-9]: status OK Network [0-9]: interface name wlp82s0 Network [0-9]: rx_usage 0.00% Network [0-9]: tx_usage 0.00% Network [0-9]: rx_traffic 0.00 MB/s Network [0-9]: tx_traffic 0.00 MB/s Network [0-9]: capacity 400.0 MB/s Network [0-9]: mtu 1500 Network [0-9]: rx_bytes 58455228 Network [0-9]: rx_errors 0 Network [0-9]: tx_bytes 11069136 Network [0-9]: tx_errors 0 Network [0-9]: collisions 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-traffic","title":"Network Traffic","text":"

/diagnostics/net_monitor: Network Traffic

[summary]

level message OK OK

[values when specified program is detected]

key value (example) nethogs [0-9]: program /lambda/greengrassSystemComponents/1384/999 nethogs [0-9]: sent (KB/Sec) 1.13574 nethogs [0-9]: received (KB/Sec) 0.261914

[values when error is occurring]

key value (example) error execve failed: No such file or directory"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-crc-error","title":"Network CRC Error","text":"

/diagnostics/net_monitor: Network CRC Error

[summary]

level message OK OK WARN CRC error

[values]

key value (example) Network [0-9]: interface name wlp82s0 Network [0-9]: total rx_crc_errors 0 Network [0-9]: rx_crc_errors per unit time 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#ip-packet-reassembles-failed","title":"IP Packet Reassembles Failed","text":"

/diagnostics/net_monitor: IP Packet Reassembles Failed

[summary]

level message OK OK WARN reassembles failed

[values]

key value (example) total packet reassembles failed 0 packet reassembles failed per unit time 0"},{"location":"system/system_monitor/docs/topics_ntp_monitor/","title":"ROS topics: NTP Monitor","text":""},{"location":"system/system_monitor/docs/topics_ntp_monitor/#ntp-offset","title":"NTP Offset","text":"

/diagnostics/ntp_monitor: NTP Offset

[summary]

level message OK OK WARN high ERROR too high

[values]

key value (example) NTP Offset -0.013181 sec NTP Delay 0.053880 sec"},{"location":"system/system_monitor/docs/topics_process_monitor/","title":"ROS topics: Process Monitor","text":""},{"location":"system/system_monitor/docs/topics_process_monitor/#tasks-summary","title":"Tasks Summary","text":"

/diagnostics/process_monitor: Tasks Summary

[summary]

level message OK OK

[values]

key value (example) total 409 running 2 sleeping 321 stopped 0 zombie 0"},{"location":"system/system_monitor/docs/topics_process_monitor/#high-load-proc0-9","title":"High-load Proc[0-9]","text":"

/diagnostics/process_monitor: High-load Proc[0-9]

[summary]

level message OK OK

[values]

key value (example) COMMAND /usr/lib/firefox/firefox %CPU 37.5 %MEM 2.1 PID 14062 USER autoware PR 20 NI 0 VIRT 3461152 RES 669052 SHR 481208 S S TIME+ 23:57.49"},{"location":"system/system_monitor/docs/topics_process_monitor/#high-mem-proc0-9","title":"High-mem Proc[0-9]","text":"

/diagnostics/process_monitor: High-mem Proc[0-9]

[summary]

level message OK OK

[values]

key value (example) COMMAND /snap/multipass/1784/usr/bin/qemu-system-x86_64 %CPU 0 %MEM 2.5 PID 1565 USER root PR 20 NI 0 VIRT 3722320 RES 812432 SHR 20340 S S TIME+ 0:22.84"},{"location":"system/system_monitor/docs/topics_voltage_monitor/","title":"ROS topics: Voltage Monitor","text":"

\"CMOS Battery Status\" and \"CMOS battery voltage\" are exclusive. Only one or the other is generated. Which one is generated depends on the value of cmos_battery_label.

"},{"location":"system/system_monitor/docs/topics_voltage_monitor/#cmos-battery-status","title":"CMOS Battery Status","text":"

/diagnostics/voltage_monitor: CMOS Battery Status

[summary]

level message OK OK WARN Battery Dead

[values]

key (example) value (example) CMOS battery status OK / Battery Dead


"},{"location":"system/system_monitor/docs/topics_voltage_monitor/#cmos-battery-voltage","title":"CMOS Battery Voltage","text":"

/diagnostics/voltage_monitor: CMOS battery voltage

[summary]

level message OK OK WARN Low Battery WARN Battery Died

[values]

key value (example) CMOS battery voltage 3.06"},{"location":"system/system_monitor/docs/traffic_reader/","title":"traffic_reader","text":""},{"location":"system/system_monitor/docs/traffic_reader/#name","title":"Name","text":"

traffic_reader - monitoring network traffic by process

"},{"location":"system/system_monitor/docs/traffic_reader/#synopsis","title":"Synopsis","text":"

traffic_reader [OPTION]

"},{"location":"system/system_monitor/docs/traffic_reader/#description","title":"Description","text":"

Monitoring network traffic by process. This runs as a daemon process and listens to a TCP/IP port (7636 by default).

Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to

Exit status: Returns 0 if OK; non-zero otherwise.

"},{"location":"system/system_monitor/docs/traffic_reader/#notes","title":"Notes","text":"

The 'traffic_reader' requires the nethogs command.

"},{"location":"system/system_monitor/docs/traffic_reader/#operation-confirmed-platform","title":"Operation confirmed platform","text":""},{"location":"system/topic_state_monitor/Readme/","title":"topic_state_monitor","text":""},{"location":"system/topic_state_monitor/Readme/#purpose","title":"Purpose","text":"

This node monitors an input topic for abnormalities such as timeouts and low frequency. The resulting topic status is published as diagnostics.

"},{"location":"system/topic_state_monitor/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The types of topic status and the corresponding diagnostic status are as follows.

Topic status Diagnostic status Description OK OK The topic has no abnormalities NotReceived ERROR The topic has not been received yet WarnRate WARN The frequency of the topic has dropped ErrorRate ERROR The frequency of the topic has dropped significantly Timeout ERROR The topic subscription is stopped for a certain time"},{"location":"system/topic_state_monitor/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/topic_state_monitor/Readme/#input","title":"Input","text":"Name Type Description any name any type Target topic to subscribe to and monitor"},{"location":"system/topic_state_monitor/Readme/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs/DiagnosticArray Diagnostics outputs"},{"location":"system/topic_state_monitor/Readme/#parameters","title":"Parameters","text":""},{"location":"system/topic_state_monitor/Readme/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description topic string - Name of target topic topic_type string - Type of target topic (used if the topic is not transform) frame_id string - Frame ID of transform parent (used if the topic is transform) child_frame_id string - Frame ID of transform child (used if the topic is transform) transient_local bool false QoS policy of topic subscription (Transient Local/Volatile) best_effort bool false QoS policy of topic subscription (Best Effort/Reliable) diag_name string - Name used for the diagnostics to publish update_rate double 10.0 Timer callback frequency [Hz]"},{"location":"system/topic_state_monitor/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description warn_rate double 0.5 If the topic rate is lower than this value, the topic status becomes WarnRate error_rate double 0.1 If the topic rate is lower than this value, the topic status becomes ErrorRate timeout double 1.0 If the topic subscription is stopped for more than this time [s], the topic status becomes Timeout window_size int 10 Window size of target topic for calculating frequency"},{"location":"system/topic_state_monitor/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/velodyne_monitor/Readme/","title":"velodyne_monitor","text":""},{"location":"system/velodyne_monitor/Readme/#purpose","title":"Purpose","text":"

This node monitors the status of Velodyne LiDARs. The result of the status check is published as diagnostics. Take care not to use these diagnostics to decide whether the LiDAR has an error; please read Assumptions / Known limits for the detailed reason.

"},{"location":"system/velodyne_monitor/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The status of Velodyne LiDAR can be retrieved from http://[ip_address]/cgi/{info, settings, status, diag}.json.

The types of abnormal status and the corresponding diagnostic status are as follows.

Abnormal status Diagnostic status No abnormality OK Top board temperature is too cold ERROR Top board temperature is cold WARN Top board temperature is too hot ERROR Top board temperature is hot WARN Bottom board temperature is too cold ERROR Bottom board temperature is cold WARN Bottom board temperature is too hot ERROR Bottom board temperature is hot WARN Rpm(Rotations per minute) of the motor is too low ERROR Rpm(Rotations per minute) of the motor is low WARN Connection error (cannot get Velodyne LiDAR status) ERROR"},{"location":"system/velodyne_monitor/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/velodyne_monitor/Readme/#input","title":"Input","text":"

None

"},{"location":"system/velodyne_monitor/Readme/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs/DiagnosticArray Diagnostics outputs"},{"location":"system/velodyne_monitor/Readme/#parameters","title":"Parameters","text":""},{"location":"system/velodyne_monitor/Readme/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description timeout double 0.5 Timeout for HTTP request to get Velodyne LiDAR status [s]"},{"location":"system/velodyne_monitor/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description ip_address string \"192.168.1.201\" IP address of target Velodyne LiDAR temp_cold_warn double -5.0 If the temperature of Velodyne LiDAR is lower than this value, the diagnostics status becomes WARN [\u00b0C] temp_cold_error double -10.0 If the temperature of Velodyne LiDAR is lower than this value, the diagnostics status becomes ERROR [\u00b0C] temp_hot_warn double 75.0 If the temperature of Velodyne LiDAR is higher than this value, the diagnostics status becomes WARN [\u00b0C] temp_hot_error double 80.0 If the temperature of Velodyne LiDAR is higher than this value, the diagnostics status becomes ERROR [\u00b0C] rpm_ratio_warn double 0.80 If the rpm rate of the motor (= current rpm / default rpm) is lower than this value, the diagnostics status becomes WARN rpm_ratio_error double 0.70 If the rpm rate of the motor (= current rpm / default rpm) is lower than this value, the diagnostics status becomes ERROR"},{"location":"system/velodyne_monitor/Readme/#config-files","title":"Config files","text":"

Config files for several Velodyne models are prepared. The temp_*** parameters are set with reference to the operational temperature from each datasheet. Moreover, the temp_hot_*** values of each model are set up to 20 degrees above the upper operational temperature. Currently, VLP-16.param.yaml is used as the default argument because it is the lowest-spec model.
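
As a sketch, such a config file plausibly contains entries like the following, assembled from the Core Parameters table above (the actual file contents may differ):

# Illustrative sketch assembled from the Core Parameters defaults; the actual file may differ\n/**:\n  ros__parameters:\n    ip_address: \"192.168.1.201\"\n    temp_cold_warn: -5.0\n    temp_cold_error: -10.0\n    temp_hot_warn: 75.0\n    temp_hot_error: 80.0\n    rpm_ratio_warn: 0.80\n    rpm_ratio_error: 0.70\n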

Model Name Config name Operational Temperature [\u2103] VLP-16 VLP-16.param.yaml -10 to 60 VLP-32C VLP-32C.param.yaml -20 to 60 VLS-128 VLS-128.param.yaml -20 to 60 Velarray M1600 Velarray_M1600.param.yaml -40 to 85 HDL-32E HDL-32E.param.yaml -10 to 60"},{"location":"system/velodyne_monitor/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This node uses http_client and requests results via the GET method. It takes a few seconds to get results, or a timeout exception is generated if the GET request does not succeed. This occurs frequently, and the diagnostics aggregator then outputs STALE. Therefore, it is recommended not to use these results to decide whether the LiDAR has an error, but only to monitor them to confirm the LiDAR status.

"},{"location":"tools/simulator_test/simulator_compatibility_test/","title":"simulator_compatibility_test","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#purpose","title":"Purpose","text":"

Test procedures (e.g. test codes) to check whether a certain simulator is compatible with Autoware

"},{"location":"tools/simulator_test/simulator_compatibility_test/#overview-of-the-test-codes","title":"Overview of the test codes","text":"

File structure

  1. test_base provides shared methods for testing. Other test codes are created based on functions defined here.
  2. test_sim_common_manual_testing provides the most basic functions. Any simulator can be tested using codes here. However, to make these codes usable with any simulators, the codes do not include any features for test automation.
  3. test_morai_sim is an automated version of test_sim_common_manual_testing for MORAI SIM: Drive. Thus, it includes 'MORAI SIM: Drive'-specific code. Users of other simulators may create a similar version for their simulator of interest.
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-procedures-for-test_sim_common_manual_testing","title":"Test Procedures for test_sim_common_manual_testing","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#build-process-before-test","title":"Build process before test","text":"
source install/setup.bash\ncolcon build --packages-select simulator_compatibility_test\ncd src/universe/autoware.universe/tools/simulator_test/simulator_compatibility_test/test_sim_common_manual_testing\n

To run each test case manually

"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-1","title":"Test Case #1","text":"
  1. Run your simulator
  2. Load a map and an ego vehicle for the test
  3. Run the test using the following command

    python -m pytest test_01_control_mode_and_report.py\n
  4. Check if expected behavior is created within the simulator

    • Ego vehicle control mode is changed into Manual (If the simulator has a GUI for this one, it should display the ego is in Manual)
    • Ego vehicle control mode is changed into Auto (If the simulator has a GUI for this one, it should display the ego is in Auto)
  5. Check whether the pytest output is passed or failed
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-2","title":"Test Case #2","text":"
  1. Run your simulator (If the simulator is already running, skip this part)
  2. Load a map and an ego vehicle for the test (If a map and an ego are loaded already, skip this part)
  3. Run the test using the following command

    python -m pytest test_02_change_gear_and_report.py\n
  4. Check if expected behavior is created within the simulator

    • Ego vehicle gear mode is changed into \"P\" (If the simulator has a GUI for this one, it should display the gear mode is in \"P\")
    • Ego vehicle gear mode is changed into \"N\" (If the simulator has a GUI for this one, it should display the gear mode is in \"N\")
    • Ego vehicle gear mode is changed into \"R\" (If the simulator has a GUI for this one, it should display the gear mode is in \"R\")
    • Ego vehicle gear mode is changed into \"D\" (If the simulator has a GUI for this one, it should display the gear mode is in \"D\")
  5. Check whether the pytest output is passed or failed
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-3","title":"Test Case #3","text":"
  1. Run your simulator (If the simulator is already running, skip this part)
  2. Load a map and an ego vehicle for the test (If a map and an ego are loaded already, skip this part)
  3. Run the test using the following command

    python -m pytest test_03_longitudinal_command_and_report.py\n
  4. Check if expected behavior is created within the simulator

    • Ego vehicle longitudinal velocity is greater than 10 kph (If the simulator has a GUI for this one, it should display the longitudinal velocity is greater than 10 kph)
    • Ego vehicle longitudinal velocity is going below 10 kph. This is an ego vehicle initialization process to ensure that the subsequent acceleration is produced by the longitudinal.acceleration value (If the simulator has a GUI for this one, it should display the longitudinal velocity is less than 10 kph)
    • Ego vehicle longitudinal velocity is greater than 10 kph (If the simulator has a GUI for this one, it should display the longitudinal velocity is greater than 10 kph)
    • Ego vehicle longitudinal velocity is going below 10 kph. This is an ego vehicle reset process to tear down this test case.
  5. Check whether the pytest output is passed or failed
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-4","title":"Test Case #4","text":"
  1. Run your simulator (If the simulator is already running, skip this part)
  2. Load a map and an ego vehicle for the test (If a map and an ego are loaded already, skip this part)
  3. Run the test using the following command

    python -m pytest test_04_lateral_command_and_report.py\n
  4. Check if expected behavior is created within the simulator

    • Ego vehicle steering and/or tire value is greater than 0 degree (If the simulator has a GUI for this one, it should display the steering and/or tire is greater than 0 degree)
    • Ego vehicle steering and/or tire value is 0 degree. This is a reset process. (If the simulator has a GUI for this one, it should display the steering and/or tire is 0 degree)
    • Ego vehicle steering and/or tire value is less than 0 degree (If the simulator has a GUI for this one, it should display the steering and/or tire is less than 0 degree)
    • Ego vehicle steering and/or tire value is 0 degree. This is a reset process. (If the simulator has a GUI for this one, it should display the steering and/or tire is 0 degree)
  5. Check whether the pytest output is passed or failed
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-5","title":"Test Case #5","text":"
  1. Run your simulator (If the simulator is already running, skip this part)
  2. Load a map and an ego vehicle for the test (If a map and an ego are loaded already, skip this part)
  3. Run the test using the following command

    python -m pytest test_05_turn_indicators_cmd_and_report.py\n
  4. Check if expected behavior is created within the simulator

    • Ego vehicle left turn indicator is turned on (If the simulator has a GUI for this one, it should display the left turn indicator is turned on)
    • Ego vehicle right turn indicator is turned on (If the simulator has a GUI for this one, it should display the right turn indicator is turned on)
    • Ego vehicle both turn indicators are turned off. This is a reset process. (If the simulator has a GUI for this one, it should display both left and right turn indicators are turned off)
  5. Check whether the pytest output is passed or failed
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-6","title":"Test Case #6","text":"
  1. Run your simulator (If the simulator is already running, skip this part)
  2. Load a map and an ego vehicle for the test (If a map and an ego are loaded already, skip this part)
  3. Run the test using the following command

    python -m pytest test_06_hazard_lights_cmd_and_report.py\n
  4. Check if expected behavior is created within the simulator

    • Ego vehicle hazard lights are turned on (If the simulator has a GUI for this one, it should display the hazard lights are turned on or blinking)
    • Ego vehicle hazard lights are turned off. This is a reset process. (If the simulator has a GUI for this one, it should display the hazard lights are turned off)
  5. Check whether the pytest output is passed or failed
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-procedures-for-test_morai_sim","title":"Test Procedures for test_morai_sim","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#build-process-before-test_1","title":"Build process before test","text":"
source install/setup.bash\ncolcon build --packages-select simulator_compatibility_test\ncd src/universe/autoware.universe/tools/simulator_test/simulator_compatibility_test/test_morai_sim\n

Detailed process

(WIP)

"},{"location":"tools/simulator_test/simulator_compatibility_test/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#input","title":"Input","text":"Name Type Description /vehicle/status/control_mode autoware_auto_vehicle_msgs::msg::ControlModeReport for [Test Case #1] /vehicle/status/gear_status autoware_auto_vehicle_msgs::msg::GearReport for [Test Case #2] /vehicle/status/velocity_status autoware_auto_vehicle_msgs::msg::VelocityReport for [Test Case #3] /vehicle/status/steering_status autoware_auto_vehicle_msgs::msg::SteeringReport for [Test Case #4] /vehicle/status/turn_indicators_status autoware_auto_vehicle_msgs::msg::TurnIndicatorsReport for [Test Case #5] /vehicle/status/hazard_lights_status autoware_auto_vehicle_msgs::msg::HazardLightsReport for [Test Case #6]"},{"location":"tools/simulator_test/simulator_compatibility_test/#output","title":"Output","text":"Name Type Description /control/command/control_mode_cmd autoware_auto_vehicle_msgs/ControlModeCommand for [Test Case #1] /control/command/gear_cmd autoware_auto_vehicle_msgs/GearCommand for [Test Case #2] /control/command/control_cmd autoware_auto_vehicle_msgs/AckermannControlCommand for [Test Case #3, #4] /vehicle/status/steering_status autoware_auto_vehicle_msgs/TurnIndicatorsCommand for [Test Case #5] /control/command/turn_indicators_cmd autoware_auto_vehicle_msgs/HazardLightsCommand for [Test Case #6]"},{"location":"tools/simulator_test/simulator_compatibility_test/#parameters","title":"Parameters","text":"

None.

"},{"location":"tools/simulator_test/simulator_compatibility_test/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"tools/simulator_test/simulator_compatibility_test/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"tools/simulator_test/simulator_compatibility_test/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

None.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/","title":"accel_brake_map_calibrator","text":"

The role of this node is to automatically calibrate accel_map.csv / brake_map.csv used in the raw_vehicle_cmd_converter node.

The base map, which is the Lexus map by default, is updated iteratively with the loaded driving data.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#how-to-calibrate","title":"How to calibrate","text":""},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#launch-calibrator","title":"Launch Calibrator","text":"

After launching Autoware, run the accel_brake_map_calibrator with the following command and then perform autonomous driving. Note: you can collect data with manual driving if it is possible to use the same vehicle interface as during autonomous driving (e.g. using a joystick).

ros2 launch accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true\n

Or if you want to use rosbag files, run the following commands.

ros2 launch accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true use_sim_time:=true\nros2 bag play <rosbag_file> --clock\n

During calibration, if the parameter progress_file_output is set to true, a log file is output to [directory of accel_brake_map_calibrator]/config/ . You can also find the accel and brake maps in [directory of accel_brake_map_calibrator]/config/accel_map.csv and [directory of accel_brake_map_calibrator]/config/brake_map.csv after calibration.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#calibration-plugin","title":"Calibration plugin","text":"

The rviz:=true option displays RViz with the calibration plugin as shown below.

The current status (velocity and pedal) is shown in the plugin. The color of the current cell is green or red depending on whether the current data is valid or invalid. Data that does not satisfy the following conditions is considered invalid and will not be used for estimation, since aggressive data (e.g. when the pedal is moving fast) degrades calibration accuracy.

The detailed parameters are described in the parameter section.

Note: You don't need to worry about whether the current state is red or green during calibration. Just keep getting data until all the cells turn red.

The value of each cell in the map is gray at first, and it changes from blue to red as the number of valid data points in the cell accumulates. It is preferable to continue the calibration until each cell of the map becomes close to red. In particular, the performance near a stop depends strongly on the 0 ~ 6 m/s velocity range and the +0.2 ~ -0.4 pedal value range, so it is desirable to focus on those areas.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#diagnostics","title":"Diagnostics","text":"

The accel_brake_map_calibrator publishes a diagnostics message that reflects the calibration status. The diagnostic type WARN indicates that the current accel/brake map is estimated to be inaccurate. In this situation, it is strongly recommended to re-calibrate the accel/brake map.

Status Diagnostics Type Diagnostics message Description No calibration required OK \"OK\" Calibration Required WARN \"Accel/brake map Calibration is required.\" The accuracy of current accel/brake map may be low.

This diagnostics status can be also checked on the following ROS topic.

ros2 topic echo /accel_brake_map_calibrator/output/update_suggest\n

When the diagnostics type is WARN, True is published on this topic, and an update of the accel/brake map is suggested.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#evaluation-of-the-accel-brake-map-accuracy","title":"Evaluation of the accel / brake map accuracy","text":"

The accuracy of the map is evaluated by the Root Mean Squared Error (RMSE) between the observed acceleration and the predicted acceleration.
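
Concretely, for N data samples this is the standard RMSE definition (the notation is introduced here for illustration, not taken from the package):

\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( a_{\mathrm{obs},i} - a_{\mathrm{pred},i} \right)^{2}}

where a_obs,i is the i-th observed acceleration and a_pred,i is the acceleration predicted by the current accel/brake map.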

TERMS: the observed acceleration is the acceleration actually measured while driving, and the predicted acceleration is the output of the current accel/brake map for the same velocity and pedal input.

You can check additional error information with the following topics.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#how-to-visualize-calibration-data","title":"How to visualize calibration data","text":"

The calibration process can be visualized as shown below. Since these scripts need the log output of the calibration, the pedal_accel_graph_output parameter must be set to true while the calibration is running.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#visualize-plot-of-relation-between-acceleration-and-pedal","title":"Visualize plot of relation between acceleration and pedal","text":"

The following command shows the plot of used data in the calibration. In each plot of velocity ranges, you can see the distribution of the relationship between pedal and acceleration, and raw data points with colors according to their pitch angles.

ros2 run accel_brake_map_calibrator view_plot.py\n

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#visualize-statistics-about-accelerationvelocitypedal-data","title":"Visualize statistics about acceleration/velocity/pedal data","text":"

The following command shows the statistics of all data in each map cell used in the calibration:

ros2 run accel_brake_map_calibrator view_statistics.py\n

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#how-to-save-the-calibrated-accel-brake-map-anytime-you-want","title":"How to save the calibrated accel / brake map anytime you want","text":"

You can save the accel and brake maps at any time with the following command.

ros2 service call /accel_brake_map_calibrator/update_map_dir tier4_vehicle_msgs/srv/UpdateAccelBrakeMap \"path: '<accel/brake map directory>'\"\n

You can also save the accel and brake maps in the default directory where Autoware reads accel_map.csv/brake_map.csv, using the RViz plugin (AccelBrakeMapCalibratorButtonPanel) as follows.

  1. Click the Panels tab, and select AccelBrakeMapCalibratorButtonPanel.

  2. Select the panel, and the button will appear at the bottom of RViz.

  3. Press the button, and the accel / brake map will be saved. (The button cannot be pressed in certain situations, such as when the calibrator node is not running.)

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#parameters","title":"Parameters","text":""},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#system-parameters","title":"System Parameters","text":"Name Type Description Default value update_method string you can select map calibration method. \"update_offset_each_cell\" calculates offsets for each grid cells on the map. \"update_offset_total\" calculates the total offset of the map. \"update_offset_each_cell\" get_pitch_method string \"tf\": get pitch from tf, \"none\": unable to perform pitch validation and pitch compensation \"tf\" pedal_accel_graph_output bool if true, it will output a log of the pedal accel graph. true progress_file_output bool if true, it will output a log and csv file of the update process. false default_map_dir str directory of default map [directory of raw_vehicle_cmd_converter]/data/default/ calibrated_map_dir str directory of calibrated map [directory of accel_brake_map_calibrator]/config/ update_hz double hz for update 10.0"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#algorithm-parameters","title":"Algorithm Parameters","text":"Name Type Description Default value initial_covariance double Covariance of initial acceleration map (larger covariance makes the update speed faster) 0.05 velocity_min_threshold double Speeds smaller than this are not used for updating. 0.1 velocity_diff_threshold double When the velocity data is more than this threshold away from the grid reference speed (center value), the associated data is not used for updating. 0.556 max_steer_threshold double If the steer angle is greater than this value, the associated data is not used for updating. 0.2 max_pitch_threshold double If the pitch angle is greater than this value, the associated data is not used for updating. 0.02 max_jerk_threshold double If the ego jerk calculated from ego acceleration is greater than this value, the associated data is not used for updating. 0.7 pedal_velocity_thresh double If the pedal moving speed is greater than this value, the associated data is not used for updating. 0.15 pedal_diff_threshold double If the current pedal value is more then this threshold away from the previous value, the associated data is not used for updating. 0.03 max_accel double Maximum value of acceleration calculated from velocity source. 5.0 min_accel double Minimum value of acceleration calculated from velocity source. -5.0 pedal_to_accel_delay double The delay time between actuation_cmd to acceleration, considered in the update logic. 0.3 update_suggest_thresh double threshold of RMSE ratio that update suggest flag becomes true. ( RMSE ratio: [RMSE of new map] / [RMSE of original map] ) 0.7 max_data_count int For visualization. When the data num of each grid gets this value, the grid color gets red. 100"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#test-utility-scripts","title":"Test utility scripts","text":""},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#constant-accelbrake-command-test","title":"Constant accel/brake command test","text":"

These scripts are useful for testing the accel/brake map calibration. They generate an ActuationCmd with a constant accel/brake value given interactively by a user through the CLI.

The accel_tester.py and brake_tester.py scripts receive a target accel/brake command from the CLI and send it to actuation_cmd_publisher.py, which generates the ActuationCmd. You can run these scripts with the following commands in different terminals; the result will look like the screenshot below.

ros2 run accel_brake_map_calibrator accel_tester.py\nros2 run accel_brake_map_calibrator brake_tester.py\nros2 run accel_brake_map_calibrator actuation_cmd_publisher.py\n

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#calibration-method","title":"Calibration Method","text":"

Two algorithms are selectable for the acceleration map update: update_offset_four_cell_around and update_offset_each_cell. Please see the link for details.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#data-preprocessing","title":"Data Preprocessing","text":"

Before calibration, missing or unusable data (e.g., data with excessively large steering angles) must first be eliminated. The following parameters are used to determine which data to remove.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#parameters_1","title":"Parameters","text":"Name Description Default Value velocity_min_threshold Exclude minimal velocity 0.1 max_steer_threshold Exclude large steering angle 0.2 max_pitch_threshold Exclude large pitch angle 0.02 max_jerk_threshold Exclude large jerk 0.7 pedal_velocity_thresh Exclude large pedaling speed 0.15"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#update_offset_each_cell","title":"update_offset_each_cell","text":"

Updates by the Recursive Least Squares (RLS) method, using only data close enough to each grid point.

Advantage: Only data close enough to each grid point is used for calibration, allowing accurate updates at each point.

Disadvantage: Calibration is time-consuming because a large amount of data is excluded.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#parameters_2","title":"Parameters","text":"

Data selection is determined by the following thresholds.

| Name | Default Value |
| ---- | ------------- |
| velocity_diff_threshold | 0.556 |
| pedal_diff_threshold | 0.03 |
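For illustration, the update formula given in the next subsection reduces to a scalar RLS step per cell. A minimal sketch, assuming x = 1 (phi) and y = the measured acceleration as in the variables table; the names and defaults are illustrative, not the node's actual code:

struct CellEstimate\n{\ndouble offset = 0.0;      // map_offset (theta)\ndouble covariance = 0.05; // p, e.g. the initial_covariance default\n};\n\n// One RLS step with regressor x = 1: theta += p x (y - theta x) / (lambda + p x^2); p /= (lambda + p x^2)\nvoid updateCell(CellEstimate & c, double measured_acc, double forgetting_factor)\n{\nconst double x = 1.0; // phi\nconst double denom = forgetting_factor + c.covariance * x * x;\nc.offset += c.covariance * x / denom * (measured_acc - c.offset * x);\nc.covariance /= denom;\n}\n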

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#update-formula","title":"Update formula","text":"\\begin{align} \\theta[n]=& \\theta[n-1]+\\frac{p[n-1]x^{(n)}}{\\lambda+p[n-1]{(x^{(n)})}^2}(y^{(n)}-\\theta[n-1]x^{(n)})\\\\ p[n]=&\\frac{p[n-1]}{\\lambda+p[n-1]{(x^{(n)})}^2} \\end{align}"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#variables","title":"Variables","text":"Variable name Symbol covariance p[n-1] map_offset \\theta[n] forgettingfactor \\lambda phi x(=1) measured_acc y"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#update_offset_four_cell_around-1","title":"update_offset_four_cell_around [1]","text":"

Updates the offsets by RLS in the four grid cells around newly obtained data. By considering linear interpolation, the update takes appropriate weights into account, so there is no need to remove data by thresholding.

Advantage: No data is wasted because updates are performed on the four grid cells around the data with appropriate weighting. Disadvantage: Accuracy may be degraded by extreme bias in the data. For example, if the data z(k) is biased near Z_{RR} in Fig. 2, updating is performed at the four surrounding points (Z_{RR}, Z_{RL}, Z_{LR}, and Z_{LL}), but accuracy at Z_{LL} is not expected.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#implementation","title":"Implementation","text":"

See eq. (7)-(10) in [1] for the update formula. In addition, eq. (17), (18) from [1] are used for anti-windup.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#references","title":"References","text":"

[1] Gabrielle Lochrie, Michael Doljevic, Mario Nona, Yongsoon Yoon, Anti-Windup Recursive Least Squares Method for Adaptive Lookup Tables with Application to Automotive Powertrain Control Systems, IFAC-PapersOnLine, Volume 54, Issue 20, 2021, Pages 840-845

"},{"location":"vehicle/external_cmd_converter/","title":"external_cmd_converter","text":"

external_cmd_converter is a node that converts a desired mechanical input to acceleration and velocity by using the accel/brake map.
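As a rough sketch of what using the accel/brake map means here, a bilinear lookup over a table whose rows are throttle values and whose columns are velocities can be done as follows. This is illustrative only, not the node's actual code:

#include <algorithm>\n#include <cstddef>\n#include <vector>\n\n// Bilinearly interpolate an acceleration from accel[throttle_index][velocity_index].\ndouble lookupAccelMap(\nconst std::vector<double> & vels, const std::vector<double> & throttles,\nconst std::vector<std::vector<double>> & accel, double v, double t)\n{\n// For a value x on an axis, find the lower grid index i and the ratio r in [0, 1].\nauto seg = [](const std::vector<double> & axis, double x, std::size_t & i, double & r) {\nx = std::clamp(x, axis.front(), axis.back());\ni = std::upper_bound(axis.begin(), axis.end(), x) - axis.begin();\ni = std::clamp<std::size_t>(i, 1, axis.size() - 1) - 1;\nr = (x - axis[i]) / (axis[i + 1] - axis[i]);\n};\nstd::size_t iv, it;\ndouble rv, rt;\nseg(vels, v, iv, rv);\nseg(throttles, t, it, rt);\nconst double low = accel[it][iv] + rv * (accel[it][iv + 1] - accel[it][iv]);\nconst double high = accel[it + 1][iv] + rv * (accel[it + 1][iv + 1] - accel[it + 1][iv]);\nreturn low + rt * (high - low);\n}\n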

"},{"location":"vehicle/external_cmd_converter/#input-topics","title":"Input topics","text":"Name Type Description ~/in/external_control_cmd tier4_external_api_msgs::msg::ControlCommand target throttle/brake/steering_angle/steering_angle_velocity is necessary to calculate desired control command. ~/input/shift_cmd\" autoware_auto_vehicle_msgs::GearCommand current command of gear. ~/input/emergency_stop tier4_external_api_msgs::msg::Heartbeat emergency heart beat for external command. ~/input/current_gate_mode tier4_control_msgs::msg::GateMode topic for gate mode. ~/input/odometry navigation_msgs::Odometry twist topic in odometry is used."},{"location":"vehicle/external_cmd_converter/#output-topics","title":"Output topics","text":"Name Type Description ~/out/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand ackermann control command converted from selected external command"},{"location":"vehicle/external_cmd_converter/#parameters","title":"Parameters","text":"Parameter Type Description timer_rate double timer's update rate wait_for_first_topic double if time out check is done after receiving first topic control_command_timeout double time out check for control command emergency_stop_timeout double time out check for emergency stop command"},{"location":"vehicle/external_cmd_converter/#limitation","title":"Limitation","text":"

tbd.

"},{"location":"vehicle/raw_vehicle_cmd_converter/","title":"raw_vehicle_cmd_converter","text":"

raw_vehicle_cmd_converter is a node that converts a desired acceleration and velocity to a mechanical input by using feedforward + feedback control (optional).

"},{"location":"vehicle/raw_vehicle_cmd_converter/#input-topics","title":"Input topics","text":"Name Type Description ~/input/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand target velocity/acceleration/steering_angle/steering_angle_velocity is necessary to calculate actuation command. ~/input/steering\" autoware_auto_vehicle_msgs::SteeringReport current status of steering used for steering feed back control ~/input/twist navigation_msgs::Odometry twist topic in odometry is used."},{"location":"vehicle/raw_vehicle_cmd_converter/#output-topics","title":"Output topics","text":"Name Type Description ~/output/actuation_cmd tier4_vehicle_msgs::msg::ActuationCommandStamped actuation command for vehicle to apply mechanical input"},{"location":"vehicle/raw_vehicle_cmd_converter/#parameters","title":"Parameters","text":"Parameter Type Description update_rate double timer's update rate th_max_message_delay_sec double threshold time of input messages' maximum delay th_arrived_distance_m double threshold distance to check if vehicle has arrived at the trajectory's endpoint th_stopped_time_sec double threshold time to check if vehicle is stopped th_stopped_velocity_mps double threshold velocity to check if vehicle is stopped"},{"location":"vehicle/raw_vehicle_cmd_converter/#limitation","title":"Limitation","text":"

The current feedback implementation is applied only to the steering control.

"},{"location":"vehicle/steer_offset_estimator/Readme/","title":"steer_offset_estimator","text":""},{"location":"vehicle/steer_offset_estimator/Readme/#purpose","title":"Purpose","text":"

The role of this node is to automatically calibrate the steer_offset used in the vehicle_interface node.

The base steer offset value, which is 0 by default, is updated iteratively with the loaded driving data. This module is supposed to be used in the straight driving situation shown below.

"},{"location":"vehicle/steer_offset_estimator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Sequential steering offsets are estimated from the kinematic model and state observations. The node calculates the yaw rate error and then estimates the steering error recursively by the least squares method; for more details, see the updateSteeringOffset() function.
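The idea can be sketched as follows, assuming the small-angle kinematic bicycle model yaw_rate = velocity * (steer + offset) / wheelbase. This is a hypothetical sketch, not the actual updateSteeringOffset() implementation:

struct SteerOffsetRls\n{\ndouble offset = 0.0;     // estimated steering offset [rad]\ndouble covariance = 1.0; // illustrative initial covariance\n\nvoid update(double velocity, double steer, double yaw_rate, double forgetting_factor, double wheelbase)\n{\nconst double x = velocity / wheelbase; // regressor of the linear model y = x * offset\nconst double y = yaw_rate - x * steer; // yaw rate error attributed to the offset\nconst double denom = forgetting_factor + covariance * x * x;\noffset += covariance * x / denom * (y - x * offset);\ncovariance /= denom;\n}\n};\n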

"},{"location":"vehicle/steer_offset_estimator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"vehicle/steer_offset_estimator/Readme/#input","title":"Input","text":"Name Type Description ~/input/twist geometry_msgs::msg::TwistStamped vehicle twist ~/input/steer autoware_auto_vehicle_msgs::msg::SteeringReport steering"},{"location":"vehicle/steer_offset_estimator/Readme/#output","title":"Output","text":"Name Type Description ~/output/steering_offset tier4_debug_msgs::msg::Float32Stamped steering offset ~/output/steering_offset_covariance tier4_debug_msgs::msg::Float32Stamped covariance of steering offset"},{"location":"vehicle/steer_offset_estimator/Readme/#launch-calibrator","title":"Launch Calibrator","text":"

After launching Autoware, run the steer_offset_estimator with the following command and then perform autonomous driving. Note: you can collect data with manual driving if it is possible to use the same vehicle interface as during autonomous driving (e.g. using a joystick).

ros2 launch steer_offset_estimator steer_offset_estimator.launch.xml\n

Or if you want to use rosbag files, run the following commands.

ros2 param set /use_sim_time true\nros2 bag play <rosbag_file> --clock\n
"},{"location":"vehicle/steer_offset_estimator/Readme/#parameters","title":"Parameters","text":"Params Description steer_update_hz update hz initial_covariance steer offset is larger than tolerance forgetting_factor weight of using previous value valid_min_velocity velocity below this value is not used valid_max_steer steer above this value is not used warn_steer_offset_deg warn if offset is above this value"},{"location":"vehicle/steer_offset_estimator/Readme/#diagnostics","title":"Diagnostics","text":"

The steer_offset_estimator publishes a diagnostics message according to the calibration status. The diagnostic type WARN indicates that the current steer_offset is estimated to be inaccurate. In this situation, it is strongly recommended to re-calibrate the steer_offset.

| Status | Diagnostics Type | Diagnostics message |
| ------ | ---------------- | ------------------- |
| No calibration required | OK | \"Preparation\" |
| Calibration Required | WARN | \"Steer offset is larger than tolerance\" |

This diagnostics status can also be checked on the following ROS topic.

ros2 topic echo /vehicle/status/steering_offset\n
"},{"location":"vehicle/vehicle_info_util/Readme/","title":"Vehicle Info Util","text":""},{"location":"vehicle/vehicle_info_util/Readme/#purpose","title":"Purpose","text":"

This package is used to get vehicle info parameters.

"},{"location":"vehicle/vehicle_info_util/Readme/#description","title":"Description","text":"

Here, you can check the vehicle dimensions in more detail.

"},{"location":"vehicle/vehicle_info_util/Readme/#scripts","title":"Scripts","text":""},{"location":"vehicle/vehicle_info_util/Readme/#minimum-turning-radius","title":"Minimum turning radius","text":"
$ ros2 run vehicle_info_util min_turning_radius_calculator.py\nyaml path is /home/autoware/pilot-auto/install/vehicle_info_util/share/vehicle_info_util/config/vehicle_info.param.yaml\nMinimum turning radius is 3.253042620027102 [m] for rear, 4.253220695862465 [m] for front.\n

You can specify the yaml file with the -y option as follows.

ros2 run vehicle_info_util min_turning_radius_calculator.py -y <path-to-yaml>\n
"},{"location":"vehicle/vehicle_info_util/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"autoware.universe","text":"

For Autoware's general documentation, see Autoware Documentation.

For detailed documents of Autoware Universe components, see Autoware Universe Documentation.

"},{"location":"CODE_OF_CONDUCT/","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#our-pledge","title":"Our Pledge","text":"

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

"},{"location":"CODE_OF_CONDUCT/#our-standards","title":"Our Standards","text":"

Examples of behavior that contributes to a positive environment for our community include:

Examples of unacceptable behavior include:

"},{"location":"CODE_OF_CONDUCT/#enforcement-responsibilities","title":"Enforcement Responsibilities","text":"

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

"},{"location":"CODE_OF_CONDUCT/#scope","title":"Scope","text":"

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

"},{"location":"CODE_OF_CONDUCT/#enforcement","title":"Enforcement","text":"

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at conduct@autoware.org. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

"},{"location":"CODE_OF_CONDUCT/#enforcement-guidelines","title":"Enforcement Guidelines","text":"

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

"},{"location":"CODE_OF_CONDUCT/#1-correction","title":"1. Correction","text":"

Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

"},{"location":"CODE_OF_CONDUCT/#2-warning","title":"2. Warning","text":"

Community Impact: A violation through a single incident or series of actions.

Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

"},{"location":"CODE_OF_CONDUCT/#3-temporary-ban","title":"3. Temporary Ban","text":"

Community Impact: A serious violation of community standards, including sustained inappropriate behavior.

Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

"},{"location":"CODE_OF_CONDUCT/#4-permanent-ban","title":"4. Permanent Ban","text":"

Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

Consequence: A permanent ban from any sort of public interaction within the community.

"},{"location":"CODE_OF_CONDUCT/#attribution","title":"Attribution","text":"

This Code of Conduct is adapted from the Contributor Covenant, version 2.1, available at https://www.contributor-covenant.org/version/2/1/code_of_conduct.html.

Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder.

For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.

"},{"location":"CONTRIBUTING/","title":"Contributing","text":"

See https://autowarefoundation.github.io/autoware-documentation/main/contributing/.

"},{"location":"DISCLAIMER/","title":"DISCLAIMER","text":"

DISCLAIMER

\u201cAutoware\u201d will be provided by The Autoware Foundation under the Apache License 2.0. This \u201cDISCLAIMER\u201d will be applied to all users of Autoware (a \u201cUser\u201d or \u201cUsers\u201d) with the Apache License 2.0 and Users shall hereby approve and acknowledge all the contents specified in this disclaimer below and will be deemed to consent to this disclaimer without any objection upon utilizing or downloading Autoware.

Disclaimer and Waiver of Warranties

  1. AUTOWARE FOUNDATION MAKES NO REPRESENTATION OR WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, WITH RESPECT TO PROVIDING AUTOWARE (the \u201cService\u201d) including but not limited to any representation or warranty (i) of fitness or suitability for a particular purpose contemplated by the Users, (ii) of the expected functions, commercial value, accuracy, or usefulness of the Service, (iii) that the use by the Users of the Service complies with the laws and regulations applicable to the Users or any internal rules established by industrial organizations, (iv) that the Service will be free of interruption or defects, (v) of the non-infringement of any third party's right and (vi) the accuracy of the content of the Services and the software itself.

  2. The Autoware Foundation shall not be liable for any damage incurred by the User that are attributable to the Autoware Foundation for any reasons whatsoever. UNDER NO CIRCUMSTANCES SHALL THE AUTOWARE FOUNDATION BE LIABLE FOR INCIDENTAL, INDIRECT, SPECIAL OR FUTURE DAMAGES OR LOSS OF PROFITS.

  3. A User shall be entirely responsible for the content posted by the User and its use of any content of the Service or the Website. If the User is held responsible in a civil action such as a claim for damages or even in a criminal case, the Autoware Foundation and member companies, governments and academic & non-profit organizations and their directors, officers, employees and agents (collectively, the \u201cIndemnified Parties\u201d) shall be completely discharged from any rights or assertions the User may have against the Indemnified Parties, or from any legal action, litigation or similar procedures.

Indemnity

A User shall indemnify and hold the Indemnified Parties harmless from any of their damages, losses, liabilities, costs or expenses (including attorneys' fees or criminal compensation), or any claims or demands made against the Indemnified Parties by any third party, due to or arising out of, or in connection with utilizing Autoware (including the representations and warranties), the violation of applicable Product Liability Law of each country (including criminal case) or violation of any applicable laws by the Users, or the content posted by the User or its use of any content of the Service or the Website.

"},{"location":"common/autoware_auto_common/design/comparisons/","title":"Comparisons","text":"

The float_comparisons.hpp library is a simple set of functions for performing approximate numerical comparisons. There are separate functions for performing comparisons using absolute bounds and relative bounds. Absolute comparison checks are prefixed with abs_ and relative checks are prefixed with rel_.

The bool_comparisons.hpp library additionally contains an XOR operator.

The intent of the library is to improve readability of code and reduce likelihood of typographical errors when using numerical and boolean comparisons.

"},{"location":"common/autoware_auto_common/design/comparisons/#target-use-cases","title":"Target use cases","text":"

The approximate comparisons are intended to be used to check whether two numbers lie within some absolute or relative interval. The exclusive_or function will test whether two values cast to different boolean values.

"},{"location":"common/autoware_auto_common/design/comparisons/#assumptions","title":"Assumptions","text":""},{"location":"common/autoware_auto_common/design/comparisons/#example-usage","title":"Example Usage","text":"
#include \"common/bool_comparisons.hpp\"\n#include \"common/float_comparisons.hpp\"\n\n#include <iostream>\n\n// using-directive is just for illustration; don't do this in practice\nusing namespace autoware::common::helper_functions::comparisons;\n\nstatic constexpr auto epsilon = 0.2;\nstatic constexpr auto relative_epsilon = 0.01;\n\nstd::cout << exclusive_or(true, false) << \"\\n\";\n// Prints: true\n\nstd::cout << rel_eq(1.0, 1.1, relative_epsilon)) << \"\\n\";\n// Prints: false\n\nstd::cout << approx_eq(10000.0, 10010.0, epsilon, relative_epsilon)) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_eq(4.0, 4.2, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_ne(4.0, 4.2, epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << abs_eq_zero(0.2, epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << abs_lt(4.0, 4.25, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_lte(1.0, 1.2, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_gt(1.25, 1.0, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_gte(0.75, 1.0, epsilon) << \"\\n\";\n// Prints: false\n
"},{"location":"common/autoware_auto_geometry/design/interval/","title":"Interval","text":"

The interval is a standard 1D real-valued interval. The class implements a representation and operations on the interval type and guarantees interval validity on construction. Basic operations and accessors are implemented, as well as other common operations. See 'Example Usage' below.

"},{"location":"common/autoware_auto_geometry/design/interval/#target-use-cases","title":"Target use cases","text":""},{"location":"common/autoware_auto_geometry/design/interval/#properties","title":"Properties","text":""},{"location":"common/autoware_auto_geometry/design/interval/#conventions","title":"Conventions","text":""},{"location":"common/autoware_auto_geometry/design/interval/#assumptions","title":"Assumptions","text":""},{"location":"common/autoware_auto_geometry/design/interval/#example-usage","title":"Example Usage","text":"
#include \"geometry/interval.hpp\"\n\n#include <iostream>\n\n// using-directive is just for illustration; don't do this in practice\nusing namespace autoware::common::geometry;\n\n// bounds for example interval\nconstexpr auto MIN = 0.0;\nconstexpr auto MAX = 1.0;\n\n//\n// Try to construct an invalid interval. This will give the following error:\n// 'Attempted to construct an invalid interval: {\"min\": 1.0, \"max\": 0.0}'\n//\n\ntry {\nconst auto i = Interval_d(MAX, MIN);\n} catch (const std::runtime_error& e) {\nstd::cerr << e.what();\n}\n\n//\n// Construct a double precision interval from 0 to 1\n//\n\nconst auto i = Interval_d(MIN, MAX);\n\n//\n// Test accessors and properties\n//\n\nstd::cout << Interval_d::min(i) << \" \" << Interval_d::max(i) << \"\\n\";\n// Prints: 0.0 1.0\n\nstd::cout << Interval_d::empty(i) << \" \" << Interval_d::length(i) << \"\\n\";\n// Prints: false 1.0\n\nstd::cout << Interval_d::contains(i, 0.3) << \"\\n\";\n// Prints: true\n\nstd::cout << Interval_d::is_subset_eq(Interval_d(0.2, 0.4), i) << \"\\n\";\n// Prints: true\n\n//\n// Test operations.\n//\n\nstd::cout << Interval_d::intersect(i, Interval(-1.0, 0.3)) << \"\\n\";\n// Prints: {\"min\": 0.0, \"max\": 0.3}\n\nstd::cout << Interval_d::project_to_interval(i, 0.5) << \" \"\n<< Interval_d::project_to_interval(i, -1.3) << \"\\n\";\n// Prints: 0.5 0.0\n\n//\n// Distinguish empty/zero measure\n//\n\nconst auto i_empty = Interval();\nconst auto i_zero_length = Interval(0.0, 0.0);\n\nstd::cout << Interval_d::empty(i_empty) << \" \"\n<< Interval_d::empty(i_zero_length) << \"\\n\";\n// Prints: true false\n\nstd::cout << Interval_d::zero_measure(i_empty) << \" \"\n<< Interval_d::zero_measure(i_zero_length) << \"\\n\";\n// Prints: false false\n
"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/","title":"2D Convex Polygon Intersection","text":"

The intersection of two convex polygons can be visualized as the blue area in the image below:

"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#purpose-use-cases","title":"Purpose / Use cases","text":"

Computing the intersection between two polygons can be useful in many scene-understanding applications. It can be used for collision detection, shape alignment, and shape association, and in any application that deals with the objects around the perceiving agent.

"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#design","title":"Design","text":"

The referenced paper (Livermore, Calif., 1977) mentions the following observations about convex polygon intersection:

"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

With the observations mentioned above, the current algorithm operates in the following way:

"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"

Inputs:

Outputs:

"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#future-work","title":"Future Work","text":""},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#1230-applying-efficient-algorithms","title":"1230: Applying efficient algorithms.","text":""},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#related-issues","title":"Related issues","text":""},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#983-integrate-vision-detections-in-object-tracker","title":"983: Integrate vision detections in object tracker","text":""},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/","title":"Spatial Hash","text":"

The spatial hash is a data structure designed for efficient fixed-radius near-neighbor queries in low dimensions.

The fixed-radius near-neighbors problem is defined as follows:

For point p, find all points p' s.t. d(p, p') < r

Where in this case d(p, p') is euclidean distance, and r is the fixed radius.

For n points with an average of k neighbors each, this data structure can perform m near-neighbor queries (to generate lists of near-neighbors for m different points) in O(mk) time.

By contrast, using a k-d tree for successive nearest-neighbor queries results in a running time of O(m log n).

The spatial hash works as follows:

Under the hood, an std::unordered_multimap is used, where the key is a bin/voxel index. The bin size was computed to be the same as the lookup distance.

In addition, this data structure can support 2D or 3D queries. This is determined during configuration and baked into the data structure via the configuration class. The purpose of this is to avoid branching in tight loops. The configuration class specializations themselves use CRTP (Curiously Recurring Template Pattern) to achieve "static polymorphism" and avoid a dispatching call.
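The binning scheme can be sketched in 2D as follows. This is a simplified illustration of the idea, not this package's API:

#include <cmath>\n#include <cstdint>\n#include <unordered_map>\n#include <vector>\n\nstruct Point\n{\ndouble x, y;\n};\n\nclass SpatialHash2d\n{\npublic:\nexplicit SpatialHash2d(double radius) : r_(radius) {}\n\nvoid insert(const Point & p) { bins_.emplace(key(p.x, p.y), p); }\n\n// Scan the 3x3 block of bins around the query; since the bin size equals the\n// radius, every point within distance r is guaranteed to be in those bins.\nstd::vector<Point> near(const Point & q) const\n{\nstd::vector<Point> result;\nfor (std::int64_t dx = -1; dx <= 1; ++dx) {\nfor (std::int64_t dy = -1; dy <= 1; ++dy) {\nauto range = bins_.equal_range(pack(bin(q.x) + dx, bin(q.y) + dy));\nfor (auto it = range.first; it != range.second; ++it) {\nif (std::hypot(it->second.x - q.x, it->second.y - q.y) < r_) result.push_back(it->second);\n}\n}\n}\nreturn result;\n}\n\nprivate:\nstd::int64_t bin(double v) const { return static_cast<std::int64_t>(std::floor(v / r_)); }\nstatic std::uint64_t pack(std::int64_t x, std::int64_t y)\n{\n// Mix the two bin indices into a single key with common spatial-hash constants.\nreturn static_cast<std::uint64_t>(x) * 73856093ULL ^ static_cast<std::uint64_t>(y) * 19349663ULL;\n}\nstd::uint64_t key(double x, double y) const { return pack(bin(x), bin(y)); }\n\ndouble r_;\nstd::unordered_multimap<std::uint64_t, Point> bins_;\n};\n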

"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#performance-characterization","title":"Performance characterization","text":""},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#time","title":"Time","text":"

Insertion is worst-case O(n) because lookup in the underlying hashmap is worst-case O(n). In practice, hashmap lookup, and thus insertion, should be O(1).

Removing a point is O(1) because the current API only supports removal via direct reference to a node.

Finding k near-neighbors is worst case O(n) in the case of an adversarial example, but in practice O(k).

"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#space","title":"Space","text":"

The module consists of the following components:

This results in O(n) space complexity.

"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#states","title":"States","text":"

The spatial hash's state is dictated by the status of the underlying unordered_multimap.

The data structure is wholly configured by a config class. The constructor of the class determines whether the data structure accepts strictly 2D or strictly 3D queries.

"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#inputs","title":"Inputs","text":"

The primary method of introducing data into the data structure is via the insert method.

"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#outputs","title":"Outputs","text":"

The primary method of retrieving data from the data structure is via the near method, in either the 2D or the 3D configuration.

The whole data structure can also be traversed using standard constant iterators.

"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#future-work","title":"Future Work","text":""},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#related-issues","title":"Related issues","text":""},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#28-port-to-autowareauto","title":"28: Port to autoware.Auto","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/","title":"autoware_auto_perception_plugin","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#purpose","title":"Purpose","text":"

This is an RViz plugin for visualizing the results from the perception module. This package is based on the implementation of the RViz plugin developed by Autoware.Auto.

See Autoware.Auto design documentation for the original design philosophy. [1]

"},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types-visualization-results","title":"Input Types / Visualization Results","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#detectedobjects","title":"DetectedObjects","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types","title":"Input Types","text":"Name Type Description autoware_auto_perception_msgs::msg::DetectedObjects detection result array"},{"location":"common/autoware_auto_perception_rviz_plugin/#visualization-result","title":"Visualization Result","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#trackedobjects","title":"TrackedObjects","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types_1","title":"Input Types","text":"Name Type Description autoware_auto_perception_msgs::msg::TrackedObjects tracking result array"},{"location":"common/autoware_auto_perception_rviz_plugin/#visualization-result_1","title":"Visualization Result","text":"

Overwrite tracking results with detection results.

"},{"location":"common/autoware_auto_perception_rviz_plugin/#predictedobjects","title":"PredictedObjects","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types_2","title":"Input Types","text":"Name Type Description autoware_auto_perception_msgs::msg::PredictedObjects prediction result array"},{"location":"common/autoware_auto_perception_rviz_plugin/#visualization-result_2","title":"Visualization Result","text":"

Overwrite prediction results with tracking results.

"},{"location":"common/autoware_auto_perception_rviz_plugin/#referencesexternal-links","title":"References/External links","text":"

[1] https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/tree/master/src/tools/visualization/autoware_rviz_plugins

"},{"location":"common/autoware_auto_perception_rviz_plugin/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/","title":"autoware_auto_tf2","text":"

This is the design document for the autoware_auto_tf2 package.

"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#purpose-use-cases","title":"Purpose / Use cases","text":"

In general, users of ROS rely on tf (and its successor, tf2) for publishing and utilizing coordinate frame transforms. This is true even to the extent that tf2 contains the packages tf2_geometry_msgs and tf2_sensor_msgs, which allow for easy conversion to and from the message types defined in geometry_msgs and sensor_msgs, respectively. However, Autoware.Auto contains some specialized message types which are not transformable between frames using the ROS2 library. The autoware_auto_tf2 package aims to provide developers with tools to transform applicable autoware_auto_msgs types. In addition, this package also provides transform tools for message types in geometry_msgs that are missing in tf2_geometry_msgs.

"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#design","title":"Design","text":"

While writing tf2_some_msgs or contributing to tf2_geometry_msgs, compatibility and design intent were ensured with the following files in the existing tf2 framework:

For example:

void tf2::convert( const A & a,B & b)\n

The method tf2::convert is dependent on the following:

template<typename A, typename B>\nB tf2::toMsg(const A& a);\ntemplate<typename A, typename B>\nvoid tf2::fromMsg(const A&, B& b);\n\n// New way to transform instead of using tf2::doTransform() directly\ntf2_ros::BufferInterface::transform(...)\n

Which, in turn, is dependent on the following:

void tf2::convert( const A & a,B & b)\nconst std::string& tf2::getFrameId(const T& t)\nconst ros::Time& tf2::getTimestamp(const T& t);\n
"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#current-implementation-of-tf2_geometry_msgs","title":"Current Implementation of tf2_geometry_msgs","text":"

In both ROS1 and ROS2, stamped msgs like Vector3Stamped and QuaternionStamped have associated functions like:

In ROS1, to support tf2::convert and the needs of doTransform for stamped data, non-stamped underlying data types like Vector3 and Point have implementations of the following functions:

In ROS2, much of the doTransform method does not use toMsg and fromMsg, since data types from tf2 are not used. Instead, doTransform is implemented using KDL, so functions relating to underlying data types such as Vector3 and Point were not added or ported in this commit: ros/geometry2/commit/6f2a82. The non-stamped data types with toMsg and fromMsg are Quaternion and Transform. Pose has the modified toMsg and is not used by PoseStamped.

"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#plan-for-autoware_auto_perception_msgsmsgboundingboxarray","title":"Plan for autoware_auto_perception_msgs::msg::BoundingBoxArray","text":"

The initial rough plan was to implement some of the common tf2 functions like toMsg, fromMsg, and doTransform, as needed for all the underlying data types in BoundingBoxArray. Examples of the data types include: BoundingBox, Quaternion32, and Point32. In addition, the implementation should be done such that upstream contributions could also be made to geometry_msgs.

"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Due to conflicts in function signatures, the predefined template of convert.h / transform_functions.h is not followed, compatibility with tf2::convert(..) is broken, and toMsg is written differently.

// Old style\ngeometry_msgs::Vector3 toMsg(const tf2::Vector3& in)\ngeometry_msgs::Point& toMsg(const tf2::Vector3& in)\n\n// New style\ngeometry_msgs::Point& toMsg(const tf2::Vector3& in, geometry_msgs::Point& out)\n
"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"

The library provides the doTransform API for data types that are either not available in tf2_geometry_msgs or are part of autoware_auto_msgs and therefore custom and not inherently supported by any of the tf2 libraries. APIs are provided for the following data types:

inline void doTransform(\nconst geometry_msgs::msg::Point32 & t_in,\ngeometry_msgs::msg::Point32 & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
inline void doTransform(\nconst autoware_auto_geometry_msgs::msg::Quaternion32 & t_in,\nautoware_auto_geometry_msgs::msg::Quaternion32 & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
inline void doTransform(\nconst BoundingBox & t_in, BoundingBox & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
inline void doTransform(\nconst BoundingBoxArray & t_in,\nBoundingBoxArray & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n

In addition, the following helper methods are also added:

inline tf2::TimePoint getTimestamp(const BoundingBoxArray & t)\n\ninline std::string getFrameId(const BoundingBoxArray & t)\n
"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#challenges","title":"Challenges","text":""},{"location":"common/autoware_testing/design/autoware_testing-design/","title":"autoware_testing","text":"

This is the design document for the autoware_testing package.

"},{"location":"common/autoware_testing/design/autoware_testing-design/#purpose-use-cases","title":"Purpose / Use cases","text":"

The package aims to provide a unified way to add standard testing functionality to a package, currently supporting:

"},{"location":"common/autoware_testing/design/autoware_testing-design/#design","title":"Design","text":"

Uses ros_testing (which is an extension of launch_testing) and provides some parametrized, reusable standard tests to run.

"},{"location":"common/autoware_testing/design/autoware_testing-design/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Parametrization is limited to the package name, executable name, parameters filename, and executable arguments. The test namespace is set to 'test'. The parameters file for the package is expected to be in the param directory inside the package.

"},{"location":"common/autoware_testing/design/autoware_testing-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"

To add a smoke test to your package tests, add test dependency on autoware_testing to package.xml

<test_depend>autoware_testing</test_depend>\n

and add the following two lines to CMakeLists.txt in the IF (BUILD_TESTING) section:

find_package(autoware_testing REQUIRED)\nadd_smoke_test(<package_name> <executable_name> [PARAM_FILENAME <param_filename>] [EXECUTABLE_ARGUMENTS <arguments>])\n

Where

<package_name> - [required] tested node package name.

<executable_name> - [required] tested node executable name.

<param_filename> - [optional] param filename. The default value is test.param.yaml. Required mostly in situations where there are multiple smoke tests in a package and each requires a different parameter set.

<arguments> - [optional] arguments passed to executable. By default no arguments are passed.

which adds the <executable_name>_smoke_test test to the suite.

Example test result:

build/<package_name>/test_results/<package_name>/<executable_name>_smoke_test.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\n
"},{"location":"common/autoware_testing/design/autoware_testing-design/#references-external-links","title":"References / External links","text":""},{"location":"common/autoware_testing/design/autoware_testing-design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"common/autoware_testing/design/autoware_testing-design/#related-issues","title":"Related issues","text":""},{"location":"common/bag_time_manager_rviz_plugin/","title":"bag_time_manager_rviz_plugin","text":""},{"location":"common/bag_time_manager_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin allows publishing and controlling the ros bag time.

"},{"location":"common/bag_time_manager_rviz_plugin/#output","title":"Output","text":"

tbd.

"},{"location":"common/bag_time_manager_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.

  2. Select BagTimeManagerPanel and press OK.

  3. Confirm that bag_time_manager_rviz_plugin/BagTimeManagerPanel is added.

"},{"location":"common/component_interface_utils/","title":"component_interface_utils","text":""},{"location":"common/component_interface_utils/#features","title":"Features","text":"

This is a utility package that provides the following features:

"},{"location":"common/component_interface_utils/#design","title":"Design","text":"

This package provides wrappers for the interface classes of rclcpp. The wrappers limit the usage of the original class to enforce the processing recommended by the component interface. They do not inherit from the rclcpp classes, and only forward or wrap the member functions that are allowed to be used.

"},{"location":"common/component_interface_utils/#instantiation-of-the-wrapper-class","title":"Instantiation of the wrapper class","text":"

The wrapper class requires interface information in this format.

struct SampleService\n{\nusing Service = sample_msgs::srv::ServiceType;\nstatic constexpr char name[] = \"/sample/service\";\n};\n\nstruct SampleMessage\n{\nusing Message = sample_msgs::msg::MessageType;\nstatic constexpr char name[] = \"/sample/message\";\nstatic constexpr size_t depth = 3;\nstatic constexpr auto reliability = RMW_QOS_POLICY_RELIABILITY_RELIABLE;\nstatic constexpr auto durability = RMW_QOS_POLICY_DURABILITY_TRANSIENT_LOCAL;\n};\n

Create the wrapper using the above definition as follows.

// header file\ncomponent_interface_utils::Service<SampleService>::SharedPtr srv_;\ncomponent_interface_utils::Client<SampleService>::SharedPtr cli_;\ncomponent_interface_utils::Publisher<SampleMessage>::SharedPtr pub_;\ncomponent_interface_utils::Subscription<SampleMessage>::SharedPtr sub_;\n\n// source file\nconst auto node = component_interface_utils::NodeAdaptor(this);\nnode.init_srv(srv_, callback);\nnode.init_cli(cli_);\nnode.init_pub(pub_);\nnode.init_sub(sub_, callback);\n
"},{"location":"common/component_interface_utils/#logging-for-service-and-client","title":"Logging for service and client","text":"

If the wrapper class is used, logging is automatically enabled. The log level is RCLCPP_INFO.

"},{"location":"common/component_interface_utils/#service-exception-for-response","title":"Service exception for response","text":"

If the wrapper class is used and the service response has status, throwing ServiceException will automatically catch and set it to status. This is useful when returning an error from a function called from the service callback.

void service_callback(Request req, Response res)\n{\nfunction();\nres->status.success = true;\n}\n\nvoid function()\n{\nthrow ServiceException(ERROR_CODE, \"message\");\n}\n

If the wrapper class is not used or the service response has no status, manually catch the ServiceException as follows.

void service_callback(Request req, Response res)\n{\ntry {\nfunction();\nres->status.success = true;\n} catch (const ServiceException & error) {\nres->status = error.status();\n}\n}\n
"},{"location":"common/component_interface_utils/#relays-for-topic-and-service","title":"Relays for topic and service","text":"

There are utilities for relaying services and messages of the same type.

const auto node = component_interface_utils::NodeAdaptor(this);\nservice_callback_group_ = create_callback_group(rclcpp::CallbackGroupType::MutuallyExclusive);\nnode.relay_message(pub_, sub_);\nnode.relay_service(cli_, srv_, service_callback_group_);  // group is for avoiding deadlocks\n
"},{"location":"common/fake_test_node/design/fake_test_node-design/","title":"Fake Test Node","text":""},{"location":"common/fake_test_node/design/fake_test_node-design/#what-this-package-provides","title":"What this package provides","text":"

When writing an integration test for a node in C++ using GTest, there is quite a bit of boilerplate code that needs to be written to set up a fake node that publishes expected messages on an expected topic and subscribes to messages on some other topic. This is usually implemented as a custom GTest fixture.

This package contains a library that introduces two utility classes that can be used in place of custom fixtures described above to write integration tests for a node:

These fixtures take care of initializing and re-initializing rclcpp as well as of checking that all subscribers and publishers have a match, thus reducing the amount of boilerplate code that the user needs to write.

"},{"location":"common/fake_test_node/design/fake_test_node-design/#how-to-use-this-library","title":"How to use this library","text":"

After including the relevant header, the user can use a typedef to create a custom fixture name and use the provided classes as fixtures in TEST_F and TEST_P tests directly.

"},{"location":"common/fake_test_node/design/fake_test_node-design/#example-usage","title":"Example usage","text":"

Let's say there is a node NodeUnderTest that requires testing. It subscribes to std_msgs::msg::Int32 messages and publishes a std_msgs::msg::Bool to indicate that the input is positive. To test such a node, the following code can be used, utilizing the autoware::tools::testing::FakeTestNode:

using FakeNodeFixture = autoware::tools::testing::FakeTestNode;\n\n/// @test Test that we can use a non-parametrized test.\nTEST_F(FakeNodeFixture, Test) {\nInt32 msg{};\nmsg.data = 15;\nconst auto node = std::make_shared<NodeUnderTest>();\n\nBool::SharedPtr last_received_msg{};\nauto fake_odom_publisher = create_publisher<Int32>(\"/input_topic\");\nauto result_odom_subscription = create_subscription<Bool>(\"/output_topic\", *node,\n[&last_received_msg](const Bool::SharedPtr msg) {last_received_msg = msg;});\n\nconst auto dt{std::chrono::milliseconds{100LL}};\nconst auto max_wait_time{std::chrono::seconds{10LL}};\nauto time_passed{std::chrono::milliseconds{0LL}};\nwhile (!last_received_msg) {\nfake_odom_publisher->publish(msg);\nrclcpp::spin_some(node);\nrclcpp::spin_some(get_fake_node());\nstd::this_thread::sleep_for(dt);\ntime_passed += dt;\nif (time_passed > max_wait_time) {\nFAIL() << \"Did not receive a message soon enough.\";\n}\n}\nEXPECT_TRUE(last_received_msg->data);\nSUCCEED();\n}\n

Here only the TEST_F example is shown, but TEST_P usage is very similar, with a little more boilerplate to set up all the parameter values; see test_fake_test_node.cpp for an example usage.
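For orientation, a plain GTest parametrization of the example above could look like the following sketch. The parametrized fixture name and parameter choice are illustrative, and the parametrized fixture actually provided by this package may differ:

// Reuses FakeNodeFixture and Int32 from the example above.\nclass FakeNodeFixtureParametrized\n: public FakeNodeFixture, public ::testing::WithParamInterface<int> {};\n\nTEST_P(FakeNodeFixtureParametrized, Test) {\nInt32 msg{};\nmsg.data = GetParam(); // same body as the TEST_F example, driven by each parameter\n// ... publish, spin, and check the output as above ...\n}\n\nINSTANTIATE_TEST_SUITE_P(\nTestWithDifferentInputs, FakeNodeFixtureParametrized, ::testing::Values(-1, 1, 15));\n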

"},{"location":"common/global_parameter_loader/Readme/","title":"Autoware Global Parameter Loader","text":"

This package sets common ROS parameters for each node.

"},{"location":"common/global_parameter_loader/Readme/#usage","title":"Usage","text":"

Add the following lines to the launch file of the node in which you want to get global parameters.

<!-- Global parameters -->\n<include file=\"$(find-pkg-share global_parameter_loader)/launch/global_params.launch.py\">\n<arg name=\"vehicle_model\" value=\"$(var vehicle_model)\"/>\n</include>\n

The vehicle model parameter is read from config/vehicle_info.param.yaml in vehicle_model_description package.

"},{"location":"common/global_parameter_loader/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Currently only vehicle_info is loaded by this launcher.

"},{"location":"common/goal_distance_calculator/Readme/","title":"goal_distance_calculator","text":""},{"location":"common/goal_distance_calculator/Readme/#purpose","title":"Purpose","text":"

This node publishes the deviation of the self-pose from the goal pose.
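For reference, deviations of this kind are typically obtained by projecting the position error into the goal frame; with self-pose (x, y, \theta) and goal pose (x_g, y_g, \theta_g), in our notation:

\begin{align} e_{lon} & = \cos\theta_g (x - x_g) + \sin\theta_g (y - y_g) \\ e_{lat} & = -\sin\theta_g (x - x_g) + \cos\theta_g (y - y_g) \\ e_{yaw} & = \theta - \theta_g \end{align}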

"},{"location":"common/goal_distance_calculator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/goal_distance_calculator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/goal_distance_calculator/Readme/#input","title":"Input","text":"Name Type Description /planning/mission_planning/route autoware_auto_planning_msgs::msg::Route Used to get goal pose /tf tf2_msgs/TFMessage TF (self-pose)"},{"location":"common/goal_distance_calculator/Readme/#output","title":"Output","text":"Name Type Description deviation/lateral tier4_debug_msgs::msg::Float64Stamped publish lateral deviation of self-pose from goal pose[m] deviation/longitudinal tier4_debug_msgs::msg::Float64Stamped publish longitudinal deviation of self-pose from goal pose[m] deviation/yaw tier4_debug_msgs::msg::Float64Stamped publish yaw deviation of self-pose from goal pose[rad] deviation/yaw_deg tier4_debug_msgs::msg::Float64Stamped publish yaw deviation of self-pose from goal pose[deg]"},{"location":"common/goal_distance_calculator/Readme/#parameters","title":"Parameters","text":""},{"location":"common/goal_distance_calculator/Readme/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate double 10.0 Timer callback period. [Hz]"},{"location":"common/goal_distance_calculator/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation oneshot bool true publish deviations just once or repeatedly"},{"location":"common/goal_distance_calculator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/grid_map_utils/","title":"Grid Map Utils","text":""},{"location":"common/grid_map_utils/#overview","title":"Overview","text":"

This package contains a re-implementation of the grid_map::PolygonIterator, used to iterate over all cells of a grid map contained inside some polygon.

"},{"location":"common/grid_map_utils/#algorithm","title":"Algorithm","text":"

This implementation uses the scan line algorithm, a common algorithm used to draw polygons on a rasterized image. The main idea of the algorithm, adapted to a grid map, is as follows:
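A simplified sketch of that idea (not the package's actual implementation): for each row of cells, collect the x-coordinates where the polygon edges cross the scan line through the row's cell centers, sort them, and mark the cells between consecutive pairs of crossings.

#include <algorithm>\n#include <cmath>\n#include <cstddef>\n#include <utility>\n#include <vector>\n\nusing Pt = std::pair<double, double>; // (x, y) in cell units; cell (row, col) has center (col + 0.5, row + 0.5)\n\nstd::vector<std::pair<int, int>> scanLineFill(const std::vector<Pt> & poly, int rows, int cols)\n{\nstd::vector<std::pair<int, int>> cells;\nfor (int row = 0; row < rows; ++row) {\nconst double y = row + 0.5; // scan line through the cell centers of this row\nstd::vector<double> xs;\nfor (std::size_t i = 0; i < poly.size(); ++i) {\nconst Pt & a = poly[i];\nconst Pt & b = poly[(i + 1) % poly.size()];\nif ((a.second <= y) != (b.second <= y)) { // the edge crosses the scan line\nxs.push_back(a.first + (y - a.second) * (b.first - a.first) / (b.second - a.second));\n}\n}\nstd::sort(xs.begin(), xs.end());\nfor (std::size_t k = 0; k + 1 < xs.size(); k += 2) { // fill between pairs of crossings\nconst int first = std::max(0, static_cast<int>(std::ceil(xs[k] - 0.5)));\nconst int last = std::min(cols - 1, static_cast<int>(std::floor(xs[k + 1] - 0.5)));\nfor (int col = first; col <= last; ++col) cells.emplace_back(row, col);\n}\n}\nreturn cells;\n}\n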

More details on the scan line algorithm can be found in the References.

"},{"location":"common/grid_map_utils/#api","title":"API","text":"

The grid_map_utils::PolygonIterator follows the same API as the original grid_map::PolygonIterator.

"},{"location":"common/grid_map_utils/#assumptions","title":"Assumptions","text":"

The behavior of the grid_map_utils::PolygonIterator is only guaranteed to match the grid_map::PolygonIterator if edges of the polygon do not exactly cross any cell center. In such a case, whether the crossed cell is considered inside or outside of the polygon can vary due to floating-point precision errors.

"},{"location":"common/grid_map_utils/#performances","title":"Performances","text":"

Benchmarking code is implemented in test/benchmarking.cpp and is also used to validate that the grid_map_utils::PolygonIterator behaves exactly like the grid_map::PolygonIterator.

The following figure shows a comparison of the runtime between the implementation of this package (grid_map_utils) and the original implementation (grid_map). The time measured includes the construction of the iterator and the iteration over all indexes and is shown using a logarithmic scale. Results were obtained varying the side size of a square grid map with 100 <= n <= 1000 (size=n means a grid of n x n cells), random polygons with a number of vertices 3 <= m <= 100 and with each parameter (n,m) repeated 10 times.

"},{"location":"common/grid_map_utils/#future-improvements","title":"Future improvements","text":"

There exist variations of the scan line algorithm for multiple polygons. These can be implemented if we want to iterate over the cells contained in at least one of multiple polygons.

The current implementation imitates the behavior of the original grid_map::PolygonIterator, where a cell is selected if its center position is inside the polygon. This behavior could be changed, for example, to return all cells overlapped by the polygon.

"},{"location":"common/grid_map_utils/#references","title":"References","text":""},{"location":"common/interpolation/","title":"Interpolation package","text":"

This package supplies linear and spline interpolation functions.

"},{"location":"common/interpolation/#linear-interpolation","title":"Linear Interpolation","text":"

lerp(src_val, dst_val, ratio) (for scalar interpolation) interpolates src_val and dst_val with ratio. This will be replaced with std::lerp(src_val, dst_val, ratio) in C++20.

lerp(base_keys, base_values, query_keys) (for vector interpolation) applies linear interpolation between each pair of consecutive points whose x values are base_keys and whose y values are base_values. Then it calculates interpolated values on the y-axis for query_keys on the x-axis.

"},{"location":"common/interpolation/#spline-interpolation","title":"Spline Interpolation","text":"

spline(base_keys, base_values, query_keys) (for vector interpolation) applies spline interpolation between each pair of consecutive points whose x values are base_keys and whose y values are base_values. Then it calculates interpolated values on the y-axis for query_keys on the x-axis.
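
A minimal usage sketch for both functions follows; the interpolation namespace and the header paths are assumptions based on the package name.

#include <interpolation/linear_interpolation.hpp>  // assumed header paths\n#include <interpolation/spline_interpolation.hpp>\n\n#include <vector>\n\nvoid interpolationExample()\n{\n  const std::vector<double> base_keys = {0.0, 1.0, 2.0, 3.0};\n  const std::vector<double> base_values = {0.0, 1.0, 0.0, 1.0};\n  const std::vector<double> query_keys = {0.5, 1.5, 2.5};\n\n  // Piecewise linear interpolation at the query keys.\n  const std::vector<double> lerped = interpolation::lerp(base_keys, base_values, query_keys);\n  // Cubic spline interpolation at the same query keys.\n  const std::vector<double> splined = interpolation::spline(base_keys, base_values, query_keys);\n}\n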

"},{"location":"common/interpolation/#evaluation-of-calculation-cost","title":"Evaluation of calculation cost","text":"

We evaluated the calculation cost of spline interpolation for 100 points and adopted the fastest method, the tridiagonal matrix algorithm. The other methods existed in the spline_interpolation package, which has been removed from Autoware.

Method Calculation time Tridiagonal Matrix Algorithm 0.007 [ms] Preconditioned Conjugate Gradient 0.024 [ms] Successive Over-Relaxation 0.074 [ms]"},{"location":"common/interpolation/#spline-interpolation-algorithm","title":"Spline Interpolation Algorithm","text":"

Assuming that the sizes of base_keys (x_i) and base_values (y_i) are N + 1, we aim to calculate spline interpolation with the following equation to interpolate between y_i and y_{i+1}.

Y_i(x) = a_i (x - x_i)^3 + b_i (x - x_i)^2 + c_i (x - x_i) + d_i \\ \\ \\ (i = 0, \\dots, N-1)

Constraints on spline interpolation are as follows. The number of constraints is 4N, which is equal to the number of variables of spline interpolation.

\\begin{align} Y_i (x_i) & = y_i \\ \\ \\ (i = 0, \\dots, N-1) \\\\ Y_i (x_{i+1}) & = y_{i+1} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ Y'_i (x_{i+1}) & = Y'_{i+1} (x_{i+1}) \\ \\ \\ (i = 0, \\dots, N-2) \\\\ Y''_i (x_{i+1}) & = Y''_{i+1} (x_{i+1}) \\ \\ \\ (i = 0, \\dots, N-2) \\\\ Y''_0 (x_0) & = 0 \\\\ Y''_{N-1} (x_N) & = 0 \\end{align}

According to this article, spline interpolation is formulated as the following linear equation.

\\begin{align} \\begin{pmatrix} 2(h_0 + h_1) & h_1 \\\\ h_0 & 2 (h_1 + h_2) & h_2 & & O \\\\ & & & \\ddots \\\\ O & & & & h_{N-2} & 2 (h_{N-2} + h_{N-1}) \\end{pmatrix} \\begin{pmatrix} v_1 \\\\ v_2 \\\\ v_3 \\\\ \\vdots \\\\ v_{N-1} \\end{pmatrix}= \\begin{pmatrix} w_1 \\\\ w_2 \\\\ w_3 \\\\ \\vdots \\\\ w_{N-1} \\end{pmatrix} \\end{align}

where

\\begin{align} h_i & = x_{i+1} - x_i \\ \\ \\ (i = 0, \\dots, N-1) \\\\ w_i & = 6 \\left(\\frac{y_{i+1} - y_i}{h_i} - \\frac{y_i - y_{i-1}}{h_{i-1}}\\right) \\ \\ \\ (i = 1, \\dots, N-1) \\end{align}

The coefficient matrix of this linear equation is a tridiagonal matrix. Therefore, it can be solved with the tridiagonal matrix algorithm, which can solve linear equations directly, without iterative methods such as gradient descent.

Solving this linear equation with the tridiagonal matrix algorithm, we can calculate the coefficients of the spline interpolation as follows.

\\begin{align} a_i & = \\frac{v_{i+1} - v_i}{6 (x_{i+1} - x_i)} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ b_i & = \\frac{v_i}{2} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ c_i & = \\frac{y_{i+1} - y_i}{x_{i+1} - x_i} - \\frac{1}{6}(x_{i+1} - x_i)(2 v_i + v_{i+1}) \\ \\ \\ (i = 0, \\dots, N-1) \\\\ d_i & = y_i \\ \\ \\ (i = 0, \\dots, N-1) \\end{align}"},{"location":"common/interpolation/#tridiagonal-matrix-algorithm","title":"Tridiagonal Matrix Algorithm","text":"

We solve the tridiagonal linear equation according to this article, where the variables of the linear equation are expressed as follows in the implementation.
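
For reference, a minimal sketch of the tridiagonal matrix (Thomas) algorithm is given below, with a, b, and c denoting the sub-, main, and super-diagonals and d the right-hand side, matching the matrix equation that follows.

#include <vector>\n\n// Thomas algorithm: solves a tridiagonal linear system in O(N).\n// a: sub-diagonal (size N-1), b: main diagonal (size N),\n// c: super-diagonal (size N-1), d: right-hand side (size N).\nstd::vector<double> solveTridiagonal(\n  std::vector<double> a, std::vector<double> b, std::vector<double> c, std::vector<double> d)\n{\n  const size_t n = d.size();\n  // Forward sweep: eliminate the sub-diagonal.\n  for (size_t i = 1; i < n; ++i) {\n    const double m = a[i - 1] / b[i - 1];\n    b[i] -= m * c[i - 1];\n    d[i] -= m * d[i - 1];\n  }\n  // Back substitution.\n  std::vector<double> x(n);\n  x[n - 1] = d[n - 1] / b[n - 1];\n  for (size_t i = n - 1; i-- > 0;) {\n    x[i] = (d[i] - c[i] * x[i + 1]) / b[i];\n  }\n  return x;\n}\n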

\\begin{align} \\begin{pmatrix} b_0 & c_0 & & \\\\ a_0 & b_1 & c_1 & O \\\\ & & \\ddots \\\\ O & & a_{N-2} & b_{N-1} \\end{pmatrix} x = \\begin{pmatrix} d_0 \\\\ d_1 \\\\ d_2 \\\\ \\vdots \\\\ d_{N-1} \\end{pmatrix} \\end{align}"},{"location":"common/kalman_filter/","title":"kalman_filter","text":""},{"location":"common/kalman_filter/#purpose","title":"Purpose","text":"

This common package contains a Kalman filter implementation with time delay handling, together with the core Kalman filter calculations.
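
As a conceptual reference only (not this package's actual API), one predict/update cycle of a linear Kalman filter can be sketched with Eigen as follows:

#include <Eigen/Dense>\n\n// One predict/update cycle of a linear Kalman filter (conceptual sketch).\n// x: state, P: covariance, A: transition, B: input, C: measurement, Q/R: noise.\nvoid predict(\n  Eigen::VectorXd & x, Eigen::MatrixXd & P, const Eigen::MatrixXd & A,\n  const Eigen::MatrixXd & B, const Eigen::VectorXd & u, const Eigen::MatrixXd & Q)\n{\n  x = A * x + B * u;\n  P = A * P * A.transpose() + Q;\n}\n\nvoid update(\n  Eigen::VectorXd & x, Eigen::MatrixXd & P, const Eigen::VectorXd & y,\n  const Eigen::MatrixXd & C, const Eigen::MatrixXd & R)\n{\n  const Eigen::MatrixXd S = C * P * C.transpose() + R;\n  const Eigen::MatrixXd K = P * C.transpose() * S.inverse();\n  x = x + K * (y - C * x);\n  P = P - K * C * P;\n}\n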

"},{"location":"common/kalman_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/motion_utils/","title":"Motion Utils package","text":""},{"location":"common/motion_utils/#definition-of-terms","title":"Definition of terms","text":""},{"location":"common/motion_utils/#segment","title":"Segment","text":"

A segment in Autoware is the line segment between two successive points, as follows.

The nearest segment index and the nearest point index to a certain position are not always the same. Therefore, we prepare two different utility functions to calculate the nearest index: one for points and one for segments.

"},{"location":"common/motion_utils/#nearest-index-search","title":"Nearest index search","text":"

In this section, the nearest index search and the nearest segment index search are explained.

We have the same set of functions for the nearest index search and the nearest segment index search. Taking the nearest index search as an example, there are two types of functions.

The first function finds the nearest index with distance and yaw thresholds.

template <class T>\nsize_t findFirstNearestIndexWithSoftConstraints(\nconst T & points, const geometry_msgs::msg::Pose & pose,\nconst double dist_threshold = std::numeric_limits<double>::max(),\nconst double yaw_threshold = std::numeric_limits<double>::max());\n

This function finds the first local solution within the thresholds. The reason for finding the first local solution is to deal with some edge cases, explained in the next subsection.

The thresholds arguments have default parameters so that you can decide which thresholds to pass to the function; the search then falls back as follows (a sketch of this logic is given after the list).

  1. When both the distance and yaw thresholds are given.
    • First, try to find the nearest index with both the distance and yaw thresholds.
    • If not found, try to find again with only the distance threshold.
    • If not found, find without any thresholds.
  2. When only the distance threshold is given.
    • First, try to find the nearest index with the distance threshold.
    • If not found, find without any thresholds.
  3. When no thresholds are given.
    • Find the nearest index.
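
A sketch of this fallback logic is given below; findNearestIndexWithin and findNearestIndex are hypothetical helpers introduced only for illustration.

#include <geometry_msgs/msg/pose.hpp>\n\n#include <limits>\n#include <optional>\n\n// Hypothetical helpers for illustration: the first returns the first local nearest\n// index satisfying the given thresholds (if any), the second searches without thresholds.\ntemplate <class T>\nstd::optional<size_t> findNearestIndexWithin(\n  const T & points, const geometry_msgs::msg::Pose & pose, double dist_threshold,\n  double yaw_threshold);\ntemplate <class T>\nsize_t findNearestIndex(const T & points, const geometry_msgs::msg::Point & point);\n\ntemplate <class T>\nsize_t findFirstNearestIndexSketch(\n  const T & points, const geometry_msgs::msg::Pose & pose,\n  const double dist_threshold = std::numeric_limits<double>::max(),\n  const double yaw_threshold = std::numeric_limits<double>::max())\n{\n  if (const auto idx = findNearestIndexWithin(points, pose, dist_threshold, yaw_threshold)) {\n    return *idx;  // found within both the distance and yaw thresholds\n  }\n  if (const auto idx = findNearestIndexWithin(\n        points, pose, dist_threshold, std::numeric_limits<double>::max())) {\n    return *idx;  // found within the distance threshold only\n  }\n  return findNearestIndex(points, pose.position);  // no thresholds\n}\n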

The second function finds the nearest index in the lane whose id is lane_id.

size_t findNearestIndexFromLaneId(\nconst autoware_auto_planning_msgs::msg::PathWithLaneId & path,\nconst geometry_msgs::msg::Point & pos, const int64_t lane_id);\n
"},{"location":"common/motion_utils/#application-to-various-object","title":"Application to various object","text":"

Many node packages calculate the nearest index of objects. We explain the recommended methods to calculate it below.

"},{"location":"common/motion_utils/#nearest-index-for-the-ego","title":"Nearest index for the ego","text":"

Assuming that the path length before the ego is short enough, we expect to find the correct nearest index in the following edge cases using findFirstNearestIndexWithSoftConstraints with both distance and yaw thresholds. The blue circles describe the distance threshold from the base link position, and the two blue lines describe the yaw threshold against the base link orientation. Among the candidate points in these cases, the correct nearest point (shown in red) can be found.

Therefore, the implementation is as follows.

const size_t ego_nearest_idx = findFirstNearestIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\nconst size_t ego_nearest_seg_idx = findFirstNearestSegmentIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\n
"},{"location":"common/motion_utils/#nearest-index-for-dynamic-objects","title":"Nearest index for dynamic objects","text":"

For the ego nearest index, the orientation is considered in addition to the position since the ego is supposed to follow the points. However, for dynamic objects (e.g., predicted objects), the orientation may sometimes differ from the order of the points, e.g., a dynamic object driving backward while the ego is driving forward.

Therefore, the yaw threshold should not be considered for the dynamic object. The implementation is as follows.

const size_t dynamic_obj_nearest_idx = findFirstNearestIndexWithSoftConstraints(points, dynamic_obj_pose, dynamic_obj_nearest_dist_threshold);\nconst size_t dynamic_obj_nearest_seg_idx = findFirstNearestSegmentIndexWithSoftConstraints(points, dynamic_obj_pose, dynamic_obj_nearest_dist_threshold);\n
"},{"location":"common/motion_utils/#nearest-index-for-traffic-objects","title":"Nearest index for traffic objects","text":"

In lanelet maps, traffic objects belong to a specific lane. With this specific lane's id, the correct nearest index can be found.

The implementation is as follows.

// First, extract the `lane_id` which the traffic object belongs to.\nconst size_t traffic_obj_nearest_idx = findNearestIndexFromLaneId(path_with_lane_id, traffic_obj_pos, lane_id);\nconst size_t traffic_obj_nearest_seg_idx = findNearestSegmentIndexFromLaneId(path_with_lane_id, traffic_obj_pos, lane_id);\n
"},{"location":"common/motion_utils/#pathtrajectory-length-calculation-between-designated-points","title":"Path/Trajectory length calculation between designated points","text":"

Based on the discussion so far, the nearest index search algorithm differs depending on the object type. Therefore, we recommend using the wrapper utility functions that require a nearest index (e.g., for calculating the path length) together with the appropriate nearest index search for each object type.

For example, when we want to calculate the path length between the ego and the dynamic object, the implementation is as follows.

const size_t ego_nearest_seg_idx = findFirstNearestSegmentIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\nconst size_t dyn_obj_nearest_seg_idx = findFirstNearestSegmentIndexWithSoftConstraints(points, dyn_obj_pose, dyn_obj_nearest_dist_threshold);\nconst double length_from_ego_to_obj = calcSignedArcLength(points, ego_pose, ego_nearest_seg_idx, dyn_obj_pose, dyn_obj_nearest_seg_idx);\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/","title":"vehicle utils","text":"

Vehicle utils provides a convenient library used to check vehicle status.

"},{"location":"common/motion_utils/docs/vehicle/vehicle/#feature","title":"Feature","text":"

The library contains the following classes.

"},{"location":"common/motion_utils/docs/vehicle/vehicle/#vehicle_stop_checker","title":"vehicle_stop_checker","text":"

This class checks whether the vehicle is stopped, based on the localization result.

"},{"location":"common/motion_utils/docs/vehicle/vehicle/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Description /localization/kinematic_state nav_msgs::msg::Odometry vehicle odometry"},{"location":"common/motion_utils/docs/vehicle/vehicle/#parameters","title":"Parameters","text":"Name Type Default Value Explanation velocity_buffer_time_sec double 10.0 odometry buffering time [s]"},{"location":"common/motion_utils/docs/vehicle/vehicle/#member-functions","title":"Member functions","text":"
bool isVehicleStopped(const double stop_duration)\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#example-usage","title":"Example Usage","text":"

Necessary includes:

#include <tier4_autoware_utils/vehicle/vehicle_state_checker.hpp>\n

1. Create a checker instance.

class SampleNode : public rclcpp::Node\n{\npublic:\nSampleNode() : Node(\"sample_node\")\n{\nvehicle_stop_checker_ = std::make_unique<VehicleStopChecker>(this);\n}\n\nstd::unique_ptr<VehicleStopChecker> vehicle_stop_checker_;\n\nbool sampleFunc();\n\n...\n};\n

2. Check the vehicle state.

bool SampleNode::sampleFunc()\n{\n...\n\nconst auto result_1 = vehicle_stop_checker_->isVehicleStopped();\n\n...\n\nconst auto result_2 = vehicle_stop_checker_->isVehicleStopped(3.0);\n\n...\n}\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#vehicle_arrival_checker","title":"vehicle_arrival_checker","text":"

This class checks whether the vehicle has arrived at the stop point, based on the localization and planning results.

"},{"location":"common/motion_utils/docs/vehicle/vehicle/#subscribed-topics_1","title":"Subscribed Topics","text":"Name Type Description /localization/kinematic_state nav_msgs::msg::Odometry vehicle odometry /planning/scenario_planning/trajectory autoware_auto_planning_msgs::msg::Trajectory trajectory"},{"location":"common/motion_utils/docs/vehicle/vehicle/#parameters_1","title":"Parameters","text":"Name Type Default Value Explanation velocity_buffer_time_sec double 10.0 odometry buffering time [s] th_arrived_distance_m double 1.0 threshold distance to check if vehicle has arrived at target point [m]"},{"location":"common/motion_utils/docs/vehicle/vehicle/#member-functions_1","title":"Member functions","text":"
bool isVehicleStopped(const double stop_duration)\n
bool isVehicleStoppedAtStopPoint(const double stop_duration)\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#example-usage_1","title":"Example Usage","text":"

Necessary includes:

#include <tier4_autoware_utils/vehicle/vehicle_state_checker.hpp>\n

1. Create a checker instance.

class SampleNode : public rclcpp::Node\n{\npublic:\nSampleNode() : Node(\"sample_node\")\n{\nvehicle_arrival_checker_ = std::make_unique<VehicleArrivalChecker>(this);\n}\n\nstd::unique_ptr<VehicleArrivalChecker> vehicle_arrival_checker_;\n\nbool sampleFunc();\n\n...\n};\n

2. Check the vehicle state.

bool SampleNode::sampleFunc()\n{\n...\n\nconst auto result_1 = vehicle_arrival_checker_->isVehicleStopped();\n\n...\n\nconst auto result_2 = vehicle_arrival_checker_->isVehicleStopped(3.0);\n\n...\n\nconst auto result_3 = vehicle_arrival_checker_->isVehicleStoppedAtStopPoint();\n\n...\n\nconst auto result_4 = vehicle_arrival_checker_->isVehicleStoppedAtStopPoint(3.0);\n\n...\n}\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

vehicle_stop_checker and vehicle_arrival_checker cannot check whether the vehicle is stopped for longer than velocity_buffer_time_sec seconds.

"},{"location":"common/osqp_interface/design/osqp_interface-design/","title":"Interface for the OSQP library","text":"

This is the design document for the osqp_interface package.

"},{"location":"common/osqp_interface/design/osqp_interface-design/#purpose-use-cases","title":"Purpose / Use cases","text":"

This package provides a C++ interface for the OSQP library.

"},{"location":"common/osqp_interface/design/osqp_interface-design/#design","title":"Design","text":"

The class OSQPInterface takes a problem formulation as Eigen matrices and vectors, converts these objects into C-style Compressed-Column-Sparse matrices and dynamic arrays, loads the data into the OSQP workspace data holder, and runs the optimizer.
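
For illustration, a small two-variable problem of the form minimize 1/2 x' P x + q' x subject to l <= A x <= u could be set up as follows before being handed to the interface; the numbers are arbitrary, and the argument types are inferred from the calls shown below.

#include <Eigen/Dense>\n\n#include <vector>\n\nvoid buildExampleProblem()\n{\n  // minimize 1/2 x' P x + q' x  subject to  l <= A x <= u  (arbitrary example data)\n  Eigen::MatrixXd P(2, 2);\n  P << 4.0, 1.0, 1.0, 2.0;\n  Eigen::MatrixXd A(3, 2);\n  A << 1.0, 1.0, 1.0, 0.0, 0.0, 1.0;\n  const std::vector<double> q = {1.0, 1.0};\n  const std::vector<double> l = {1.0, 0.0, 0.0};\n  const std::vector<double> u = {1.0, 0.7, 0.7};\n  // These objects correspond to the P, A, q, l, u arguments in the calls below.\n}\n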

"},{"location":"common/osqp_interface/design/osqp_interface-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"

The interface can be used in several ways:

  1. Initialize the interface WITHOUT data. Load the problem formulation at the optimization call.

        osqp_interface = OSQPInterface();\nosqp_interface.optimize(P, A, q, l, u);\n
  2. Initialize the interface WITH data.

        osqp_interface = OSQPInterface(P, A, q, l, u);\nosqp_interface.optimize();\n
  3. WARM START OPTIMIZATION by modifying the problem formulation between optimization runs.

        osqp_interface = OSQPInterface(P, A, q, l, u);\nosqp_interface.optimize();\nosqp.initializeProblem(P_new, A_new, q_new, l_new, u_new);\nosqp_interface.optimize();\n

    The optimization results are returned as a vector by the optimization function.

    std::tuple<std::vector<double>, std::vector<double>> result = osqp_interface.optimize();\nstd::vector<double> param = std::get<0>(result);\ndouble x_0 = param[0];\ndouble x_1 = param[1];\n
"},{"location":"common/osqp_interface/design/osqp_interface-design/#references-external-links","title":"References / External links","text":""},{"location":"common/osqp_interface/design/osqp_interface-design/#related-issues","title":"Related issues","text":""},{"location":"common/path_distance_calculator/Readme/","title":"path_distance_calculator","text":""},{"location":"common/path_distance_calculator/Readme/#purpose","title":"Purpose","text":"

This node publishes the distance from the path point closest to the self-position to the end point of the path. Note that the distance is the arc length along the path, not the Euclidean distance between the two points.
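
Conceptually, the published value is the sum of the segment lengths from the closest path point to the last path point; a simplified sketch (not the node's actual code) is shown below.

#include <cmath>\n#include <vector>\n\nstruct Point2d\n{\n  double x;\n  double y;\n};\n\n// Arc length along the path from the point at closest_idx to the final point.\ndouble remainingArcLength(const std::vector<Point2d> & path, const size_t closest_idx)\n{\n  double length = 0.0;\n  for (size_t i = closest_idx; i + 1 < path.size(); ++i) {\n    length += std::hypot(path[i + 1].x - path[i].x, path[i + 1].y - path[i].y);\n  }\n  return length;\n}\n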

"},{"location":"common/path_distance_calculator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/path_distance_calculator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/path_distance_calculator/Readme/#input","title":"Input","text":"Name Type Description /planning/scenario_planning/lane_driving/behavior_planning/path autoware_auto_planning_msgs::msg::Path Reference path /tf tf2_msgs/TFMessage TF (self-pose)"},{"location":"common/path_distance_calculator/Readme/#output","title":"Output","text":"Name Type Description ~/distance tier4_debug_msgs::msg::Float64Stamped Publish a distance from the closest path point from the self-position to the end point of the path[m]"},{"location":"common/path_distance_calculator/Readme/#parameters","title":"Parameters","text":""},{"location":"common/path_distance_calculator/Readme/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"common/path_distance_calculator/Readme/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"common/path_distance_calculator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/polar_grid/Readme/","title":"Polar Grid","text":""},{"location":"common/polar_grid/Readme/#purpose","title":"Purpose","text":"

This plugin displays a polar grid around the ego vehicle in Rviz.

"},{"location":"common/polar_grid/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation Max Range float 200.0f max range for polar grid. [m] Wave Velocity float 100.0f wave ring velocity. [m/s] Delta Range float 10.0f wave ring distance for polar grid. [m]"},{"location":"common/polar_grid/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/rtc_manager_rviz_plugin/","title":"rtc_manager_rviz_plugin","text":""},{"location":"common/rtc_manager_rviz_plugin/#purpose","title":"Purpose","text":"

The purpose of this Rviz plugin is

  1. To display each content of RTC status.

  2. To switch each module of RTC auto mode.

  3. To change RTC cooperate commands by button.

"},{"location":"common/rtc_manager_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/rtc_manager_rviz_plugin/#input","title":"Input","text":"Name Type Description /api/external/get/rtc_status tier4_rtc_msgs::msg::CooperateStatusArray The statuses of each Cooperate Commands"},{"location":"common/rtc_manager_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/external/set/rtc_commands tier4_rtc_msgs::src::CooperateCommands The Cooperate Commands for each planning /planning/enable_auto_mode/* tier4_rtc_msgs::src::AutoMode The Cooperate Commands mode for each planning module"},{"location":"common/rtc_manager_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.

  2. Select tier4_state_rviz_plugin/RTCManagerPanel and press OK.

"},{"location":"common/signal_processing/","title":"Signal Processing Methods","text":"

In this package, we present signal processing methods for Autoware applications. The following functionalities are available in the current version.

The low-pass filter currently supports only 1-D low-pass filtering.

"},{"location":"common/signal_processing/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/signal_processing/documentation/ButterworthFilter/","title":"ButterworthFilter","text":""},{"location":"common/signal_processing/documentation/ButterworthFilter/#butterworth-low-pass-filter-design-tool-class","title":"Butterworth Low-pass Filter Design Tool Class","text":"

This Butterworth low-pass filter design tool can be used to design a Butterworth filter in continuous and discrete time from the given specifications of the filter performance. The filter is implemented as a class; a default constructor creates the object without any arguments.

The filter can be prepared in three ways. If the filter specifications are known, such as the pass-band and stop-band frequencies (Wp and Ws) together with the pass-band and stop-band ripple magnitudes (Ap and As), one can call the filter's buttord method with these arguments to obtain the recommended filter order (N) and cut-off frequency (Wc_rad_sec [rad/s]).

Figure 1. Butterworth Low-pass filter specification from [1].

An example call is demonstrated below:

ButterworthFilter bf;\n\nconst double Wp = 2.0; // pass-band frequency [rad/sec]\nconst double Ws = 3.0; // stop-band frequency [rad/sec]\nconst double Ap = 6.0; // pass-band ripple mag or loss [dB]\nconst double As = 20.0; // stop-band ripple attenuation [dB]\n\n// Compute the filter coefficients from the specs\nbf.Buttord(Wp, Ws, Ap, As);\n\n// Get the computed order and cut-off frequency\nconst sOrderCutOff NWc = bf.getOrderCutOff();\n\ncout << \"The computed order is: \" << NWc.N << endl;\ncout << \"The computed cut-off frequency is: \" << NWc.Wc_rad_sec << endl;\n

The filter order and cut-off frequency can be obtained in a struct using the bf.getOrderCutOff() method. These specs can be printed on the screen by calling the PrintFilterSpecs() method. If the user would like to define the order and cut-off frequency manually, the setter methods for these variables can be called to set the filter order (N) and the desired cut-off frequency (Wc_rad_sec [rad/sec]) for the filter.

"},{"location":"common/signal_processing/documentation/ButterworthFilter/#obtaining-filter-transfer-functions","title":"Obtaining Filter Transfer Functions","text":"

The discrete transfer function of the filter requires the roots and gain of the continuous-time transfer function. Therefore, computeContinuousTimeTF() must be called first to create the continuous-time transfer function of the filter:

bf.computeContinuousTimeTF();\n

The computed continuous-time transfer function roots can be printed on the screen using the methods:

bf.PrintFilter_ContinuousTimeRoots();\nbf.PrintContinuousTimeTF();\n

The resulting screen output for a 5th order filter is demonstrated below.

 Roots of Continuous Time Filter Transfer Function Denominator are :\n-0.585518 + j 1.80204\n-1.53291 + j 1.11372\n-1.89478 + j 2.32043e-16\n-1.53291 + j -1.11372\n-0.585518 + j -1.80204\n\n\nThe Continuous-Time Transfer Function of the Filter is ;\n\n                                   24.422\n-------------------------------------------------------------------------------\n1.000 *s[5] + 6.132 *s[4] + 18.798 *s[3] + 35.619 *s[2] + 41.711 *s[1] + 24.422\n
"},{"location":"common/signal_processing/documentation/ButterworthFilter/#discrete-time-transfer-function-difference-equations","title":"Discrete Time Transfer Function (Difference Equations)","text":"

The digital filter equivalent of the continuous-time definitions is produced by using the bilinear transformation. When creating the discrete-time function of the ButterworthFilter object, its numerator (Bn) and denominator (An) coefficients are stored in a vector of filter order size N.

The discrete transfer function method is called using:

bf.computeDiscreteTimeTF();\nbf.PrintDiscreteTimeTF();\n

The results are printed on the screen like this: The Discrete-Time Transfer Function of the Filter is ;

0.191 *z[-5] + 0.956 *z[-4] + 1.913 *z[-3] + 1.913 *z[-2] + 0.956 *z[-1] + 0.191\n--------------------------------------------------------------------------------\n1.000 *z[-5] + 1.885 *z[-4] + 1.888 *z[-3] + 1.014 *z[-2] + 0.298 *z[-1] + 0.037\n

and the associated difference coefficients An and Bn can be obtained within a struct:

sDifferenceAnBn AnBn = bf.getAnBn();\n

The difference coefficients appear in the filtering equation in the following form.

An * Y_filtered = Bn * Y_unfiltered\n

To filter a signal given in vector form, these difference coefficients can be applied directly; a minimal sketch is given below.
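
A minimal sketch of applying the difference equation directly (assuming An and Bn are already normalized so that An[0] == 1) could look like this:

#include <vector>\n\n// Direct-form difference equation (sketch): filter the input signal x into y\n// using the denominator coefficients An and numerator coefficients Bn.\nstd::vector<double> filterSignal(\n  const std::vector<double> & An, const std::vector<double> & Bn, const std::vector<double> & x)\n{\n  std::vector<double> y(x.size(), 0.0);\n  for (size_t n = 0; n < x.size(); ++n) {\n    for (size_t k = 0; k < Bn.size() && k <= n; ++k) {\n      y[n] += Bn[k] * x[n - k];\n    }\n    for (size_t k = 1; k < An.size() && k <= n; ++k) {\n      y[n] -= An[k] * y[n - k];\n    }\n  }\n  return y;\n}\n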

"},{"location":"common/signal_processing/documentation/ButterworthFilter/#calling-filter-by-a-specified-cut-off-and-sampling-frequencies-in-hz","title":"Calling Filter by a specified cut-off and sampling frequencies [in Hz]","text":"

The Butterworth filter can also be created by defining the desired order (N), a cut-off frequency (fc in [Hz]), and a sampling frequency (fs in [Hz]). In this method, the cut-off frequency is pre-warped with respect to the sampling frequency [1, 2] to match the continuous and digital filter frequencies.

The filter is prepared by the following calling options;

 // 3rd METHOD defining a sampling frequency together with the cut-off fc, fs\n bf.setOrder(2);\n bf.setCutOffFrequency(10, 100);\n

At this step, we define a boolean variable indicating whether to use the pre-warping option.

// Compute Continuous Time TF\nbool use_sampling_frequency = true;\nbf.computeContinuousTimeTF(use_sampling_frequency);\nbf.PrintFilter_ContinuousTimeRoots();\nbf.PrintContinuousTimeTF();\n\n// Compute Discrete Time TF\nbf.computeDiscreteTimeTF(use_sampling_frequency);\nbf.PrintDiscreteTimeTF();\n

References:

  1. Manolakis, Dimitris G., and Vinay K. Ingle. Applied digital signal processing: theory and practice. Cambridge University Press, 2011.

  2. https://en.wikibooks.org/wiki/Digital_Signal_Processing/Bilinear_Transform

"},{"location":"common/tier4_automatic_goal_rviz_plugin/","title":"tier4_automatic_goal_rviz_plugin","text":""},{"location":"common/tier4_automatic_goal_rviz_plugin/#purpose","title":"Purpose","text":"
  1. Defining a GoalsList by adding goals using RvizTool (Pose on the map).

  2. Automatic execution of the created GoalsList from the selected goal - it can be stopped and restarted.

  3. Looping the current GoalsList.

  4. Saving achieved goals to a file.

  5. Planning the route to one (single) selected goal and starting that route - it can be stopped and restarted.

  6. Removing any goal from the list or clearing the current route.

  7. Saving the current GoalsList to a file and loading the list from the file.

  8. The application enables/disables access to options depending on the current state.

  9. The saved GoalsList can be executed without using a plugin - using a node automatic_goal_sender.

"},{"location":"common/tier4_automatic_goal_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_automatic_goal_rviz_plugin/#input","title":"Input","text":"Name Type Description /api/operation_mode/state autoware_adapi_v1_msgs::msg::OperationModeState The topic represents the state of operation mode /api/routing/state autoware_adapi_v1_msgs::msg::RouteState The topic represents the state of route /rviz2/automatic_goal/goal geometry_msgs::msgs::PoseStamped The topic for adding goals to GoalsList"},{"location":"common/tier4_automatic_goal_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/operation_mode/change_to_autonomous autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to autonomous /api/operation_mode/change_to_stop autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to stop /api/routing/set_route_points autoware_adapi_v1_msgs::srv::SetRoutePoints The service to set route /api/routing/clear_route autoware_adapi_v1_msgs::srv::ClearRoute The service to clear route state /rviz2/automatic_goal/markers visualization_msgs::msg::MarkerArray The topic to visualize goals as rviz markers"},{"location":"common/tier4_automatic_goal_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.

  2. Select tier4_automatic_goal_rviz_plugin/AutowareAutomaticGoalPanel and press OK.

  3. Select Add a new tool.

  4. Select tier4_automatic_goal_rviz_plugin/AutowareAutomaticGoalTool and press OK.

  5. Add goals visualization as markers to Displays.

  6. Append goals to the GoalsList to be achieved using 2D Append Goal - in such a way that routes can be planned.

  7. Start sequential planning and goal achievement by clicking Send goals automatically

  8. You can save GoalsList by clicking Save to file.

  9. After saving, you can also run the GoalsList without using a plugin:

    • example: ros2 launch tier4_automatic_goal_rviz_plugin automatic_goal_sender.launch.xml goals_list_file_path:=\"/tmp/goals_list.yaml\" goals_achieved_dir_path:=\"/tmp/\"
      • goals_list_file_path - is the path to the saved GoalsList file to be loaded
      • goals_achieved_dir_path - is the path to the directory where the file goals_achieved.log will be created and the achieved goals will be written to it
"},{"location":"common/tier4_automatic_goal_rviz_plugin/#hints","title":"Hints","text":"

If the application (Engagement) goes into ERROR mode (it usually returns to EDITING later), it means that one of the services returned a calling error (code != 0). In this situation, check the terminal output for more information.

"},{"location":"common/tier4_autoware_utils/","title":"tier4_autoware_utils","text":""},{"location":"common/tier4_autoware_utils/#purpose","title":"Purpose","text":"

This package contains many common functions used by other packages, so please refer to them as needed.

"},{"location":"common/tier4_control_rviz_plugin/","title":"tier4_control_rviz_plugin","text":"

This package mimics external control for simulation.

"},{"location":"common/tier4_control_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_control_rviz_plugin/#input","title":"Input","text":"Name Type Description /control/current_gate_mode tier4_control_msgs::msg::GateMode Current GATE mode /vehicle/status/velocity_status autoware_auto_vehicle_msgs::msg::VelocityReport Current velocity status /api/autoware/get/engage tier4_external_api_msgs::srv::Engage Getting Engage /vehicle/status/gear_status autoware_auto_vehicle_msgs::msg::GearReport The state of GEAR"},{"location":"common/tier4_control_rviz_plugin/#output","title":"Output","text":"Name Type Description /control/gate_mode_cmd tier4_control_msgs::msg::GateMode GATE mode /external/selected/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand AckermannControlCommand /external/selected/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand GEAR"},{"location":"common/tier4_control_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Panels.

  2. Select tier4_control_rviz_plugin/ManualController and press OK.

  3. Enter a velocity in \"Set Cruise Velocity\" and press the button to confirm. You will notice that GEAR shows D (DRIVE).

  4. Press \"Enable Manual Control\" and you will notice that \"GATE\" and \"Engage\" turn \"Ready\" and the vehicle starts!

"},{"location":"common/tier4_datetime_rviz_plugin/","title":"tier4_datetime_rviz_plugin","text":""},{"location":"common/tier4_datetime_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the ROS Time and Wall Time in rviz.

"},{"location":"common/tier4_datetime_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/tier4_datetime_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select panels/Add new panel.
  2. Select tier4_datetime_rviz_plugin/AutowareDateTimePanel and press OK.
"},{"location":"common/tier4_debug_rviz_plugin/","title":"tier4_debug_rviz_plugin","text":"

This package includes jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are under the BSD license.

"},{"location":"common/tier4_debug_rviz_plugin/#plugins","title":"Plugins","text":""},{"location":"common/tier4_debug_rviz_plugin/#float32multiarraystampedpiechart","title":"Float32MultiArrayStampedPieChart","text":"

Pie chart from tier4_debug_msgs::msg::Float32MultiArrayStamped.

"},{"location":"common/tier4_debug_tools/","title":"tier4_debug_tools","text":"

This package provides useful features for debugging Autoware.

"},{"location":"common/tier4_debug_tools/#usage","title":"Usage","text":""},{"location":"common/tier4_debug_tools/#tf2pose","title":"tf2pose","text":"

This tool converts any TF to a pose topic. With this tool, for example, you can plot the x values of a TF in rqt_multiplot.

ros2 run tier4_debug_tools tf2pose {tf_from} {tf_to} {hz}\n

Example:

$ ros2 run tier4_debug_tools tf2pose base_link ndt_base_link 100\n\n$ ros2 topic echo /tf2pose/pose -n1\nheader:\n  seq: 13\nstamp:\n    secs: 1605168366\nnsecs: 549174070\nframe_id: \"base_link\"\npose:\n  position:\n    x: 0.0387684271191\n    y: -0.00320360406477\n    z: 0.000276674520819\n  orientation:\n    x: 0.000335221893885\n    y: 0.000122020672186\n    z: -0.00539673212896\n    w: 0.999985368502\n---\n
"},{"location":"common/tier4_debug_tools/#pose2tf","title":"pose2tf","text":"

This tool converts any pose topic to tf.

ros2 run tier4_debug_tools pose2tf {pose_topic_name} {tf_name}\n

Example:

$ ros2 run tier4_debug_tools pose2tf /localization/pose_estimator/pose ndt_pose\n\n$ ros2 run tf tf_echo ndt_pose ndt_base_link 100\nAt time 1605168365.449\n- Translation: [0.000, 0.000, 0.000]\n- Rotation: in Quaternion [0.000, 0.000, 0.000, 1.000]\nin RPY (radian) [0.000, -0.000, 0.000]\nin RPY (degree) [0.000, -0.000, 0.000]\n
"},{"location":"common/tier4_debug_tools/#stop_reason2pose","title":"stop_reason2pose","text":"

This tool extracts pose from stop_reasons. Topics without numbers such as /stop_reason2pose/pose/detection_area are the nearest stop_reasons, and topics with numbers are individual stop_reasons that are roughly matched with previous ones.

ros2 run tier4_debug_tools stop_reason2pose {stop_reason_topic_name}\n

Example:

$ ros2 run tier4_debug_tools stop_reason2pose /planning/scenario_planning/status/stop_reasons\n\n$ ros2 topic list | ag stop_reason2pose\n/stop_reason2pose/pose/detection_area\n/stop_reason2pose/pose/detection_area_1\n/stop_reason2pose/pose/obstacle_stop\n/stop_reason2pose/pose/obstacle_stop_1\n\n$ ros2 topic echo /stop_reason2pose/pose/detection_area -n1\nheader:\n  seq: 1\nstamp:\n    secs: 1605168355\nnsecs:    821713\nframe_id: \"map\"\npose:\n  position:\n    x: 60608.8433457\n    y: 43886.2410876\n    z: 44.9078212441\n  orientation:\n    x: 0.0\n    y: 0.0\n    z: -0.190261378408\n    w: 0.981733470901\n---\n
"},{"location":"common/tier4_debug_tools/#stop_reason2tf","title":"stop_reason2tf","text":"

This is an all-in-one script that uses tf2pose, pose2tf, and stop_reason2pose. With this tool, you can view the relative position from base_link to the nearest stop_reason.

ros2 run tier4_debug_tools stop_reason2tf {stop_reason_name}\n

Example:

$ ros2 run tier4_debug_tools stop_reason2tf obstacle_stop\nAt time 1605168359.501\n- Translation: [0.291, -0.095, 0.266]\n- Rotation: in Quaternion [0.007, 0.011, -0.005, 1.000]\nin RPY (radian) [0.014, 0.023, -0.010]\nin RPY (degree) [0.825, 1.305, -0.573]\n
"},{"location":"common/tier4_debug_tools/#lateral_error_publisher","title":"lateral_error_publisher","text":"

This node calculates the control error and localization error in the trajectory normal direction, as shown in the figure below.

Set the reference trajectory, vehicle pose and ground truth pose in the launch file.

ros2 launch tier4_debug_tools lateral_error_publisher.launch.xml\n
"},{"location":"common/tier4_localization_rviz_plugin/","title":"tier4_localization_rviz_plugin","text":""},{"location":"common/tier4_localization_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin can display the history of the localization obtained by ekf_localizer or ndt_scan_matching.

"},{"location":"common/tier4_localization_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_localization_rviz_plugin/#input","title":"Input","text":"Name Type Description input/pose geometry_msgs::msg::PoseStamped In input/pose, put the result of localization calculated by ekf_localizer or ndt_scan_matching"},{"location":"common/tier4_localization_rviz_plugin/#parameters","title":"Parameters","text":""},{"location":"common/tier4_localization_rviz_plugin/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description property_buffer_size_ int 100 Buffer size of topic property_line_view_ bool true Use Line property or not property_line_width_ float 0.1 Width of Line property [m] property_line_alpha_ float 1.0 Alpha of Line property property_line_color_ QColor Qt::white Color of Line property"},{"location":"common/tier4_localization_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/tier4_localization_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Add under the Displays panel.
  2. Select tier4_localization_rviz_plugin/PoseHistory and press OK.
  3. Enter the name of the topic where you want to view the trajectory.
"},{"location":"common/tier4_perception_rviz_plugin/","title":"tier4_perception_rviz_plugin","text":""},{"location":"common/tier4_perception_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin is used to generate dummy pedestrians, cars, and obstacles in the planning simulator.

"},{"location":"common/tier4_perception_rviz_plugin/#overview","title":"Overview","text":"

The CarInitialPoseTool sends a topic for generating a dummy car. The PedestrianInitialPoseTool sends a topic for generating a dummy pedestrian. The UnknownInitialPoseTool sends a topic for generating a dummy obstacle. The DeleteAllObjectsTool deletes the dummy cars, pedestrians, and obstacles displayed by the above three tools.

"},{"location":"common/tier4_perception_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_perception_rviz_plugin/#output","title":"Output","text":"Name Type Description /simulation/dummy_perception_publisher/object_info dummy_perception_publisher::msg::Object The topic on which to publish dummy object info"},{"location":"common/tier4_perception_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"common/tier4_perception_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"common/tier4_perception_rviz_plugin/#carpose","title":"CarPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] length_ float 4.0 X standard deviation for initial pose [m] width_ float 1.8 Y standard deviation for initial pose [m] height_ float 2.0 Z standard deviation for initial pose [m] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#buspose","title":"BusPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] length_ float 10.5 X standard deviation for initial pose [m] width_ float 2.5 Y standard deviation for initial pose [m] height_ float 3.5 Z standard deviation for initial pose [m] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#pedestrianpose","title":"PedestrianPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#unknownpose","title":"UnknownPose","text":"Name Type Default Value Description topic_property_ string /simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info std_dev_x_ float 0.03 X standard deviation for initial pose [m] std_dev_y_ float 0.03 Y standard deviation for initial pose [m] std_dev_z_ float 0.03 Z standard deviation for initial pose [m] std_dev_theta_ float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] position_z_ float 0.0 Z position for initial pose [m] velocity_ float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#deleteallobjects","title":"DeleteAllObjects","text":"Name Type Default Value Description topic_property_ string 
/simulation/dummy_perception_publisher/object_info The topic on which to publish dummy object info"},{"location":"common/tier4_perception_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This plugin assumes the use of the planning simulator.

"},{"location":"common/tier4_perception_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select + on the tool tab.
  2. Select one of the following: tier4_perception_rviz_plugin and press OK.
  3. Select the new item in the tool tab (2D Dummy Car in the example) and click on it in rviz.
"},{"location":"common/tier4_perception_rviz_plugin/#interactive-manipulation","title":"Interactive manipulation","text":"

You can interactively manipulate the object.

  1. Select \"Tool Properties\" in rviz.
  2. Select the corresponding object tab in the Tool Properties.
  3. Turn the \"Interactive\" checkbox on.
  4. Select the item in the tool tab that you haven't chosen yet.
  5. Key commands are as follows.
action key command ADD Shift + Click Right Button MOVE Hold down Right Button + Drag and Drop DELETE Alt + Click Right Button"},{"location":"common/tier4_planning_rviz_plugin/","title":"tier4_planning_rviz_plugin","text":"

This package includes jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are under the BSD license.

"},{"location":"common/tier4_planning_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the path, trajectory, and maximum speed.

"},{"location":"common/tier4_planning_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_planning_rviz_plugin/#input","title":"Input","text":"Name Type Description /input/path autoware_auto_planning_msgs::msg::Path The topic on which to subscribe path /input/trajectory autoware_auto_planning_msgs::msg::Trajectory The topic on which to subscribe trajectory /planning/scenario_planning/current_max_velocity tier4_planning_msgs/msg/VelocityLimit The topic on which to publish max velocity"},{"location":"common/tier4_planning_rviz_plugin/#output","title":"Output","text":"Name Type Description /planning/mission_planning/checkpoint geometry_msgs/msg/PoseStamped The topic on which to publish checkpoint"},{"location":"common/tier4_planning_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"common/tier4_planning_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"common/tier4_planning_rviz_plugin/#missioncheckpoint","title":"MissionCheckpoint","text":"Name Type Default Value Description pose_topic_property_ string mission_checkpoint The topic on which to publish checkpoint std_dev_x_ float 0.5 X standard deviation for checkpoint pose [m] std_dev_y_ float 0.5 Y standard deviation for checkpoint pose [m] std_dev_theta_ float M_PI / 12.0 Theta standard deviation for checkpoint pose [rad] position_z_ float 0.0 Z position for checkpoint pose [m]"},{"location":"common/tier4_planning_rviz_plugin/#path","title":"Path","text":"Name Type Default Value Description property_path_view_ bool true Use Path property or not property_path_width_ float 2.0 Width of Path property [m] property_path_alpha_ float 1.0 Alpha of Path property property_path_color_view_ bool false Use Constant Color or not property_path_color_ QColor Qt::black Color of Path property property_velocity_view_ bool true Use Velocity property or not property_velocity_alpha_ float 1.0 Alpha of Velocity property property_velocity_scale_ float 0.3 Scale of Velocity property property_velocity_color_view_ bool false Use Constant Color or not property_velocity_color_ QColor Qt::black Color of Velocity property property_vel_max_ float 3.0 Max velocity [m/s]"},{"location":"common/tier4_planning_rviz_plugin/#drivablearea","title":"DrivableArea","text":"Name Type Default Value Description color_scheme_property_ int 0 Color scheme of DrivableArea property alpha_property_ float 0.2 Alpha of DrivableArea property draw_under_property_ bool false Draw as background or not"},{"location":"common/tier4_planning_rviz_plugin/#pathfootprint","title":"PathFootprint","text":"Name Type Default Value Description property_path_footprint_view_ bool true Use Path Footprint property or not property_path_footprint_alpha_ float 1.0 Alpha of Path Footprint property property_path_footprint_color_ QColor Qt::black Color of Path Footprint property property_vehicle_length_ float 4.77 Vehicle length [m] property_vehicle_width_ float 1.83 Vehicle width [m] property_rear_overhang_ float 1.03 Rear overhang [m]"},{"location":"common/tier4_planning_rviz_plugin/#trajectory","title":"Trajectory","text":"Name Type Default Value Description property_path_view_ bool true Use Path property or not property_path_width_ float 2.0 Width of Path property [m] property_path_alpha_ float 1.0 Alpha of Path property property_path_color_view_ bool false Use Constant Color or not property_path_color_ QColor Qt::black Color of Path property property_velocity_view_ bool true Use Velocity property or not 
property_velocity_alpha_ float 1.0 Alpha of Velocity property property_velocity_scale_ float 0.3 Scale of Velocity property property_velocity_color_view_ bool false Use Constant Color or not property_velocity_color_ QColor Qt::black Color of Velocity property property_velocity_text_view_ bool false View text Velocity property_velocity_text_scale_ float 0.3 Scale of Velocity property property_vel_max_ float 3.0 Max velocity [m/s]"},{"location":"common/tier4_planning_rviz_plugin/#trajectoryfootprint","title":"TrajectoryFootprint","text":"Name Type Default Value Description property_trajectory_footprint_view_ bool true Use Trajectory Footprint property or not property_trajectory_footprint_alpha_ float 1.0 Alpha of Trajectory Footprint property property_trajectory_footprint_color_ QColor QColor(230, 230, 50) Color of Trajectory Footprint property property_vehicle_length_ float 4.77 Vehicle length [m] property_vehicle_width_ float 1.83 Vehicle width [m] property_rear_overhang_ float 1.03 Rear overhang [m] property_trajectory_point_view_ bool false Use Trajectory Point property or not property_trajectory_point_alpha_ float 1.0 Alpha of Trajectory Point property property_trajectory_point_color_ QColor QColor(0, 60, 255) Color of Trajectory Point property property_trajectory_point_radius_ float 0.1 Radius of Trajectory Point property"},{"location":"common/tier4_planning_rviz_plugin/#maxvelocity","title":"MaxVelocity","text":"Name Type Default Value Description property_topic_name_ string /planning/scenario_planning/current_max_velocity The topic on which to subscribe max velocity property_text_color_ QColor QColor(255, 255, 255) Text color property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 96 Length of the plotter window [px] property_value_scale_ float 1.0 / 4.0 Value scale"},{"location":"common/tier4_planning_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Add under the Displays panel.
  2. Select any one of the tier4_planning_rviz_plugin and press OK.
  3. Enter the name of the topic where you want to view the path or trajectory.
"},{"location":"common/tier4_screen_capture_rviz_plugin/","title":"tier4_screen_capture_rviz_plugin","text":""},{"location":"common/tier4_screen_capture_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin captures the screen of rviz.

"},{"location":"common/tier4_screen_capture_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This plugin is only for debugging and analysis. The screen capture button is still in beta and can slow the frame rate, so set a lower frame rate according to your PC's specs.

"},{"location":"common/tier4_screen_capture_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select panels/Add new panel.
"},{"location":"common/tier4_simulated_clock_rviz_plugin/","title":"tier4_simulated_clock_rviz_plugin","text":""},{"location":"common/tier4_simulated_clock_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin allows publishing and controlling the simulated ROS time.

"},{"location":"common/tier4_simulated_clock_rviz_plugin/#output","title":"Output","text":"Name Type Description /clock rosgraph_msgs::msg::Clock the current simulated time"},{"location":"common/tier4_simulated_clock_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.
  2. Select tier4_clock_rviz_plugin/SimulatedClock and press OK.
  3. Use the added panel to control how the simulated clock is published.

    • Pause button: pause/resume the clock.
    • Speed: speed of the clock relative to the system clock.
    • Rate: publishing rate of the clock.
    • Step button: advance the clock by the specified time step.
    • Time step: value used to advance the clock when pressing the step button d).
    • Time unit: time unit associated with the value from e).
"},{"location":"common/tier4_state_rviz_plugin/","title":"tier4_state_rviz_plugin","text":""},{"location":"common/tier4_state_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin displays the current status of Autoware. It can also engage the vehicle from the panel.

"},{"location":"common/tier4_state_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_state_rviz_plugin/#input","title":"Input","text":"Name Type Description /api/operation_mode/state autoware_adapi_v1_msgs::msg::OperationModeState The topic represents the state of operation mode /api/routing/state autoware_adapi_v1_msgs::msg::RouteState The topic represents the state of route /api/localization/initialization_state autoware_adapi_v1_msgs::msg::LocalizationInitializationState The topic represents the state of localization initialization /api/motion/state autoware_adapi_v1_msgs::msg::MotionState The topic represents the state of motion /api/autoware/get/emergency tier4_external_api_msgs::msg::Emergency The topic represents the state of external emergency /vehicle/status/gear_status autoware_auto_vehicle_msgs::msg::GearReport The topic represents the state of gear"},{"location":"common/tier4_state_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/operation_mode/change_to_autonomous autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to autonomous /api/operation_mode/change_to_stop autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to stop /api/operation_mode/change_to_local autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to local /api/operation_mode/change_to_remote autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to change operation mode to remote /api/operation_mode/enable_autoware_control autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to enable vehicle control by Autoware /api/operation_mode/disable_autoware_control autoware_adapi_v1_msgs::srv::ChangeOperationMode The service to disable vehicle control by Autoware /api/routing/clear_route autoware_adapi_v1_msgs::srv::ClearRoute The service to clear route state /api/motion/accept_start autoware_adapi_v1_msgs::srv::AcceptStart The service to accept the vehicle to start /api/autoware/set/emergency tier4_external_api_msgs::srv::SetEmergency The service to set external emergency /planning/scenario_planning/max_velocity_default tier4_planning_msgs::msg::VelocityLimit The topic to set maximum speed of the vehicle"},{"location":"common/tier4_state_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.

  2. Select tier4_state_rviz_plugin/AutowareStatePanel and press OK.

  3. If the auto button is activated, you can engage by clicking it.

"},{"location":"common/tier4_traffic_light_rviz_plugin/","title":"tier4_traffic_light_rviz_plugin","text":""},{"location":"common/tier4_traffic_light_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin panel publishes dummy traffic light signals.

"},{"location":"common/tier4_traffic_light_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_traffic_light_rviz_plugin/#output","title":"Output","text":"Name Type Description /perception/traffic_light_recognition/traffic_signals autoware_auto_perception_msgs::msg::TrafficSignalArray Publish traffic light signals"},{"location":"common/tier4_traffic_light_rviz_plugin/#howtouse","title":"HowToUse","text":"
  1. Start rviz and select panels/Add new panel.
  2. Select TrafficLightPublishPanel and press OK.
  3. Set the Traffic Light ID & Traffic Light Status and press the SET button.
  4. Traffic light signals are published while the PUBLISH button is pressed.
"},{"location":"common/tier4_vehicle_rviz_plugin/","title":"tier4_vehicle_rviz_plugin","text":"

This package includes jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are under the BSD license.

"},{"location":"common/tier4_vehicle_rviz_plugin/#purpose","title":"Purpose","text":"

This plugin provides a visual and easy-to-understand display of vehicle speed, turn signal, and steering status.

"},{"location":"common/tier4_vehicle_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_vehicle_rviz_plugin/#input","title":"Input","text":"Name Type Description /vehicle/status/velocity_status autoware_auto_vehicle_msgs::msg::VelocityReport The topic is vehicle twist /control/turn_signal_cmd autoware_auto_vehicle_msgs::msg::TurnIndicatorsReport The topic is status of turn signal /vehicle/status/steering_status autoware_auto_vehicle_msgs::msg::SteeringReport The topic is status of steering"},{"location":"common/tier4_vehicle_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"common/tier4_vehicle_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"common/tier4_vehicle_rviz_plugin/#consolemeter","title":"ConsoleMeter","text":"Name Type Default Value Description property_text_color_ QColor QColor(25, 255, 240) Text color property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 256 Height of the plotter window [px] property_value_height_offset_ int 0 Height offset of the plotter window [px] property_value_scale_ float 1.0 / 6.667 Value scale"},{"location":"common/tier4_vehicle_rviz_plugin/#steeringangle","title":"SteeringAngle","text":"Name Type Default Value Description property_text_color_ QColor QColor(25, 255, 240) Text color property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_length_ int 256 Height of the plotter window [px] property_value_height_offset_ int 0 Height offset of the plotter window [px] property_value_scale_ float 1.0 / 6.667 Value scale property_handle_angle_scale_ float 3.0 Scale is steering angle to handle angle"},{"location":"common/tier4_vehicle_rviz_plugin/#turnsignal","title":"TurnSignal","text":"Name Type Default Value Description property_left_ int 128 Left of the plotter window [px] property_top_ int 128 Top of the plotter window [px] property_width_ int 256 Left of the plotter window [px] property_height_ int 256 Width of the plotter window [px]"},{"location":"common/tier4_vehicle_rviz_plugin/#velocityhistory","title":"VelocityHistory","text":"Name Type Default Value Description property_velocity_timeout_ float 10.0 Timeout of velocity [s] property_velocity_alpha_ float 1.0 Alpha of velocity property_velocity_scale_ float 0.3 Scale of velocity property_velocity_color_view_ bool false Use Constant Color or not property_velocity_color_ QColor Qt::black Color of velocity history property_vel_max_ float 3.0 Color Border Vel Max [m/s]"},{"location":"common/tier4_vehicle_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"common/tier4_vehicle_rviz_plugin/#usage","title":"Usage","text":"
  1. Start rviz and select Add under the Displays panel.
  2. Select any one of the tier4_vehicle_rviz_plugin displays and press OK.
  3. Enter the name of the topic whose status you want to view.
"},{"location":"common/tvm_utility/artifacts/","title":"Macro Syntax Error","text":"

File: common/tvm_utility/artifacts/README.md

Line 1 in Markdown file: Missing end of comment tag

# TVM Utility Artifacts {#tvm-utility-artifacts-readme}\n

"},{"location":"common/tvm_utility/design/tvm-utility-design/","title":"Macro Syntax Error","text":"

File: common/tvm_utility/design/tvm-utility-design.md

Line 1 in Markdown file: Missing end of comment tag

# TVM Utility {#tvm-utility-design}\n

"},{"location":"common/tvm_utility/design/tvm-utility-yolo-v2-tiny-tests/","title":"Macro Syntax Error","text":"

File: common/tvm_utility/design/tvm-utility-yolo-v2-tiny-tests.md

Line 1 in Markdown file: Missing end of comment tag

# YOLOv2 Tiny Example Pipeline {#tvm-utility-yolo-v2-tiny-tests}\n

"},{"location":"control/control_performance_analysis/","title":"control_performance_analysis","text":""},{"location":"control/control_performance_analysis/#purpose","title":"Purpose","text":"

control_performance_analysis is a package that analyzes the tracking performance of the control module and monitors the driving status of the vehicle.

This package is used as a tool to quantify the results of the control module, so it does not interfere with the core logic of autonomous driving.

Based on various inputs from the planning, control, and vehicle modules, it publishes the analysis results as a control_performance_analysis::msg::ErrorStamped message defined in this package.

All results in the ErrorStamped message are calculated in the Frenet frame of the reference curve. The errors and error velocities are calculated using the paper below.

Werling, M., Groell, L., & Bretthauer, G. (2010). Invariant Trajectory Tracking With a Full-Size Autonomous Road Vehicle. IEEE Transactions on Robotics, 26, 758-765. doi:10.1109/TRO.2010.2052325.

If you are interested in the calculations, see Section C, Asymptotical Trajectory Tracking With Orientation Control, of the paper for the error and error velocity calculations.

The error acceleration is derived from the error velocity calculations above; a sketch of the derivation is shown below.

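As a sketch only (not quoted verbatim from the paper): assuming the lateral error rate in the Frenet frame is $\dot{e}_d = v \sin(\theta_e)$, with vehicle speed $v$ and heading error $\theta_e$, differentiating once more gives the error acceleration

$$
\ddot{e}_d = \dot{v}\,\sin(\theta_e) + v\,\dot{\theta}_e\,\cos(\theta_e)
$$
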
"},{"location":"control/control_performance_analysis/#input-output","title":"Input / Output","text":""},{"location":"control/control_performance_analysis/#input-topics","title":"Input topics","text":"Name Type Description /planning/scenario_planning/trajectory autoware_auto_planning_msgs::msg::Trajectory Output trajectory from planning module. /control/command/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand Output control command from control module. /vehicle/status/steering_status autoware_auto_vehicle_msgs::msg::SteeringReport Steering information from vehicle. /localization/kinematic_state nav_msgs::msg::Odometry Use twist from odometry. /tf tf2_msgs::msg::TFMessage Extract ego pose from tf."},{"location":"control/control_performance_analysis/#output-topics","title":"Output topics","text":"Name Type Description /control_performance/performance_vars control_performance_analysis::msg::ErrorStamped The result of the performance analysis. /control_performance/driving_status control_performance_analysis::msg::DrivingMonitorStamped Driving status (acceleration, jerk etc.) monitoring"},{"location":"control/control_performance_analysis/#outputs","title":"Outputs","text":""},{"location":"control/control_performance_analysis/#control_performance_analysismsgdrivingmonitorstamped","title":"control_performance_analysis::msg::DrivingMonitorStamped","text":"Name Type Description longitudinal_acceleration float [m / s^2] longitudinal_jerk float [m / s^3] lateral_acceleration float [m / s^2] lateral_jerk float [m / s^3] desired_steering_angle float [rad] controller_processing_time float Timestamp between last two control command messages [ms]"},{"location":"control/control_performance_analysis/#control_performance_analysismsgerrorstamped","title":"control_performance_analysis::msg::ErrorStamped","text":"Name Type Description lateral_error float [m] lateral_error_velocity float [m / s] lateral_error_acceleration float [m / s^2] longitudinal_error float [m] longitudinal_error_velocity float [m / s] longitudinal_error_acceleration float [m / s^2] heading_error float [rad] heading_error_velocity float [rad / s] control_effort_energy float [u * R * u^T] error_energy float lateral_error^2 + heading_error^2 value_approximation float V = xPx' ; Value function from DARE Lyap matrix P curvature_estimate float [1 / m] curvature_estimate_pp float [1 / m] vehicle_velocity_error float [m / s] tracking_curvature_discontinuity_ability float Measures the ability to tracking the curvature changes [abs(delta(curvature)) / (1 + abs(delta(lateral_error))]"},{"location":"control/control_performance_analysis/#parameters","title":"Parameters","text":"Name Type Description curvature_interval_length double Used for estimating current curvature prevent_zero_division_value double Value to avoid zero division. Default is 0.001 odom_interval unsigned integer Interval between odom messages, increase it for smoother curve. 
acceptable_max_distance_to_waypoint double Maximum distance between trajectory point and vehicle [m] acceptable_max_yaw_difference_rad double Maximum yaw difference between trajectory point and vehicle [rad] low_pass_filter_gain double Low pass filter gain"},{"location":"control/control_performance_analysis/#usage","title":"Usage","text":" "},{"location":"control/control_performance_analysis/#future-improvements","title":"Future Improvements","text":""},{"location":"control/external_cmd_selector/","title":"external_cmd_selector","text":""},{"location":"control/external_cmd_selector/#purpose","title":"Purpose","text":"

external_cmd_selector is a package that publishes external_control_cmd, gear_cmd, hazard_lights_cmd, heartbeat and turn_indicators_cmd, according to the current mode, which is either remote or local.

The current mode is set via a service: in remote mode the vehicle is operated remotely, and in local mode it uses the values calculated by Autoware.

"},{"location":"control/external_cmd_selector/#input-output","title":"Input / Output","text":""},{"location":"control/external_cmd_selector/#input-topics","title":"Input topics","text":"Name Type Description /api/external/set/command/local/control TBD Local. Calculated control value. /api/external/set/command/local/heartbeat TBD Local. Heartbeat. /api/external/set/command/local/shift TBD Local. Gear shift like drive, rear and etc. /api/external/set/command/local/turn_signal TBD Local. Turn signal like left turn, right turn and etc. /api/external/set/command/remote/control TBD Remote. Calculated control value. /api/external/set/command/remote/heartbeat TBD Remote. Heartbeat. /api/external/set/command/remote/shift TBD Remote. Gear shift like drive, rear and etc. /api/external/set/command/remote/turn_signal TBD Remote. Turn signal like left turn, right turn and etc."},{"location":"control/external_cmd_selector/#output-topics","title":"Output topics","text":"Name Type Description /control/external_cmd_selector/current_selector_mode TBD Current selected mode, remote or local. /diagnostics diagnostic_msgs::msg::DiagnosticArray Check if node is active or not. /external/selected/external_control_cmd TBD Pass through control command with current mode. /external/selected/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand Pass through gear command with current mode. /external/selected/hazard_lights_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand Pass through hazard light with current mode. /external/selected/heartbeat TBD Pass through heartbeat with current mode. /external/selected/turn_indicators_cmd autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand Pass through turn indicator with current mode."},{"location":"control/joy_controller/","title":"joy_controller","text":""},{"location":"control/joy_controller/#role","title":"Role","text":"

joy_controller is a package that converts a joy message to Autoware commands (e.g. steering wheel, shift, turn signal, engage) for a vehicle.

"},{"location":"control/joy_controller/#input-output","title":"Input / Output","text":""},{"location":"control/joy_controller/#input-topics","title":"Input topics","text":"Name Type Description ~/input/joy sensor_msgs::msg::Joy joy controller command ~/input/odometry nav_msgs::msg::Odometry ego vehicle odometry to get twist"},{"location":"control/joy_controller/#output-topics","title":"Output topics","text":"Name Type Description ~/output/control_command autoware_auto_control_msgs::msg::AckermannControlCommand lateral and longitudinal control command ~/output/external_control_command tier4_external_api_msgs::msg::ControlCommandStamped lateral and longitudinal control command ~/output/shift tier4_external_api_msgs::msg::GearShiftStamped gear command ~/output/turn_signal tier4_external_api_msgs::msg::TurnSignalStamped turn signal command ~/output/gate_mode tier4_control_msgs::msg::GateMode gate mode (Auto or External) ~/output/heartbeat tier4_external_api_msgs::msg::Heartbeat heartbeat ~/output/vehicle_engage autoware_auto_vehicle_msgs::msg::Engage vehicle engage"},{"location":"control/joy_controller/#parameters","title":"Parameters","text":"Parameter Type Description joy_type string joy controller type (default: DS4) update_rate double update rate to publish control commands accel_ratio double ratio to calculate acceleration (commanded acceleration is ratio * operation) brake_ratio double ratio to calculate deceleration (commanded acceleration is -ratio * operation) steer_ratio double ratio to calculate deceleration (commanded steer is ratio * operation) steering_angle_velocity double steering angle velocity for operation accel_sensitivity double sensitivity to calculate acceleration for external API (commanded acceleration is pow(operation, 1 / sensitivity)) brake_sensitivity double sensitivity to calculate deceleration for external API (commanded acceleration is pow(operation, 1 / sensitivity)) velocity_gain double ratio to calculate velocity by acceleration max_forward_velocity double absolute max velocity to go forward max_backward_velocity double absolute max velocity to go backward backward_accel_ratio double ratio to calculate deceleration (commanded acceleration is -ratio * operation)"},{"location":"control/joy_controller/#p65-joystick-key-map","title":"P65 Joystick Key Map","text":"Action Button Acceleration R2 Brake L2 Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left L1 Turn Signal Right R1 Clear Turn Signal A Gate Mode B Emergency Stop Select Clear Emergency Stop Start Autoware Engage X Autoware Disengage Y Vehicle Engage PS Vehicle Disengage Right Trigger"},{"location":"control/joy_controller/#ds4-joystick-key-map","title":"DS4 Joystick Key Map","text":"Action Button Acceleration R2, \u00d7, or Right Stick Up Brake L2, \u25a1, or Right Stick Down Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left L1 Turn Signal Right R1 Clear Turn Signal SHARE Gate Mode OPTIONS Emergency Stop PS Clear Emergency Stop PS Autoware Engage \u25cb Autoware Disengage \u25cb Vehicle Engage \u25b3 Vehicle Disengage \u25b3"},{"location":"control/lane_departure_checker/","title":"Lane Departure Checker","text":"

The Lane Departure Checker checks whether the vehicle follows the given trajectory. If it does not, it reports the status via diagnostic_updater.

"},{"location":"control/lane_departure_checker/#features","title":"Features","text":"

This package includes the following features:

"},{"location":"control/lane_departure_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/lane_departure_checker/#how-to-extend-footprint-by-covariance","title":"How to extend footprint by covariance","text":"
  1. Calculate the standard deviation of the error ellipse (covariance) in the vehicle coordinate frame.

    1. Transform the covariance into the vehicle coordinate frame.

    Calculate the covariance in the vehicle coordinate frame.

    2. The longitudinal length to expand corresponds to the marginal distribution of x_{vehicle}, which is represented by Cov_{vehicle}(0,0). In the same way, the lateral length is represented by Cov_{vehicle}(1,1). See the Wikipedia article on the marginal distribution for reference.

  2. Expand the footprint based on the standard deviation multiplied by footprint_margin_scale (see the sketch after this list).

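The following is a minimal sketch of these two steps, not the actual implementation; the names cov_xy_map, yaw, and FootprintMargin are assumptions, while footprint_margin_scale corresponds to the parameter of the same name below.

#include <Eigen/Core>
#include <Eigen/Geometry>
#include <cmath>

struct FootprintMargin
{
  double longitudinal;  // expansion along x_vehicle [m]
  double lateral;       // expansion along y_vehicle [m]
};

FootprintMargin calcMarginFromCovariance(
  const Eigen::Matrix2d & cov_xy_map, const double yaw, const double footprint_margin_scale)
{
  // 1. Rotate the x-y covariance from the map frame into the vehicle frame.
  const Eigen::Matrix2d rot = Eigen::Rotation2Dd(yaw).toRotationMatrix();
  const Eigen::Matrix2d cov_vehicle = rot.transpose() * cov_xy_map * rot;

  // 2. The marginal standard deviations sqrt(Cov(0,0)) and sqrt(Cov(1,1))
  //    give the longitudinal and lateral expansion lengths.
  return FootprintMargin{
    std::sqrt(cov_vehicle(0, 0)) * footprint_margin_scale,
    std::sqrt(cov_vehicle(1, 1)) * footprint_margin_scale};
}
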
"},{"location":"control/lane_departure_checker/#interface","title":"Interface","text":""},{"location":"control/lane_departure_checker/#input","title":"Input","text":""},{"location":"control/lane_departure_checker/#output","title":"Output","text":""},{"location":"control/lane_departure_checker/#parameters","title":"Parameters","text":""},{"location":"control/lane_departure_checker/#node-parameters","title":"Node Parameters","text":"Name Type Description Default value update_rate double Frequency for publishing [Hz] 10.0 visualize_lanelet bool Flag for visualizing lanelet False"},{"location":"control/lane_departure_checker/#core-parameters","title":"Core Parameters","text":"Name Type Description Default value footprint_margin_scale double Coefficient for expanding footprint margin. Multiplied by 1 standard deviation. 1.0 resample_interval double Minimum Euclidean distance between points when resample trajectory.[m] 0.3 max_deceleration double Maximum deceleration when calculating braking distance. 2.8 delay_time double Delay time which took to actuate brake when calculating braking distance. [second] 1.3 max_lateral_deviation double Maximum lateral deviation in vehicle coordinate. [m] 2.0 max_longitudinal_deviation double Maximum longitudinal deviation in vehicle coordinate. [m] 2.0 max_yaw_deviation_deg double Maximum ego yaw deviation from trajectory. [deg] 60.0"},{"location":"control/mpc_lateral_controller/","title":"MPC Lateral Controller","text":"

This is the design document for the lateral controller node in the trajectory_follower_node package.

"},{"location":"control/mpc_lateral_controller/#purpose-use-cases","title":"Purpose / Use cases","text":"

This node is used to generate lateral control commands (steering angle and steering rate) when following a path.

"},{"location":"control/mpc_lateral_controller/#design","title":"Design","text":"

The node uses an implementation of linear model predictive control (MPC) for accurate path tracking. The MPC uses a model of the vehicle to simulate the trajectory resulting from the control command. The optimization of the control command is formulated as a Quadratic Program (QP).

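As a generic sketch of that formulation (the exact matrices used by this node are an implementation detail): with a linearized vehicle model, minimizing a quadratic tracking cost over the prediction horizon condenses into a QP in the input sequence $U$:

$$
\begin{aligned}
& x_{k+1} = A x_k + B u_k, \\
& \min_{U = \{u_0, \dots, u_{N-1}\}} \sum_{k=0}^{N-1} \Big[ (x_k - x_k^{\mathrm{ref}})^\top Q \,(x_k - x_k^{\mathrm{ref}}) + u_k^\top R \, u_k \Big]
\;\Longrightarrow\;
\min_{U} \tfrac{1}{2} U^\top H U + f^\top U .
\end{aligned}
$$
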
Different vehicle models are implemented:

For the optimization, a Quadratic Programming (QP) solver is used and two options are currently implemented:

"},{"location":"control/mpc_lateral_controller/#filtering","title":"Filtering","text":"

Filtering is required for good noise reduction. A Butterworth filter is applied to the yaw and lateral errors used as input of the MPC, as well as to the output steering angle. Other filtering methods can be considered as long as the noise reduction performance is sufficient. The moving average filter, for example, is not suited and can yield worse results than no filtering at all.

"},{"location":"control/mpc_lateral_controller/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The tracking is not accurate if the first point of the reference trajectory is at or in front of the current ego pose.

"},{"location":"control/mpc_lateral_controller/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/mpc_lateral_controller/#inputs","title":"Inputs","text":"

Set the following from the controller_node

"},{"location":"control/mpc_lateral_controller/#outputs","title":"Outputs","text":"

Return a LateralOutput, which contains the following, to the controller node

"},{"location":"control/mpc_lateral_controller/#mpc-class","title":"MPC class","text":"

The MPC class (defined in mpc.hpp) provides the interface with the MPC algorithm. Once a vehicle model, a QP solver, and the reference trajectory to follow have been set (using setVehicleModel(), setQPSolver(), setReferenceTrajectory()), a lateral control command can be calculated by providing the current steer, velocity, and pose to function calculateMPC().

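A hypothetical usage sketch of that interface follows; the argument lists are assumptions, and the actual signatures live in mpc.hpp.

// Hypothetical sketch; see mpc.hpp for the actual signatures.
MPC mpc;
mpc.setVehicleModel(vehicle_model_ptr);  // e.g. a kinematics bicycle model
mpc.setQPSolver(qp_solver_ptr);          // e.g. the unconstraint_fast solver
mpc.setReferenceTrajectory(reference_trajectory, trajectory_param);

// Called periodically with the current vehicle state:
AckermannLateralCommand ctrl_cmd;
Trajectory predicted_trajectory;
Float32MultiArrayStamped diagnostic;
const bool ok = mpc.calculateMPC(
  current_steer, current_velocity, current_pose, ctrl_cmd, predicted_trajectory, diagnostic);
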
"},{"location":"control/mpc_lateral_controller/#parameter-description","title":"Parameter description","text":"

The default parameters defined in param/lateral_controller_defaults.param.yaml are adjusted to the AutonomouStuff Lexus RX 450h for under 40 km/h driving.

Name Type Description Default value show_debug_info bool display debug info false traj_resample_dist double distance of waypoints in resampling [m] 0.1 enable_path_smoothing bool path smoothing flag. This should be true when using path resampling to reduce resampling noise. true path_filter_moving_ave_num int number of data points of the moving average filter for path smoothing 35 path_smoothing_times int number of times of applying path smoothing filter 1 curvature_smoothing_num_ref_steer double index distance of points used in curvature calculation for reference steer command: p(i-num), p(i), p(i+num). A larger num makes less noisy values. 35 curvature_smoothing_num_traj double index distance of points used in curvature calculation for trajectory: p(i-num), p(i), p(i+num). A larger num makes less noisy values. 1 extend_trajectory_for_end_yaw_control bool trajectory extending flag for end yaw control. true steering_lpf_cutoff_hz double cutoff frequency of lowpass filter for steering output command [hz] 3.0 admissible_position_error double stop vehicle when following position error is larger than this value [m]. 5.0 admissible_yaw_error_rad double stop vehicle when following yaw angle error is larger than this value [rad]. 1.57 stop_state_entry_ego_speed *1 double threshold value of the ego vehicle speed used for the stop state entry condition 0.0 stop_state_entry_target_speed *1 double threshold value of the target speed used for the stop state entry condition 0.0 converged_steer_rad double threshold value of the steer convergence 0.1 keep_steer_control_until_converged bool keep steer control until steer is converged true new_traj_duration_time double threshold value of the time to be considered as new trajectory 1.0 new_traj_end_dist double threshold value of the distance between trajectory ends to be considered as new trajectory 0.3

(*1) To prevent unnecessary steering movement, the steering command is fixed to the previous value in the stop state.

"},{"location":"control/mpc_lateral_controller/#mpc-algorithm","title":"MPC algorithm","text":"Name Type Description Default value qp_solver_type string QP solver option. described below in detail. unconstraint_fast vehicle_model_type string vehicle model option. described below in detail. kinematics prediction_horizon int total prediction step for MPC 70 prediction_sampling_time double prediction period for one step [s] 0.1 weight_lat_error double weight for lateral error 0.1 weight_heading_error double weight for heading error 0.0 weight_heading_error_squared_vel_coeff double weight for heading error * velocity 5.0 weight_steering_input double weight for steering error (steer command - reference steer) 1.0 weight_steering_input_squared_vel_coeff double weight for steering error (steer command - reference steer) * velocity 0.25 weight_lat_jerk double weight for lateral jerk (steer(i) - steer(i-1)) * velocity 0.0 weight_terminal_lat_error double terminal cost weight for lateral error 1.0 weight_terminal_heading_error double terminal cost weight for heading error 0.1 zero_ff_steer_deg double threshold of feedforward angle [deg]. feedforward angle smaller than this value is set to zero. 2.0"},{"location":"control/mpc_lateral_controller/#vehicle","title":"Vehicle","text":"Name Type Description Default value cg_to_front_m double distance from baselink to the front axle[m] 1.228 cg_to_rear_m double distance from baselink to the rear axle [m] 1.5618 mass_fl double mass applied to front left tire [kg] 600 mass_fr double mass applied to front right tire [kg] 600 mass_rl double mass applied to rear left tire [kg] 600 mass_rr double mass applied to rear right tire [kg] 600 cf double front cornering power [N/rad] 155494.663 cr double rear cornering power [N/rad] 155494.663 steering_tau double steering dynamics time constant (1d approximation) for vehicle model [s] 0.3"},{"location":"control/mpc_lateral_controller/#how-to-tune-mpc-parameters","title":"How to tune MPC parameters","text":"
  1. Set appropriate vehicle kinematics parameters for the distance to the front and rear axles and the max steer angle. Also check that the input VehicleKinematicState has appropriate values (speed: vehicle rear-wheels-center velocity [km/h], angle: steering (tire) angle [rad]). These values give the controller the vehicle information needed for path following. Errors in these values cause fundamental tracking errors.

  2. Set an appropriate vehicle dynamics parameter for steering_tau, which is the approximated delay from the steering angle command to the actual steering angle.

  3. Set weight_steering_input = 1.0, weight_lat_error = 0.1, and the other weights to 0. If the vehicle oscillates when driving at low speed, set weight_lat_error smaller.

  4. Adjust the other weights. One simple way of tuning is to increase weight_lat_error until oscillation occurs. If the vehicle is unstable with a very small weight_lat_error, increase the terminal weights, weight_terminal_lat_error and weight_terminal_heading_error, to improve tracking stability. A larger prediction_horizon and a smaller prediction_sampling_time are effective for tracking performance, but this is a trade-off with computational cost. The other parameters can be adjusted as below.

"},{"location":"control/mpc_lateral_controller/#references-external-links","title":"References / External links","text":""},{"location":"control/mpc_lateral_controller/#related-issues","title":"Related issues","text":""},{"location":"control/obstacle_collision_checker/","title":"obstacle_collision_checker","text":""},{"location":"control/obstacle_collision_checker/#purpose","title":"Purpose","text":"

obstacle_collision_checker is a module that checks the predicted trajectory for collisions with obstacles and publishes diagnostic errors if a collision is found.

"},{"location":"control/obstacle_collision_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/obstacle_collision_checker/#flow-chart","title":"Flow chart","text":""},{"location":"control/obstacle_collision_checker/#algorithms","title":"Algorithms","text":""},{"location":"control/obstacle_collision_checker/#check-data","title":"Check data","text":"

Check that obstacle_collision_checker receives the no-ground pointcloud, the predicted trajectory, the reference trajectory, and the current velocity data.

"},{"location":"control/obstacle_collision_checker/#diagnostic-update","title":"Diagnostic update","text":"

If any collision is found on the predicted path, this module sets the diagnostic status to ERROR; otherwise it sets it to OK.

"},{"location":"control/obstacle_collision_checker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/obstacle_collision_checker/#input","title":"Input","text":"Name Type Description ~/input/trajectory autoware_auto_planning_msgs::msg::Trajectory Reference trajectory ~/input/trajectory autoware_auto_planning_msgs::msg::Trajectory Predicted trajectory /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"control/obstacle_collision_checker/#output","title":"Output","text":"Name Type Description ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization"},{"location":"control/obstacle_collision_checker/#parameters","title":"Parameters","text":"Name Type Description Default value delay_time double Delay time of vehicle [s] 0.3 footprint_margin double Foot print margin [m] 0.0 max_deceleration double Max deceleration for ego vehicle to stop [m/s^2] 2.0 resample_interval double Interval for resampling trajectory [m] 0.3 search_radius double Search distance from trajectory to point cloud [m] 5.0"},{"location":"control/obstacle_collision_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

To perform a proper collision check, it is necessary to get an accurately predicted trajectory and obstacle pointclouds without noise.

"},{"location":"control/operation_mode_transition_manager/","title":"operation_mode_transition_manager","text":""},{"location":"control/operation_mode_transition_manager/#purpose-use-cases","title":"Purpose / Use cases","text":"

This module is responsible for managing the different modes of operation for the Autoware system. The possible modes are:

There is also an In Transition state that occurs during each mode transition. During this state, the transition to the new operator is not yet complete, and the previous operator is still responsible for controlling the system until the transition is complete. Some actions may be restricted during the In Transition state, such as sudden braking or steering. (These restrictions are enforced by the vehicle_cmd_gate.)

"},{"location":"control/operation_mode_transition_manager/#features","title":"Features","text":" "},{"location":"control/operation_mode_transition_manager/#design","title":"Design","text":""},{"location":"control/operation_mode_transition_manager/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/operation_mode_transition_manager/#inputs","title":"Inputs","text":"

For the mode transition:

For the transition availability/completion check:

For the backward compatibility (to be removed):

"},{"location":"control/operation_mode_transition_manager/#outputs","title":"Outputs","text":" "},{"location":"control/operation_mode_transition_manager/#parameters","title":"Parameters","text":"Name Type Description Default value transition_timeout double If the state transition is not completed within this time, it is considered a transition failure. 10.0 frequency_hz double running hz 10.0 check_engage_condition double If false, autonomous transition is always available 0.1 nearest_dist_deviation_threshold double distance threshold used to find nearest trajectory point 3.0 nearest_yaw_deviation_threshold double angle threshold used to find nearest trajectory point 1.57

For engage_acceptable_limits related parameters:

Name Type Description Default value allow_autonomous_in_stopped bool If true, autonomous transition is available when the vehicle is stopped even if other checks fail. true dist_threshold double the distance between the trajectory and ego vehicle must be within this distance for Autonomous transition. 1.5 yaw_threshold double the yaw angle between trajectory and ego vehicle must be within this threshold for Autonomous transition. 0.524 speed_upper_threshold double the velocity deviation between control command and ego vehicle must be within this threshold for Autonomous transition. 10.0 speed_lower_threshold double the velocity deviation between the control command and ego vehicle must be within this threshold for Autonomous transition. -10.0 acc_threshold double the control command acceleration must be less than this threshold for Autonomous transition. 1.5 lateral_acc_threshold double the control command lateral acceleration must be less than this threshold for Autonomous transition. 1.0 lateral_acc_diff_threshold double the lateral acceleration deviation between the control command must be less than this threshold for Autonomous transition. 0.5

For stable_check related parameters:

Name Type Description Default value duration double the stable condition must be satisfied for this duration to complete the transition. 0.1 dist_threshold double the distance between the trajectory and ego vehicle must be within this distance to complete Autonomous transition. 1.5 yaw_threshold double the yaw angle between trajectory and ego vehicle must be within this threshold to complete Autonomous transition. 0.262 speed_upper_threshold double the velocity deviation between control command and ego vehicle must be within this threshold to complete Autonomous transition. 2.0 speed_lower_threshold double the velocity deviation between control command and ego vehicle must be within this threshold to complete Autonomous transition. 2.0"},{"location":"control/operation_mode_transition_manager/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"control/pid_longitudinal_controller/","title":"PID Longitudinal Controller","text":""},{"location":"control/pid_longitudinal_controller/#purpose-use-cases","title":"Purpose / Use cases","text":"

The longitudinal_controller computes the target acceleration to achieve the target velocity set at each point of the target trajectory using feedforward/feedback control.

It also contains a slope force correction that takes into account road slope information, and a delay compensation function. It is assumed that the target acceleration calculated here will be properly realized by the vehicle interface.

Note that the use of this module is not mandatory for Autoware if the vehicle supports the \"target speed\" interface.

"},{"location":"control/pid_longitudinal_controller/#design-inner-workings-algorithms","title":"Design / Inner-workings / Algorithms","text":""},{"location":"control/pid_longitudinal_controller/#states","title":"States","text":"

This module has four states, as shown below, in order to handle special processing in specific situations.

The state transition diagram is shown below.

"},{"location":"control/pid_longitudinal_controller/#logics","title":"Logics","text":""},{"location":"control/pid_longitudinal_controller/#control-block-diagram","title":"Control Block Diagram","text":""},{"location":"control/pid_longitudinal_controller/#feedforward-ff","title":"FeedForward (FF)","text":"

The reference acceleration set in the trajectory and slope compensation terms are output as a feedforward. Under ideal conditions with no modeling error, this FF term alone should be sufficient for velocity tracking.

Tracking errors caused by modeling or discretization errors are removed by the feedback control (currently a PID controller).

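Schematically (a sketch, not the exact implementation), the commanded acceleration is composed as

$$
a_{\mathrm{cmd}} = \underbrace{a_{\mathrm{ref}} + a_{\mathrm{slope}}}_{\text{feedforward}} + \underbrace{a_{\mathrm{fb}}(v_{\mathrm{target}} - v_{\mathrm{ego}})}_{\text{feedback}}
$$
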
"},{"location":"control/pid_longitudinal_controller/#brake-keeping","title":"Brake keeping","text":"

From the viewpoint of ride comfort, stopping with 0 acceleration is important because it reduces the impact of braking. However, if the target acceleration when stopping is 0, the vehicle may cross over the stop line or accelerate a little in front of the stop line due to vehicle model error or gradient estimation error.

For reliable stopping, the target acceleration calculated by the FeedForward system is limited to a negative acceleration when stopping.

"},{"location":"control/pid_longitudinal_controller/#slope-compensation","title":"Slope compensation","text":"

Based on the slope information, a compensation term is added to the target acceleration.

There are two sources of the slope information, which can be switched by a parameter.

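The usual form of such a compensation term (a sketch, assuming an estimated pitch angle $\theta$ and gravitational acceleration $g$) is

$$
a_{\mathrm{slope}} = g \sin(\theta)
$$
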
"},{"location":"control/pid_longitudinal_controller/#pid-control","title":"PID control","text":"

For deviations that cannot be handled by FeedForward control, such as model errors, PID control is used to construct a feedback system.

This PID control calculates the target acceleration from the deviation between the current ego-velocity and the target velocity.

This PID logic has a maximum value for the output of each term. This is to prevent the following:

Also, the integral term is not accumulated when the vehicle is stopped. This is to prevent unintended accumulation of the integral term in cases where, for example, Autoware assumes that the vehicle is engaged but an external system has locked the vehicle so that it cannot start. On the other hand, if the vehicle gets stuck in a depression in the road surface when starting, it will never start; this is currently being addressed by the developers.

At present, PID control is implemented from the viewpoint of trade-off between development/maintenance cost and performance. This may be replaced by a higher performance controller (adaptive control or robust control) in future development.

"},{"location":"control/pid_longitudinal_controller/#time-delay-compensation","title":"Time delay compensation","text":"

At high speeds, the delay of actuator systems such as gas pedals and brakes has a significant impact on driving accuracy. Depending on the actuating principle of the vehicle, the mechanism that physically controls the gas pedal and brake typically has a delay of about a hundred milliseconds.

In this controller, the predicted ego-velocity and the target velocity after the delay time are calculated and used for the feedback to address the time delay problem.

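Schematically (a sketch, assuming a roughly constant commanded acceleration over the delay), the velocity used for the feedback is predicted forward by the delay time:

$$
v_{\mathrm{pred}}(t + t_{\mathrm{delay}}) \approx v_{\mathrm{ego}}(t) + a_{\mathrm{cmd}}(t)\, t_{\mathrm{delay}}
$$
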
"},{"location":"control/pid_longitudinal_controller/#slope-compensation_1","title":"Slope compensation","text":"

Based on the slope information, a compensation term is added to the target acceleration.

There are two sources of the slope information, which can be switched by a parameter.

"},{"location":"control/pid_longitudinal_controller/#assumptions-known-limits","title":"Assumptions / Known limits","text":"
  1. Smoothed target velocity and its acceleration shall be set in the trajectory
    1. The velocity command is not smoothed inside the controller (only noise may be removed).
    2. For a step-like target signal, tracking is performed as fast as possible.
  2. The vehicle velocity must be an appropriate value
    1. The ego-velocity must be a signed value corresponding to the forward/backward direction
    2. The ego-velocity should be given with appropriate noise processing.
    3. If there is a large amount of noise in the ego-velocity, the tracking performance will be significantly reduced.
  3. The output of this controller must be achieved by later modules (e.g. vehicle interface).
    1. If the vehicle interface does not have the target velocity or acceleration interface (e.g., the vehicle only has a gas pedal and brake interface), an appropriate conversion must be done after this controller.
"},{"location":"control/pid_longitudinal_controller/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/pid_longitudinal_controller/#input","title":"Input","text":"

Set the following from the controller_node

"},{"location":"control/pid_longitudinal_controller/#output","title":"Output","text":"

Return a LongitudinalOutput, which contains the following, to the controller node

"},{"location":"control/pid_longitudinal_controller/#pidcontroller-class","title":"PIDController class","text":"

The PIDController class is straightforward to use. First, gains and limits must be set (using setGains() and setLimits()) for the proportional (P), integral (I), and derivative (D) components. Then, the control value can be calculated by providing the current error and the time step duration to the calculate() function.

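A hypothetical usage sketch follows; the exact signatures of setGains(), setLimits(), and calculate() are assumptions, not the verbatim API.

#include <vector>

// Hypothetical sketch of using the PIDController class described above.
PIDController pid;
pid.setGains(1.0 /* kp */, 0.1 /* ki */, 0.0 /* kd */);
pid.setLimits(
  1.0, -1.0,  // max/min total output [m/s^2]
  1.0, -1.0,  // max/min P contribution
  0.3, -0.3,  // max/min I contribution
  0.0, 0.0);  // max/min D contribution

const double target_velocity = 5.0;   // [m/s]
const double current_velocity = 4.2;  // [m/s]
const bool is_stopped = false;
const double error = target_velocity - current_velocity;
const double dt = 0.03;  // time step duration [s]
std::vector<double> pid_contributions(3);  // optional P/I/D breakdown
// The integral term is frozen while the vehicle is stopped (anti-windup).
const double acc_cmd = pid.calculate(error, dt, !is_stopped, pid_contributions);
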
"},{"location":"control/pid_longitudinal_controller/#parameter-description","title":"Parameter description","text":"

The default parameters defined in param/longitudinal_controller_defaults.param.yaml are adjusted to the AutonomouStuff Lexus RX 450h for under 40 km/h driving.

Name Type Description Default value delay_compensation_time double delay for longitudinal control [s] 0.17 enable_smooth_stop bool flag to enable transition to STOPPING true enable_overshoot_emergency bool flag to enable transition to EMERGENCY when the ego is over the stop line by a certain distance. See emergency_state_overshoot_stop_dist. true enable_large_tracking_error_emergency bool flag to enable transition to EMERGENCY when the closest trajectory point search fails due to a large deviation between trajectory and ego pose. true enable_slope_compensation bool flag to modify output acceleration for slope compensation. The source of the slope angle can be selected from ego-pose or trajectory angle. See use_trajectory_for_pitch_calculation. true enable_brake_keeping_before_stop bool flag to keep a certain acceleration during DRIVE state before the ego stops. See Brake keeping. false enable_keep_stopped_until_steer_convergence bool flag to keep stopped condition until the steer converges. true max_acc double max value of output acceleration [m/s^2] 3.0 min_acc double min value of output acceleration [m/s^2] -5.0 max_jerk double max value of jerk of output acceleration [m/s^3] 2.0 min_jerk double min value of jerk of output acceleration [m/s^3] -5.0 use_trajectory_for_pitch_calculation bool If true, the slope is estimated from trajectory z-level. Otherwise the pitch angle of the ego pose is used. false lpf_pitch_gain double gain of low-pass filter for pitch estimation 0.95 max_pitch_rad double max value of estimated pitch [rad] 0.1 min_pitch_rad double min value of estimated pitch [rad] -0.1"},{"location":"control/pid_longitudinal_controller/#state-transition","title":"State transition","text":"Name Type Description Default value drive_state_stop_dist double The state will transit to DRIVE when the distance to the stop point is longer than drive_state_stop_dist + drive_state_offset_stop_dist [m] 0.5 drive_state_offset_stop_dist double The state will transit to DRIVE when the distance to the stop point is longer than drive_state_stop_dist + drive_state_offset_stop_dist [m] 1.0 stopping_state_stop_dist double The state will transit to STOPPING when the distance to the stop point is shorter than stopping_state_stop_dist [m] 0.5 stopped_state_entry_vel double threshold of the ego velocity in transition to the STOPPED state [m/s] 0.01 stopped_state_entry_acc double threshold of the ego acceleration in transition to the STOPPED state [m/s^2] 0.1 emergency_state_overshoot_stop_dist double If enable_overshoot_emergency is true and the ego is emergency_state_overshoot_stop_dist meters ahead of the stop point, the state will transit to EMERGENCY. [m] 1.5 emergency_state_traj_trans_dev double If the ego's position is emergency_state_traj_trans_dev meters away from the nearest trajectory point, the state will transit to EMERGENCY. [m] 3.0 emergency_state_traj_rot_dev double If the ego's orientation is emergency_state_traj_rot_dev rad away from the nearest trajectory point orientation, the state will transit to EMERGENCY. 
[rad] 0.784"},{"location":"control/pid_longitudinal_controller/#drive-parameter","title":"DRIVE Parameter","text":"Name Type Description Default value kp double p gain for longitudinal control 1.0 ki double i gain for longitudinal control 0.1 kd double d gain for longitudinal control 0.0 max_out double max value of PID's output acceleration during DRIVE state [m/s^2] 1.0 min_out double min value of PID's output acceleration during DRIVE state [m/s^2] -1.0 max_p_effort double max value of acceleration with p gain 1.0 min_p_effort double min value of acceleration with p gain -1.0 max_i_effort double max value of acceleration with i gain 0.3 min_i_effort double min value of acceleration with i gain -0.3 max_d_effort double max value of acceleration with d gain 0.0 min_d_effort double min value of acceleration with d gain 0.0 lpf_vel_error_gain double gain of low-pass filter for velocity error 0.9 current_vel_threshold_pid_integration double Velocity error is integrated for I-term only when the absolute value of current velocity is larger than this parameter. [m/s] 0.5 brake_keeping_acc double If enable_brake_keeping_before_stop is true, a certain acceleration is kept during DRIVE state before the ego stops [m/s^2] See Brake keeping. 0.2"},{"location":"control/pid_longitudinal_controller/#stopping-parameter-smooth-stop","title":"STOPPING Parameter (smooth stop)","text":"

Smooth stop is enabled if enable_smooth_stop is true. In smooth stop, a strong acceleration (strong_acc) is output first to decrease the ego velocity. Then a weak acceleration (weak_acc) is output to stop smoothly by decreasing the ego jerk. If the ego does not stop within a certain time, or runs over the stop point by some distance, a weak acceleration to stop right now (weak_stop_acc) is output. If the ego is still running after that, a strong acceleration to stop right now (strong_stop_acc) is output, as sketched below.

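A schematic sketch of that staged logic (not the actual implementation; the predicate variables are hypothetical, while the parameter names mirror the table below):

// Schematic sketch of the staged smooth-stop acceleration selection.
struct SmoothStopParams
{
  double smooth_stop_min_strong_acc;   // [m/s^2]
  double smooth_stop_weak_acc;         // [m/s^2]
  double smooth_stop_weak_stop_acc;    // [m/s^2]
  double smooth_stop_strong_stop_acc;  // [m/s^2]
};

double selectSmoothStopAcc(
  const bool is_running_fast, const bool weak_stop_time_over, const bool over_weak_stop_dist,
  const bool over_strong_stop_dist, const SmoothStopParams & p)
{
  if (over_strong_stop_dist) {
    return p.smooth_stop_strong_stop_acc;  // far over the stop point: stop right now, strongly
  }
  if (weak_stop_time_over || over_weak_stop_dist) {
    return p.smooth_stop_weak_stop_acc;  // time/distance budget exceeded: stop right now
  }
  if (is_running_fast) {
    return p.smooth_stop_min_strong_acc;  // first stage: strong acceleration to slow down
  }
  return p.smooth_stop_weak_acc;  // final stage: weak acceleration for a low-jerk stop
}
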
Name Type Description Default value smooth_stop_max_strong_acc double max strong acceleration [m/s^2] -0.5 smooth_stop_min_strong_acc double min strong acceleration [m/s^2] -0.8 smooth_stop_weak_acc double weak acceleration [m/s^2] -0.3 smooth_stop_weak_stop_acc double weak acceleration to stop right now [m/s^2] -0.8 smooth_stop_strong_stop_acc double strong acceleration to be output when the ego is smooth_stop_strong_stop_dist-meter over the stop point. [m/s^2] -3.4 smooth_stop_max_fast_vel double max fast vel to judge the ego is running fast [m/s]. If the ego is running fast, strong acceleration will be output. 0.5 smooth_stop_min_running_vel double min ego velocity to judge if the ego is running or not [m/s] 0.01 smooth_stop_min_running_acc double min ego acceleration to judge if the ego is running or not [m/s^2] 0.01 smooth_stop_weak_stop_time double max time to output weak acceleration [s]. After this, strong acceleration will be output. 0.8 smooth_stop_weak_stop_dist double Weak acceleration will be output when the ego is smooth_stop_weak_stop_dist-meter before the stop point. [m] -0.3 smooth_stop_strong_stop_dist double Strong acceleration will be output when the ego is smooth_stop_strong_stop_dist-meter over the stop point. [m] -0.5"},{"location":"control/pid_longitudinal_controller/#stopped-parameter","title":"STOPPED Parameter","text":"Name Type Description Default value stopped_vel double target velocity in STOPPED state [m/s] 0.0 stopped_acc double target acceleration in STOPPED state [m/s^2] -3.4 stopped_jerk double target jerk in STOPPED state [m/s^3] -5.0"},{"location":"control/pid_longitudinal_controller/#emergency-parameter","title":"EMERGENCY Parameter","text":"Name Type Description Default value emergency_vel double target velocity in EMERGENCY state [m/s] 0.0 emergency_acc double target acceleration in an EMERGENCY state [m/s^2] -5.0 emergency_jerk double target jerk in an EMERGENCY state [m/s^3] -3.0"},{"location":"control/pid_longitudinal_controller/#references-external-links","title":"References / External links","text":""},{"location":"control/pid_longitudinal_controller/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"control/pid_longitudinal_controller/#related-issues","title":"Related issues","text":""},{"location":"control/shift_decider/","title":"Shift Decider","text":""},{"location":"control/shift_decider/#purpose","title":"Purpose","text":"

shift_decider is a module that decides the gear shift from the Ackermann control command.

"},{"location":"control/shift_decider/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"control/shift_decider/#flow-chart","title":"Flow chart","text":""},{"location":"control/shift_decider/#algorithms","title":"Algorithms","text":""},{"location":"control/shift_decider/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/shift_decider/#input","title":"Input","text":"Name Type Description ~/input/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand Control command for vehicle."},{"location":"control/shift_decider/#output","title":"Output","text":"Name Type Description ~output/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand Gear for drive forward / backward."},{"location":"control/shift_decider/#parameters","title":"Parameters","text":"

none.

"},{"location":"control/shift_decider/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"control/trajectory_follower_base/","title":"Trajectory Follower","text":"

This is the design document for the trajectory_follower package.

"},{"location":"control/trajectory_follower_base/#purpose-use-cases","title":"Purpose / Use cases","text":"

This package provides the interface of longitudinal and lateral controllers used by the node of the trajectory_follower_node package. We can implement a detailed controller by deriving the longitudinal and lateral base interfaces.

"},{"location":"control/trajectory_follower_base/#design","title":"Design","text":"

There are lateral and longitudinal base interface classes, and each algorithm inherits from these classes for its implementation. The interface classes have the following base functions.

See the Design of Trajectory Follower Nodes for how these functions work in the node.

"},{"location":"control/trajectory_follower_base/#separated-lateral-steering-and-longitudinal-velocity-controls","title":"Separated lateral (steering) and longitudinal (velocity) controls","text":"

This longitudinal controller assumes that the roles of lateral and longitudinal control are separated as follows.

Ideally, dealing with the lateral and longitudinal control as a single mixed problem can achieve high performance. In contrast, there are two reasons, described below, to provide the velocity controller as a stand-alone function.

"},{"location":"control/trajectory_follower_base/#complex-requirements-for-longitudinal-motion","title":"Complex requirements for longitudinal motion","text":"

The longitudinal vehicle behavior that humans expect is difficult to express in a single logic. For example, to achieve a human-like movement, the expected behavior just before stopping differs depending on whether the ego-position is ahead of or behind the stop line, or whether the current speed is higher or lower than the target speed.

In addition, some vehicles have difficulty measuring the ego-speed at extremely low speeds. In such cases, a configuration that can improve the functionality of the longitudinal control without affecting the lateral control is important.

There are many characteristics and needs that are unique to longitudinal control. Designing them separately from the lateral control keeps the modules less coupled and improves maintainability.

"},{"location":"control/trajectory_follower_base/#nonlinear-coupling-of-lateral-and-longitudinal-motion","title":"Nonlinear coupling of lateral and longitudinal motion","text":"

The lat-lon mixed control problem is very complex and uses nonlinear optimization to achieve high performance. Since it is difficult to guarantee the convergence of the nonlinear optimization, a simple control logic is also necessary for development.

Also, the benefits of simultaneous longitudinal and lateral control are small if the vehicle doesn't move at high speed.

"},{"location":"control/trajectory_follower_base/#related-issues","title":"Related issues","text":""},{"location":"control/trajectory_follower_node/","title":"Trajectory Follower Nodes","text":""},{"location":"control/trajectory_follower_node/#purpose","title":"Purpose","text":"

Generate control commands to follow a given Trajectory.

"},{"location":"control/trajectory_follower_node/#design","title":"Design","text":"

This node runs the functionalities implemented in the controller classes derived from the trajectory_follower_base package. It has instances of those functionalities, gives them input data to perform calculations, and publishes control commands.

By default, a controller instance using the Controller class, as shown below, is used.

The process flow of the Controller class is as follows.

// 1. create input data
const auto input_data = createInputData(*get_clock());
if (!input_data) {
  return;
}

// 2. check if controllers are ready
const bool is_lat_ready = lateral_controller_->isReady(*input_data);
const bool is_lon_ready = longitudinal_controller_->isReady(*input_data);
if (!is_lat_ready || !is_lon_ready) {
  return;
}

// 3. run controllers
const auto lat_out = lateral_controller_->run(*input_data);
const auto lon_out = longitudinal_controller_->run(*input_data);

// 4. sync with each other controllers
longitudinal_controller_->sync(lat_out.sync_data);
lateral_controller_->sync(lon_out.sync_data);

// 5. publish control command
control_cmd_pub_->publish(out);

Giving the longitudinal controller information about steer convergence allows it to control the steering while stopped, if the following parameters are true

"},{"location":"control/trajectory_follower_node/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/trajectory_follower_node/#inputs","title":"Inputs","text":""},{"location":"control/trajectory_follower_node/#outputs","title":"Outputs","text":""},{"location":"control/trajectory_follower_node/#parameter","title":"Parameter","text":""},{"location":"control/trajectory_follower_node/#debugging","title":"Debugging","text":"

Debug information is published by the lateral and longitudinal controllers using tier4_debug_msgs/Float32MultiArrayStamped messages.

A configuration file for PlotJuggler is provided in the config folder which, when loaded, allows you to automatically subscribe to and visualize information useful for debugging.

In addition, the predicted MPC trajectory is published on topic output/lateral/predicted_trajectory and can be visualized in Rviz.

"},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/","title":"Simple Trajectory Follower","text":""},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/#purpose","title":"Purpose","text":"

Provide a base trajectory follower code that is simple and flexible to use. This node calculates the control command based on a reference trajectory and the ego vehicle kinematics.

"},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/#design","title":"Design","text":""},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/#inputs-outputs","title":"Inputs / Outputs","text":"

Inputs

"},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/#parameters","title":"Parameters","text":"Name Type Description Default value use_external_target_vel bool use external target velocity defined by parameter when true, else follow the velocity on target trajectory points. false external_target_vel float target velocity used when use_external_target_vel is true. 0.0 lateral_deviation float target lateral deviation when following. 0.0"},{"location":"control/vehicle_cmd_gate/","title":"vehicle_cmd_gate","text":""},{"location":"control/vehicle_cmd_gate/#purpose","title":"Purpose","text":"

vehicle_cmd_gate is a package that receives information from the emergency handler, the planning module, and the external controller, and sends messages to the vehicle.

"},{"location":"control/vehicle_cmd_gate/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"control/vehicle_cmd_gate/#input","title":"Input","text":"Name Type Description ~/input/steering autoware_auto_vehicle_msgs::msg::SteeringReport steering status ~/input/auto/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand command for lateral and longitudinal velocity from planning module ~/input/auto/turn_indicators_cmd autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand turn indicators command from planning module ~/input/auto/hazard_lights_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand hazard lights command from planning module ~/input/auto/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand gear command from planning module ~/input/external/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand command for lateral and longitudinal velocity from external ~/input/external/turn_indicators_cmd autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand turn indicators command from external ~/input/external/hazard_lights_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand hazard lights command from external ~/input/external/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand gear command from external ~/input/external_emergency_stop_heartbeat tier4_external_api_msgs::msg::Heartbeat heartbeat ~/input/gate_mode tier4_control_msgs::msg::GateMode gate mode (AUTO or EXTERNAL) ~/input/emergency/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand command for lateral and longitudinal velocity from emergency handler ~/input/emergency/hazard_lights_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand hazard lights command from emergency handler ~/input/emergency/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand gear command from emergency handler ~/input/engage autoware_auto_vehicle_msgs::msg::Engage engage signal ~/input/operation_mode autoware_adapi_v1_msgs::msg::OperationModeState operation mode of Autoware"},{"location":"control/vehicle_cmd_gate/#output","title":"Output","text":"Name Type Description ~/output/vehicle_cmd_emergency autoware_auto_system_msgs::msg::EmergencyState emergency state which was originally in vehicle command ~/output/command/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand command for lateral and longitudinal velocity to vehicle ~/output/command/turn_indicators_cmd autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand turn indicators command to vehicle ~/output/command/hazard_lights_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand hazard lights command to vehicle ~/output/command/gear_cmd autoware_auto_vehicle_msgs::msg::GearCommand gear command to vehicle ~/output/gate_mode tier4_control_msgs::msg::GateMode gate mode (AUTO or EXTERNAL) ~/output/engage autoware_auto_vehicle_msgs::msg::Engage engage signal ~/output/external_emergency tier4_external_api_msgs::msg::Emergency external emergency signal ~/output/operation_mode tier4_system_msgs::msg::OperationMode current operation mode of the vehicle_cmd_gate"},{"location":"control/vehicle_cmd_gate/#parameters","title":"Parameters","text":"Parameter Type Description update_period double update period use_emergency_handling bool true when emergency handler is used check_external_emergency_heartbeat bool true when checking heartbeat for emergency stop system_emergency_heartbeat_timeout double timeout for system emergency external_emergency_stop_heartbeat_timeout double timeout for external emergency 
stop_hold_acceleration double longitudinal acceleration cmd when the vehicle should stop emergency_acceleration double longitudinal acceleration cmd when the vehicle stops in an emergency nominal.vel_lim double limit of longitudinal velocity (activated in AUTONOMOUS operation mode) nominal.lon_acc_lim double limit of longitudinal acceleration (activated in AUTONOMOUS operation mode) nominal.lon_jerk_lim double limit of longitudinal jerk (activated in AUTONOMOUS operation mode) nominal.lat_acc_lim double limit of lateral acceleration (activated in AUTONOMOUS operation mode) nominal.lat_jerk_lim double limit of lateral jerk (activated in AUTONOMOUS operation mode) on_transition.vel_lim double limit of longitudinal velocity (activated in TRANSITION operation mode) on_transition.lon_acc_lim double limit of longitudinal acceleration (activated in TRANSITION operation mode) on_transition.lon_jerk_lim double limit of longitudinal jerk (activated in TRANSITION operation mode) on_transition.lat_acc_lim double limit of lateral acceleration (activated in TRANSITION operation mode) on_transition.lat_jerk_lim double limit of lateral jerk (activated in TRANSITION operation mode)"},{"location":"control/vehicle_cmd_gate/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The parameter check_external_emergency_heartbeat (true by default) enables an emergency stop request from external modules. This feature requires a ~/input/external_emergency_stop_heartbeat topic for health monitoring of the external module, and the vehicle_cmd_gate module will not start without the topic. The check_external_emergency_heartbeat parameter must be false when the \"external emergency stop\" function is not used.

"},{"location":"evaluator/diagnostic_converter/","title":"Planning Evaluator","text":""},{"location":"evaluator/diagnostic_converter/#purpose","title":"Purpose","text":"

This package provides a node to convert diagnostic_msgs::msg::DiagnosticArray messages into tier4_simulation_msgs::msg::UserDefinedValue messages.

"},{"location":"evaluator/diagnostic_converter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The node subscribes to all topics listed in the parameters and assumes they publish DiagnosticArray messages. Each time such a message is received, it is converted into as many UserDefinedValue messages as there are KeyValue objects. The format of the output topics is detailed in the output section.
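
The fan-out itself is simple; below is a minimal plain-C++ sketch of it. The topic-name scheme shown here (input topic + status name + key) is only an illustration, not necessarily the package's exact naming convention.

#include <string>\n#include <utility>\n#include <vector>\n\nstruct KeyValue { std::string key; std::string value; };\nstruct Status { std::string name; std::vector<KeyValue> values; };\n\n// One UserDefinedValue per KeyValue: pair each value with a generated topic name.\nstd::vector<std::pair<std::string, std::string>> convert(const std::string & input_topic, const std::vector<Status> & statuses)\n{\n  std::vector<std::pair<std::string, std::string>> out;\n  for (const auto & status : statuses) {\n    for (const auto & kv : status.values) {\n      out.emplace_back(input_topic + \"_\" + status.name + \"_\" + kv.key, kv.value);  // hypothetical naming\n    }\n  }\n  return out;\n}\n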

"},{"location":"evaluator/diagnostic_converter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"evaluator/diagnostic_converter/#inputs","title":"Inputs","text":"

The node listens to DiagnosticArray messages on the topics specified in the parameters.

"},{"location":"evaluator/diagnostic_converter/#outputs","title":"Outputs","text":"

The node outputs UserDefinedValue messages that are converted from the received DiagnosticArray.

The names of the output topics are generated from the corresponding input topic, the name of the diagnostic status, and the key of the diagnostic. For example, we might listen to the topic /diagnostic_topic and receive a DiagnosticArray with two statuses:

The resulting topics to publish the UserDefinedValue are as follows:

"},{"location":"evaluator/diagnostic_converter/#parameters","title":"Parameters","text":"Name Type Description diagnostic_topics list of string list of DiagnosticArray topics to convert to UserDefinedValue"},{"location":"evaluator/diagnostic_converter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Values in the KeyValue objects of a DiagnosticStatus are assumed to be of type double.

"},{"location":"evaluator/diagnostic_converter/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"evaluator/kinematic_evaluator/","title":"Kinematic Evaluator","text":"

TBD

"},{"location":"evaluator/localization_evaluator/","title":"Localization Evaluator","text":"

TBD

"},{"location":"evaluator/planning_evaluator/","title":"Planning Evaluator","text":""},{"location":"evaluator/planning_evaluator/#purpose","title":"Purpose","text":"

This package provides nodes that generate metrics to evaluate the quality of planning and control.

"},{"location":"evaluator/planning_evaluator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The evaluation node calculates metrics each time it receives a trajectory T(0). Metrics are calculated using the following information:

This information is maintained by an instance of the MetricsCalculator class, which is also responsible for calculating metrics.

"},{"location":"evaluator/planning_evaluator/#stat","title":"Stat","text":"

Each metric is calculated using a Stat instance which contains the minimum, maximum, and mean values calculated for the metric as well as the number of values measured.
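
A minimal sketch of such an accumulator (the member names here are illustrative, not the package's exact API):

#include <algorithm>\n#include <limits>\n\n// Running statistics over a stream of values: min, max, mean, and count.\nclass Stat\n{\npublic:\n  void add(const double v)\n  {\n    min_ = std::min(min_, v);\n    max_ = std::max(max_, v);\n    mean_ += (v - mean_) / static_cast<double>(++count_);  // incremental mean\n  }\n  double min() const { return min_; }\n  double max() const { return max_; }\n  double mean() const { return mean_; }\n  long count() const { return count_; }\n\nprivate:\n  double min_ = std::numeric_limits<double>::max();\n  double max_ = std::numeric_limits<double>::lowest();\n  double mean_ = 0.0;\n  long count_ = 0;\n};\n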

"},{"location":"evaluator/planning_evaluator/#metric-calculation-and-adding-more-metrics","title":"Metric calculation and adding more metrics","text":"

All possible metrics are defined in the Metric enumeration defined in include/planning_evaluator/metrics/metric.hpp. This file also defines conversions from/to string as well as human-readable descriptions to be used as the header of the output file.

The MetricsCalculator is responsible for calculating metric statistics through calls to the following function:

Stat<double> MetricsCalculator::calculate(const Metric metric, const Trajectory & traj) const;\n

Adding a new metric M requires the following steps:

"},{"location":"evaluator/planning_evaluator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"evaluator/planning_evaluator/#inputs","title":"Inputs","text":"Name Type Description ~/input/trajectory autoware_auto_planning_msgs::msg::Trajectory Main trajectory to evaluate ~/input/reference_trajectory autoware_auto_planning_msgs::msg::Trajectory Reference trajectory to use for deviation metrics ~/input/objects autoware_auto_perception_msgs::msg::PredictedObjects Obstacles"},{"location":"evaluator/planning_evaluator/#outputs","title":"Outputs","text":"

Each metric is published on a topic named after the metric name.

Name Type Description ~/metrics diagnostic_msgs::msg::DiagnosticArray DiagnosticArray with a DiagnosticStatus for each metric

When shut down, the evaluation node writes the values of the metrics measured during its lifetime to a file as specified by the output_file parameter.

"},{"location":"evaluator/planning_evaluator/#parameters","title":"Parameters","text":"Name Type Description output_file string file used to write metrics ego_frame string frame used for the ego pose selected_metrics List metrics to measure and publish trajectory.min_point_dist_m double minimum distance between two successive points to use for angle calculation trajectory.lookahead.max_dist_m double maximum distance from ego along the trajectory to use for calculation trajectory.lookahead.max_time_m double maximum time ahead of ego along the trajectory to use for calculation obstacle.dist_thr_m double distance between ego and the obstacle below which a collision is considered"},{"location":"evaluator/planning_evaluator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

There is a strong assumption that when receiving a trajectory T(0), it has been generated using the last received reference trajectory and objects. This can be wrong if a new reference trajectory or objects are published while T(0) is being calculated.

Precision is currently limited by the resolution of the trajectories. It is possible to interpolate the trajectory and reference trajectory to increase precision, but doing so would make computation significantly more expensive.

"},{"location":"evaluator/planning_evaluator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"launch/tier4_autoware_api_launch/","title":"tier4_autoware_api_launch","text":""},{"location":"launch/tier4_autoware_api_launch/#description","title":"Description","text":"

This package contains launch files that run nodes to convert Autoware internal topics into a consistent API used by external software (e.g., a fleet management system or a simulator).

"},{"location":"launch/tier4_autoware_api_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_autoware_api_launch/#usage","title":"Usage","text":"

You can include autoware_api.launch.xml in your *.launch.xml as follows.

  <include file=\"$(find-pkg-share tier4_autoware_api_launch)/launch/autoware_api.launch.xml\"/>\n
"},{"location":"launch/tier4_autoware_api_launch/#notes","title":"Notes","text":"

To reduce the processing load, we use the Component feature of ROS 2 (similar to Nodelet in ROS 1).
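
In practice, this means the nodes are built as composable components and loaded into a shared container process. A minimal sketch of such a component (the class and namespace are hypothetical):

#include <rclcpp/rclcpp.hpp>\n#include <rclcpp_components/register_node_macro.hpp>\n\nnamespace demo\n{\n// A node written as a composable component: no main(), construction via NodeOptions.\nclass TalkerNode : public rclcpp::Node\n{\npublic:\n  explicit TalkerNode(const rclcpp::NodeOptions & options) : Node(\"talker\", options) {}\n};\n}  // namespace demo\n\n// Register the class so a component container can load it at runtime.\nRCLCPP_COMPONENTS_REGISTER_NODE(demo::TalkerNode)\n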

"},{"location":"launch/tier4_control_launch/","title":"tier4_control_launch","text":""},{"location":"launch/tier4_control_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_control_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_control_launch/#usage","title":"Usage","text":"

You can include control.launch.py in your *.launch.xml as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of control.launch.py.

<include file=\"$(find-pkg-share tier4_control_launch)/launch/control.launch.py\">\n<!-- options for lateral_controller_mode: mpc_follower, pure_pursuit -->\n<!-- Parameter files -->\n<arg name=\"FOO_NODE_param_path\" value=\"...\"/>\n<arg name=\"BAR_NODE_param_path\" value=\"...\"/>\n...\n  <arg name=\"lateral_controller_mode\" value=\"mpc_follower\" />\n</include>\n
"},{"location":"launch/tier4_control_launch/#notes","title":"Notes","text":"

To reduce the processing load, we use the Component feature of ROS 2 (similar to Nodelet in ROS 1).

"},{"location":"launch/tier4_localization_launch/","title":"tier4_localization_launch","text":""},{"location":"launch/tier4_localization_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_localization_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_localization_launch/#usage","title":"Usage","text":"

Include localization.launch.xml in other launch files as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of localization.launch.xml.

  <include file=\"$(find-pkg-share tier4_localization_launch)/launch/localization.launch.xml\">\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n  </include>\n
"},{"location":"launch/tier4_map_launch/","title":"tier4_map_launch","text":""},{"location":"launch/tier4_map_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_map_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_map_launch/#usage","title":"Usage","text":"

You can include map.launch.py in your *.launch.xml as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of map.launch.py.

<arg name=\"map_path\" description=\"point cloud and lanelet2 map directory path\"/>\n<arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n<arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n\n<include file=\"$(find-pkg-share tier4_map_launch)/launch/map.launch.py\">\n<arg name=\"lanelet2_map_path\" value=\"$(var map_path)/$(var lanelet2_map_file)\" />\n<arg name=\"pointcloud_map_path\" value=\"$(var map_path)/$(var pointcloud_map_file)\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n</include>\n
"},{"location":"launch/tier4_map_launch/#notes","title":"Notes","text":"

To reduce the processing load, we use the Component feature of ROS 2 (similar to Nodelet in ROS 1).

"},{"location":"launch/tier4_perception_launch/","title":"tier4_perception_launch","text":""},{"location":"launch/tier4_perception_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_perception_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_perception_launch/#usage","title":"Usage","text":"

You can include perception.launch.xml in your *.launch.xml as follows.

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of perception.launch.xml.

  <include file=\"$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml\">\n<!-- options for mode: camera_lidar_fusion, lidar, camera -->\n<arg name=\"mode\" value=\"lidar\" />\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n  </include>\n
"},{"location":"launch/tier4_planning_launch/","title":"tier4_planning_launch","text":""},{"location":"launch/tier4_planning_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_planning_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_planning_launch/#usage","title":"Usage","text":"

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of planning.launch.xml.

<include file=\"$(find-pkg-share tier4_planning_launch)/launch/planning.launch.xml\">\n<!-- Parameter files -->\n<arg name=\"FOO_NODE_param_path\" value=\"...\"/>\n<arg name=\"BAR_NODE_param_path\" value=\"...\"/>\n...\n</include>\n
"},{"location":"launch/tier4_sensing_launch/","title":"tier4_sensing_launch","text":""},{"location":"launch/tier4_sensing_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_sensing_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_sensing_launch/#usage","title":"Usage","text":"

You can include sensing.launch.xml in your *.launch.xml as follows.

  <include file=\"$(find-pkg-share tier4_sensing_launch)/launch/sensing.launch.xml\">\n<arg name=\"launch_driver\" value=\"true\"/>\n<arg name=\"sensor_model\" value=\"$(var sensor_model)\"/>\n<arg name=\"vehicle_param_file\" value=\"$(find-pkg-share $(var vehicle_model)_description)/config/vehicle_info.param.yaml\"/>\n<arg name=\"vehicle_mirror_param_file\" value=\"$(find-pkg-share $(var vehicle_model)_description)/config/mirror.param.yaml\"/>\n</include>\n
"},{"location":"launch/tier4_sensing_launch/#launch-directory-structure","title":"Launch Directory Structure","text":"

This package finds the sensor settings of the specified sensor model at launch time.

launch/\n\u251c\u2500\u2500 aip_x1 # Sensor model name\n\u2502   \u251c\u2500\u2500 camera.launch.xml # Camera\n\u2502   \u251c\u2500\u2500 gnss.launch.xml # GNSS\n\u2502   \u251c\u2500\u2500 imu.launch.xml # IMU\n\u2502   \u251c\u2500\u2500 lidar.launch.xml # LiDAR\n\u2502   \u2514\u2500\u2500 pointcloud_preprocessor.launch.py # for preprocessing pointcloud\n...\n
"},{"location":"launch/tier4_sensing_launch/#notes","title":"Notes","text":"

This package finds settings using variables. For example:

<include file=\"$(find-pkg-share tier4_sensing_launch)/launch/$(var sensor_model)/lidar.launch.xml\">\n
"},{"location":"launch/tier4_simulator_launch/","title":"tier4_simulator_launch","text":""},{"location":"launch/tier4_simulator_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_simulator_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_simulator_launch/#usage","title":"Usage","text":"
  <include file=\"$(find-pkg-share tier4_simulator_launch)/launch/simulator.launch.xml\">\n<arg name=\"vehicle_info_param_file\" value=\"VEHICLE_INFO_PARAM_FILE\" />\n<arg name=\"vehicle_model\" value=\"VEHICLE_MODEL\"/>\n</include>\n

The simulator model used in simple_planning_simulator is loaded from \"config/simulator_model.param.yaml\" in the \"VEHICLE_MODEL_description\" package.

"},{"location":"launch/tier4_system_launch/","title":"tier4_system_launch","text":""},{"location":"launch/tier4_system_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_system_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_system_launch/#usage","title":"Usage","text":"

Note that you should provide parameter paths as PACKAGE_param_path. The list of parameter paths you should provide is written at the top of system.launch.xml.

  <include file=\"$(find-pkg-share tier4_system_launch)/launch/system.launch.xml\">\n<arg name=\"run_mode\" value=\"online\"/>\n<arg name=\"sensor_model\" value=\"SENSOR_MODEL\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n  </include>\n

The sensing configuration parameters used in system_error_monitor are loaded from \"config/diagnostic_aggregator/sensor_kit.param.yaml\" in the \"SENSOR_MODEL_description\" package.

"},{"location":"launch/tier4_vehicle_launch/","title":"tier4_vehicle_launch","text":""},{"location":"launch/tier4_vehicle_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_vehicle_launch/#package-dependencies","title":"Package Dependencies","text":"

Please see <exec_depend> in package.xml.

"},{"location":"launch/tier4_vehicle_launch/#usage","title":"Usage","text":"

You can include vehicle.launch.xml in your *.launch.xml as follows.

  <arg name=\"vehicle_model\" default=\"sample_vehicle\" description=\"vehicle model name\"/>\n<arg name=\"sensor_model\" default=\"sample_sensor_kit\" description=\"sensor model name\"/>\n\n<include file=\"$(find-pkg-share tier4_vehicle_launch)/launch/vehicle.launch.xml\">\n<arg name=\"vehicle_model\" value=\"$(var vehicle_model)\"/>\n<arg name=\"sensor_model\" value=\"$(var sensor_model)\"/>\n</include>\n
"},{"location":"launch/tier4_vehicle_launch/#notes","title":"Notes","text":"

This package finds some external packages and settings using variables and package names. For example:

<let name=\"vehicle_model_pkg\" value=\"$(find-pkg-share $(var vehicle_model)_description)\"/>\n
<arg name=\"config_dir\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)\"/>\n
"},{"location":"launch/tier4_vehicle_launch/#vehiclexacro","title":"vehicle.xacro","text":""},{"location":"launch/tier4_vehicle_launch/#arguments","title":"Arguments","text":"Name Type Description Default sensor_model String sensor model name \"\" vehicle_model String vehicle model name \"\""},{"location":"launch/tier4_vehicle_launch/#usage_1","title":"Usage","text":"

You can write the following in *.launch.xml.

  <arg name=\"vehicle_model\" default=\"sample_vehicle\" description=\"vehicle model name\"/>\n<arg name=\"sensor_model\" default=\"sample_sensor_kit\" description=\"sensor model name\"/>\n<arg name=\"model\" default=\"$(find-pkg-share tier4_vehicle_launch)/urdf/vehicle.xacro\"/>\n\n<node name=\"robot_state_publisher\" pkg=\"robot_state_publisher\" exec=\"robot_state_publisher\">\n<param name=\"robot_description\" value=\"$(command 'xacro $(var model) vehicle_model:=$(var vehicle_model) sensor_model:=$(var sensor_model)')\"/>\n</node>\n
"},{"location":"localization/ekf_localizer/","title":"Overview","text":"

The Extended Kalman Filter Localizer estimates a robust and less noisy robot pose and twist by integrating a 2D vehicle dynamics model with the input ego-pose and ego-twist messages. The algorithm is designed especially for fast-moving robots such as autonomous vehicles.

"},{"location":"localization/ekf_localizer/#flowchart","title":"Flowchart","text":"

The overall flowchart of the ekf_localizer is described below.

"},{"location":"localization/ekf_localizer/#features","title":"Features","text":"

This package includes the following features:

"},{"location":"localization/ekf_localizer/#launch","title":"Launch","text":"

The ekf_localizer starts with the default parameters using the following command.

roslaunch ekf_localizer ekf_localizer.launch\n

The parameters and input topic names can be set in the ekf_localizer.launch file.

"},{"location":"localization/ekf_localizer/#node","title":"Node","text":""},{"location":"localization/ekf_localizer/#subscribed-topics","title":"Subscribed Topics","text":" "},{"location":"localization/ekf_localizer/#published-topics","title":"Published Topics","text":" "},{"location":"localization/ekf_localizer/#published-tf","title":"Published TF","text":""},{"location":"localization/ekf_localizer/#functions","title":"Functions","text":""},{"location":"localization/ekf_localizer/#predict","title":"Predict","text":"

The current robot state is predicted from previously estimated data using a given prediction model. This calculation is called at a constant interval (predict_frequency [Hz]). The prediction equation is described at the end of this page.

"},{"location":"localization/ekf_localizer/#measurement-update","title":"Measurement Update","text":"

Before the update, the Mahalanobis distance is calculated between the measured input and the predicted state; the measurement update is not performed for inputs whose Mahalanobis distance exceeds the given threshold.

The predicted state is updated with the latest measured inputs, measured_pose, and measured_twist. The updates are performed with the same frequency as prediction, usually at a high frequency, in order to enable smooth state estimation.
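
A minimal sketch of the gating step using Eigen (the names are illustrative; the gate threshold corresponds to the pose_gate_dist / twist_gate_dist parameters below):

#include <Eigen/Dense>\n\n// Accept a measurement only when its Mahalanobis distance to the prediction\n// is within gate_dist; otherwise the measurement update is skipped.\nbool passes_mahalanobis_gate(const Eigen::VectorXd & measured, const Eigen::VectorXd & predicted, const Eigen::MatrixXd & covariance, const double gate_dist)\n{\n  const Eigen::VectorXd d = measured - predicted;\n  const double squared_distance = d.dot(covariance.ldlt().solve(d));  // d^T * C^-1 * d\n  return squared_distance <= gate_dist * gate_dist;\n}\n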

"},{"location":"localization/ekf_localizer/#parameter-description","title":"Parameter description","text":"

The parameters are set in launch/ekf_localizer.launch.

"},{"location":"localization/ekf_localizer/#for-node","title":"For Node","text":"Name Type Description Default value show_debug_info bool Flag to display debug info false predict_frequency double Frequency for filtering and publishing [Hz] 50.0 tf_rate double Frequency for tf broadcasting [Hz] 10.0 extend_state_step int Max delay step which can be dealt with in EKF. Large number increases computational cost. 50 enable_yaw_bias_estimation bool Flag to enable yaw bias estimation true"},{"location":"localization/ekf_localizer/#for-pose-measurement","title":"For pose measurement","text":"Name Type Description Default value pose_additional_delay double Additional delay time for pose measurement [s] 0.0 pose_measure_uncertainty_time double Measured time uncertainty used for covariance calculation [s] 0.01 pose_smoothing_steps int A value for smoothing steps 5 pose_gate_dist double Limit of Mahalanobis distance used for outliers detection 10000.0"},{"location":"localization/ekf_localizer/#for-twist-measurement","title":"For twist measurement","text":"Name Type Description Default value twist_additional_delay double Additional delay time for twist [s] 0.0 twist_smoothing_steps int A value for smoothing steps 2 twist_gate_dist double Limit of Mahalanobis distance used for outliers detection 10000.0"},{"location":"localization/ekf_localizer/#for-process-noise","title":"For process noise","text":"Name Type Description Default value proc_stddev_vx_c double Standard deviation of process noise in time differentiation expression of linear velocity x, noise for d_vx = 0 2.0 proc_stddev_wz_c double Standard deviation of process noise in time differentiation expression of angular velocity z, noise for d_wz = 0 0.2 proc_stddev_yaw_c double Standard deviation of process noise in time differentiation expression of yaw, noise for d_yaw = omega 0.005 proc_stddev_yaw_bias_c double Standard deviation of process noise in time differentiation expression of yaw_bias, noise for d_yaw_bias = 0 0.001

Note: the process noise for positions x and y is calculated automatically from the nonlinear dynamics.

"},{"location":"localization/ekf_localizer/#how-to-tune-ekf-parameters","title":"How to tune EKF parameters","text":""},{"location":"localization/ekf_localizer/#0-preliminaries","title":"0. Preliminaries","text":""},{"location":"localization/ekf_localizer/#1-tune-sensor-parameters","title":"1. Tune sensor parameters","text":"

Set the standard deviation for each sensor. The pose_measure_uncertainty_time is for the uncertainty of the header timestamp data. You can also tune the number of smoothing steps for each observed sensor input via *_smoothing_steps. Increasing the number will improve the smoothness of the estimation, but may have an adverse effect on the estimation performance.

"},{"location":"localization/ekf_localizer/#2-tune-process-model-parameters","title":"2. Tune process model parameters","text":""},{"location":"localization/ekf_localizer/#kalman-filter-model","title":"Kalman Filter Model","text":""},{"location":"localization/ekf_localizer/#kinematics-model-in-update-function","title":"kinematics model in update function","text":"

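A sketch of the update model, assuming the state vector (x, y, \theta, b, v_x, \omega)^\top:

\begin{align} x_{k+1} &= x_k + v_{x,k} \cos(\theta_k + b_k)\, dt \\ y_{k+1} &= y_k + v_{x,k} \sin(\theta_k + b_k)\, dt \\ \theta_{k+1} &= \theta_k + \omega_k\, dt \\ b_{k+1} &= b_k \\ v_{x,k+1} &= v_{x,k} \\ \omega_{k+1} &= \omega_k \end{align}
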
where b_k is the yaw bias.

"},{"location":"localization/ekf_localizer/#time-delay-model","title":"time delay model","text":"

The measurement time delay is handled by an augmented state [1] (See, Section 7.3 FIXED-LAG SMOOTHING).

Note that, although the state dimension gets larger, the computational complexity does not significantly change, since the analytical expansion can be applied based on the specific structure of the augmented states.

"},{"location":"localization/ekf_localizer/#test-result-with-autoware-ndt","title":"Test Result with Autoware NDT","text":""},{"location":"localization/ekf_localizer/#known-issues","title":"Known issues","text":""},{"location":"localization/ekf_localizer/#reference","title":"reference","text":"

[1] Anderson, B. D. O., & Moore, J. B. (1979). Optimal filtering. Englewood Cliffs, NJ: Prentice-Hall.

"},{"location":"localization/gyro_odometer/","title":"gyro_odometer","text":""},{"location":"localization/gyro_odometer/#purpose","title":"Purpose","text":"

gyro_odometer is a package that estimates twist by combining IMU angular velocity and vehicle speed.
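
Conceptually, the output twist takes its linear part from the vehicle speed and its angular part from the IMU gyroscope; a minimal sketch, assuming the IMU has already been transformed into the vehicle's base frame:

struct Twist\n{\n  double linear_x;   // longitudinal velocity [m/s]\n  double angular_x;  // roll rate [rad/s]\n  double angular_y;  // pitch rate [rad/s]\n  double angular_z;  // yaw rate [rad/s]\n};\n\n// Combine the vehicle longitudinal speed with the IMU angular rates.\nTwist combine(const double vehicle_speed, const double gyro_x, const double gyro_y, const double gyro_z)\n{\n  return Twist{vehicle_speed, gyro_x, gyro_y, gyro_z};\n}\n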

"},{"location":"localization/gyro_odometer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/gyro_odometer/#input","title":"Input","text":"Name Type Description vehicle/twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped twist with covariance from vehicle imu sensor_msgs::msg::Imu imu from sensor"},{"location":"localization/gyro_odometer/#output","title":"Output","text":"Name Type Description twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped estimated twist with covariance"},{"location":"localization/gyro_odometer/#parameters","title":"Parameters","text":"Parameter Type Description output_frame String output's frame id message_timeout_sec Double delay tolerance time for message"},{"location":"localization/gyro_odometer/#assumptions-known-limits","title":"Assumptions / Known limits","text":" "},{"location":"localization/initial_pose_button_panel/","title":"initial_pose_button_panel","text":""},{"location":"localization/initial_pose_button_panel/#role","title":"Role","text":"

initial_pose_button_panel is a package that sends a request to the localization module to calculate the current ego pose.

It is implemented as an RViz plugin; pushing the button in RViz starts the calculation of the current ego pose. The button appears at the bottom right of RViz.

"},{"location":"localization/initial_pose_button_panel/#input-output","title":"Input / Output","text":""},{"location":"localization/initial_pose_button_panel/#input-topics","title":"Input topics","text":"Name Type Description /sensing/gnss/pose_with_covariance (default) geometry_msgs::msg::PoseWithCovarianceStamped initial pose with covariance to calculate the current ego pose"},{"location":"localization/localization_error_monitor/","title":"localization_error_monitor","text":""},{"location":"localization/localization_error_monitor/#purpose","title":"Purpose","text":"

localization_error_monitor is a package for diagnosing localization errors by monitoring the uncertainty of the localization results. The package monitors the following two values:
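
Both monitored values can be derived from the x-y block of the pose covariance; a minimal Eigen sketch for the long radius, where scale corresponds to the scale parameter below:

#include <Eigen/Dense>\n#include <cmath>\n\n// Long radius of the confidence ellipse derived from the 2x2 x-y covariance block.\ndouble ellipse_long_radius(const Eigen::Matrix2d & xy_covariance, const double scale)\n{\n  Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> solver(xy_covariance);\n  return scale * std::sqrt(solver.eigenvalues().maxCoeff());  // scaled largest standard deviation\n}\n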

"},{"location":"localization/localization_error_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/localization_error_monitor/#input","title":"Input","text":"Name Type Description input/pose_with_cov geometry_msgs::msg::PoseWithCovarianceStamped localization result"},{"location":"localization/localization_error_monitor/#output","title":"Output","text":"Name Type Description debug/ellipse_marker visualization_msgs::msg::Marker ellipse marker diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics outputs"},{"location":"localization/localization_error_monitor/#parameters","title":"Parameters","text":"Name Type Description scale double scale factor for monitored values (default: 3.0) error_ellipse_size double error threshold for long radius of confidence ellipse [m] (default: 1.0) warn_ellipse_size double warning threshold for long radius of confidence ellipse [m] (default: 0.8) error_ellipse_size_lateral_direction double error threshold for size of confidence ellipse along lateral direction [m] (default: 0.3) warn_ellipse_size_lateral_direction double warning threshold for size of confidence ellipse along lateral direction [m] (default: 0.2)"},{"location":"localization/ndt_scan_matcher/","title":"ndt_scan_matcher","text":""},{"location":"localization/ndt_scan_matcher/#purpose","title":"Purpose","text":"

ndt_scan_matcher is a package for position estimation using the NDT scan matching method.

There are two main functions in this package:

One optional function is regularization. Please see the regularization section below for details. It is disabled by default.

"},{"location":"localization/ndt_scan_matcher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/ndt_scan_matcher/#input","title":"Input","text":"Name Type Description ekf_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped initial pose pointcloud_map sensor_msgs::msg::PointCloud2 map pointcloud points_raw sensor_msgs::msg::PointCloud2 sensor pointcloud sensing/gnss/pose_with_covariance sensor_msgs::msg::PoseWithCovarianceStamped base position for regularization term

sensing/gnss/pose_with_covariance is required only when regularization is enabled.

"},{"location":"localization/ndt_scan_matcher/#output","title":"Output","text":"Name Type Description ndt_pose geometry_msgs::msg::PoseStamped estimated pose ndt_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped estimated pose with covariance /diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics points_aligned sensor_msgs::msg::PointCloud2 [debug topic] pointcloud aligned by scan matching points_aligned_no_ground sensor_msgs::msg::PointCloud2 [debug topic] de-grounded pointcloud aligned by scan matching initial_pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped [debug topic] initial pose used in scan matching exe_time_ms tier4_debug_msgs::msg::Float32Stamped [debug topic] execution time for scan matching [ms] transform_probability tier4_debug_msgs::msg::Float32Stamped [debug topic] score of scan matching no_ground_transform_probability tier4_debug_msgs::msg::Float32Stamped [debug topic] score of scan matching based on de-grounded LiDAR scan iteration_num tier4_debug_msgs::msg::Int32Stamped [debug topic] number of scan matching iterations initial_to_result_distance tier4_debug_msgs::msg::Float32Stamped [debug topic] distance difference between the initial point and the convergence point [m] initial_to_result_distance_old tier4_debug_msgs::msg::Float32Stamped [debug topic] distance difference between the older of the two initial points used in linear interpolation and the convergence point [m] initial_to_result_distance_new tier4_debug_msgs::msg::Float32Stamped [debug topic] distance difference between the newer of the two initial points used in linear interpolation and the convergence point [m] ndt_marker visualization_msgs::msg::MarkerArray [debug topic] markers for debugging monte_carlo_initial_pose_marker visualization_msgs::msg::MarkerArray [debug topic] particles used in initial position estimation"},{"location":"localization/ndt_scan_matcher/#service","title":"Service","text":"Name Type Description ndt_align_srv autoware_localization_srvs::srv::PoseWithCovarianceStamped service to estimate initial pose"},{"location":"localization/ndt_scan_matcher/#parameters","title":"Parameters","text":""},{"location":"localization/ndt_scan_matcher/#core-parameters","title":"Core Parameters","text":"Name Type Description base_frame string Vehicle reference frame input_sensor_points_queue_size int Subscriber queue size trans_epsilon double The maximum difference between two consecutive transformations in order to consider convergence step_size double The newton line search maximum step length resolution double The ND voxel grid resolution [m] max_iterations int The number of iterations required to calculate alignment converged_param_type int The type of indicators for scan matching score (0: TP, 1: NVTL) converged_param_transform_probability double Threshold for deciding whether to trust the estimation result num_threads int Number of threads used for parallel computing

(TP: Transform Probability, NVTL: Nearest Voxel Transform Probability)

"},{"location":"localization/ndt_scan_matcher/#regularization","title":"Regularization","text":""},{"location":"localization/ndt_scan_matcher/#abstract","title":"Abstract","text":"

This is a function that adds the regularization term to the NDT optimization problem as follows.

\\begin{align} \\min_{\\mathbf{R},\\mathbf{t}} \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) +\\mathrm{scale\\ factor}\\cdot \\left| \\mathbf{R}^\\top (\\mathbf{t_{base}-\\mathbf{t}}) \\cdot \\begin{pmatrix} 1\\\\ 0\\\\ 0 \\end{pmatrix} \\right|^2 \\end{align}

, where t_base is the base position measured by GNSS or other means, and NDT(R,t) stands for the pure NDT cost function. The regularization term shifts the optimal solution toward the base position in the longitudinal direction of the vehicle. Only errors along the longitudinal direction with respect to the base position are considered; errors along the Z-axis and the lateral axis are not considered.

Although the regularization term has rotation as a parameter, the gradient and Hessian associated with it are not computed, in order to stabilize the optimization. Specifically, the gradients are computed as follows.

\\begin{align} &g_x=\\nabla_x \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + 2 \\mathrm{scale\\ factor} \\cos\\theta_z\\cdot e_{\\mathrm{longitudinal}} \\\\ &g_y=\\nabla_y \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + 2 \\mathrm{scale\\ factor} \\sin\\theta_z\\cdot e_{\\mathrm{longitudinal}} \\\\ &g_z=\\nabla_z \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) \\\\ &g_\\mathbf{R}=\\nabla_\\mathbf{R} \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) \\end{align}

Regularization is disabled by default. If you wish to use it, please edit the following parameters to enable it.

"},{"location":"localization/ndt_scan_matcher/#where-is-regularization-available","title":"Where is regularization available","text":"

This feature is effective on feature-less roads where GNSS is available, such as

By remapping the base position topic to something other than GNSS, as described below, the feature can also be used outside of these environments.

"},{"location":"localization/ndt_scan_matcher/#using-other-base-position","title":"Using other base position","text":"

Other than GNSS, you can provide other global position topics obtained from, for example, magnetic markers or visual markers, if they are available in your environment. (Currently, Autoware does not provide a node that publishes such a pose.) To use your topic for regularization, you need to remap input_regularization_pose_topic to your topic in ndt_scan_matcher.launch.xml. By default, it is remapped to /sensing/gnss/pose_with_covariance.

"},{"location":"localization/ndt_scan_matcher/#limitations","title":"Limitations","text":"

Since this function determines the base position by linear interpolation from the recently subscribed poses, topics that are published at a low frequency relative to the driving speed cannot be used. Inappropriate linear interpolation may result in bad optimization results.

When using GNSS for the base position, regularization can have negative effects in tunnels, indoors, and near skyscrapers. This is because, if the base position is far off from the true value, NDT scan matching may converge to an inappropriate optimal position.

"},{"location":"localization/ndt_scan_matcher/#parameters_1","title":"Parameters","text":"Name Type Description regularization_enabled bool Flag to add regularization term to NDT optimization (FALSE by default) regularization_scale_factor double Coefficient of the regularization term.

Regularization is disabled by default because GNSS is not always accurate enough to serve as an appropriate base position in all scenes.

If the scale_factor is too large, the NDT will be drawn to the base position and scan matching may fail. Conversely, if it is too small, the regularization benefit will be lost.

Note that setting scale_factor to 0 is equivalent to disabling regularization.

"},{"location":"localization/ndt_scan_matcher/#example","title":"Example","text":"

The following figures show tested maps.

The following figures show the trajectories estimated on the feature-less map with standard NDT and regularization-enabled NDT, respectively. The color of the trajectory indicates the error (meter) from the reference trajectory, which is computed with the feature-rich map.

"},{"location":"localization/ndt_scan_matcher/#dynamic-map-loading","title":"Dynamic map loading","text":"

Autoware supports a dynamic map loading feature for ndt_scan_matcher. Using this feature, NDT dynamically requests the surrounding pointcloud map from pointcloud_map_loader, and then receives and preprocesses the map in an online fashion.

Using the feature, ndt_scan_matcher can theoretically handle maps of any size in terms of memory usage. (Note that a limitation may still exist due to other factors, e.g. floating-point error.)

"},{"location":"localization/ndt_scan_matcher/#additional-interfaces","title":"Additional interfaces","text":""},{"location":"localization/ndt_scan_matcher/#additional-inputs","title":"Additional inputs","text":"Name Type Description input_ekf_odom nav_msgs::msg::Odometry Vehicle localization results (used for map update decision)"},{"location":"localization/ndt_scan_matcher/#additional-outputs","title":"Additional outputs","text":"Name Type Description debug/loaded_pointcloud_map sensor_msgs::msg::PointCloud2 pointcloud maps used for localization (for debug)"},{"location":"localization/ndt_scan_matcher/#additional-client","title":"Additional client","text":"Name Type Description client_map_loader autoware_map_msgs::srv::GetDifferentialPointCloudMap map loading client"},{"location":"localization/ndt_scan_matcher/#parameters_2","title":"Parameters","text":"Name Type Description use_dynamic_map_loading bool Flag to enable dynamic map loading feature for NDT (TRUE by default) dynamic_map_loading_update_distance double Distance traveled to load new map(s) dynamic_map_loading_map_radius double Map loading radius for every update lidar_radius double LiDAR radius used for localization (only used for diagnosis)"},{"location":"localization/ndt_scan_matcher/#enabling-the-dynamic-map-loading-feature","title":"Enabling the dynamic map loading feature","text":"

To use dynamic map loading feature for ndt_scan_matcher, you also need to appropriately configure some other settings outside of this node. Follow the next two instructions.

  1. enable dynamic map loading interface in pointcloud_map_loader (by setting enable_differential_load to true in the package)
  2. split the PCD files into grids (recommended size: 20[m] x 20[m])

Note that dynamic map loading may FAIL if the map is split into two or more large maps (e.g. 1000[m] x 1000[m] each). Please provide either of

Here is a split PCD map for sample-map-rosbag from Autoware tutorial: sample-map-rosbag_split.zip

PCD files use_dynamic_map_loading enable_differential_load How NDT loads map(s) single file true true at once (standard) single file true false does NOT work single file false true/false at once (standard) split true true dynamically split true false does NOT work split false true/false at once (standard)"},{"location":"localization/ndt_scan_matcher/#scan-matching-score-based-on-de-grounded-lidar-scan","title":"Scan matching score based on de-grounded LiDAR scan","text":""},{"location":"localization/ndt_scan_matcher/#abstract_1","title":"Abstract","text":"

This function estimates the scan matching score using a de-grounded LiDAR scan. The score can more accurately reflect the current localization performance. See the related issue.

"},{"location":"localization/ndt_scan_matcher/#parameters_3","title":"Parameters","text":"Name Type Description estimate_scores_for_degrounded_scan bool Flag for using scan matching score based on de-grounded LiDAR scan (FALSE by default) z_margin_for_ground_removal double Z-value margin for removal ground points"},{"location":"localization/pose2twist/","title":"pose2twist","text":""},{"location":"localization/pose2twist/#purpose","title":"Purpose","text":"

The pose2twist node calculates velocity from the input pose history. In addition to the computed twist, this node outputs the linear-x and angular-z components as float messages to simplify debugging.

The twist.linear.x is calculated as sqrt(dx * dx + dy * dy + dz * dz) / dt, and the values in the y and z fields are zero. The twist.angular is calculated as d_roll / dt, d_pitch / dt and d_yaw / dt for each field.
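
A minimal sketch of these formulas, assuming the translation and orientation deltas have already been computed from two consecutive poses:

#include <cmath>\n\nstruct Twist\n{\n  double linear_x;\n  double angular_x;\n  double angular_y;\n  double angular_z;\n};\n\n// dx, dy, dz: translation deltas; d_roll, d_pitch, d_yaw: orientation deltas; dt: time delta [s].\nTwist compute_twist(const double dx, const double dy, const double dz, const double d_roll, const double d_pitch, const double d_yaw, const double dt)\n{\n  Twist t;\n  t.linear_x = std::sqrt(dx * dx + dy * dy + dz * dz) / dt;  // linear y and z stay zero\n  t.angular_x = d_roll / dt;\n  t.angular_y = d_pitch / dt;\n  t.angular_z = d_yaw / dt;\n  return t;\n}\n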

"},{"location":"localization/pose2twist/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/pose2twist/#input","title":"Input","text":"Name Type Description pose geometry_msgs::msg::PoseStamped pose source to used for the velocity calculation."},{"location":"localization/pose2twist/#output","title":"Output","text":"Name Type Description twist geometry_msgs::msg::TwistStamped twist calculated from the input pose history. linear_x tier4_debug_msgs::msg::Float32Stamped linear-x field of the output twist. angular_z tier4_debug_msgs::msg::Float32Stamped angular-z field of the output twist."},{"location":"localization/pose2twist/#parameters","title":"Parameters","text":"

none.

"},{"location":"localization/pose2twist/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

none.

"},{"location":"localization/pose_initializer/","title":"pose_initializer","text":""},{"location":"localization/pose_initializer/#purpose","title":"Purpose","text":"

The pose_initializer is the package to send an initial pose to ekf_localizer. It receives a roughly estimated initial pose from GNSS or the user, passes the pose to ndt_scan_matcher, and gets a calculated ego pose from ndt_scan_matcher via a service call. Finally, it publishes the initial pose to ekf_localizer. This node depends on the map height fitter library. See here for more details.

"},{"location":"localization/pose_initializer/#interfaces","title":"Interfaces","text":""},{"location":"localization/pose_initializer/#parameters","title":"Parameters","text":"Name Type Description ekf_enabled bool If true, EKF localizar is activated. ndt_enabled bool If true, the pose will be estimated by NDT scan matcher, otherwise it is passed through. stop_check_enabled bool If true, initialization is accepted only when the vehicle is stopped. stop_check_duration bool The duration used for the stop check above. gnss_enabled bool If true, use the GNSS pose when no pose is specified. gnss_pose_timeout bool The duration that the GNSS pose is valid."},{"location":"localization/pose_initializer/#services","title":"Services","text":"Name Type Description /localization/initialize autoware_adapi_v1_msgs::srv::InitializeLocalization initial pose from api"},{"location":"localization/pose_initializer/#clients","title":"Clients","text":"Name Type Description /localization/pose_estimator/ndt_align_srv tier4_localization_msgs::srv::PoseWithCovarianceStamped pose estimation service"},{"location":"localization/pose_initializer/#subscriptions","title":"Subscriptions","text":"Name Type Description /sensing/gnss/pose_with_covariance geometry_msgs::msg::PoseWithCovarianceStamped pose from gnss /sensing/vehicle_velocity_converter/twist_with_covariance geometry_msgs::msg::TwistStamped twist for stop check"},{"location":"localization/pose_initializer/#publications","title":"Publications","text":"Name Type Description /localization/initialization_state autoware_adapi_v1_msgs::msg::LocalizationInitializationState pose initialization state /initialpose3d geometry_msgs::msg::PoseWithCovarianceStamped calculated initial ego pose"},{"location":"localization/pose_initializer/#connection-with-default-ad-api","title":"Connection with Default AD API","text":"

The pose_initializer is used via the default AD API. For a detailed description of the API, please refer to the description of default_ad_api.

"},{"location":"localization/stop_filter/","title":"stop_filter","text":""},{"location":"localization/stop_filter/#purpose","title":"Purpose","text":"

Before this function existed, each node used a different criterion to determine whether the vehicle was stopped, with the result that some nodes behaved as if the vehicle had stopped while others continued running in drive mode. This node aims to:

"},{"location":"localization/stop_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/stop_filter/#input","title":"Input","text":"Name Type Description input/odom nav_msgs::msg::Odometry localization odometry"},{"location":"localization/stop_filter/#output","title":"Output","text":"Name Type Description output/odom nav_msgs::msg::Odometry odometry with suppressed longitudinal and yaw twist debug/stop_flag tier4_debug_msgs::msg::BoolStamped flag to represent whether the vehicle is stopping or not"},{"location":"localization/stop_filter/#parameters","title":"Parameters","text":"Name Type Description vx_threshold double longitudinal velocity threshold to determine if the vehicle is stopping [m/s] (default: 0.01) wz_threshold double yaw velocity threshold to determine if the vehicle is stopping [rad/s] (default: 0.01)"},{"location":"localization/twist2accel/","title":"twist2accel","text":""},{"location":"localization/twist2accel/#purpose","title":"Purpose","text":"

This package is responsible for estimating acceleration using the output of ekf_localizer. It uses a lowpass filter to mitigate noise.
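
A minimal sketch of the estimation step, where gain corresponds to the accel_lowpass_gain parameter below:

// First-order IIR lowpass over the finite-difference acceleration.\ndouble update_accel(const double v_now, const double v_prev, const double dt, const double prev_accel, const double gain)\n{\n  const double raw_accel = (v_now - v_prev) / dt;       // finite difference\n  return gain * prev_accel + (1.0 - gain) * raw_accel;  // smoothed estimate\n}\n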

"},{"location":"localization/twist2accel/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/twist2accel/#input","title":"Input","text":"Name Type Description input/odom nav_msgs::msg::Odometry localization odometry input/twist geometry_msgs::msg::TwistWithCovarianceStamped twist"},{"location":"localization/twist2accel/#output","title":"Output","text":"Name Type Description output/accel geometry_msgs::msg::AccelWithCovarianceStamped estimated acceleration"},{"location":"localization/twist2accel/#parameters","title":"Parameters","text":"Name Type Description use_odom bool use odometry if true, else use twist input (default: true) accel_lowpass_gain double lowpass gain for lowpass filter in estimating acceleration (default: 0.9)"},{"location":"localization/twist2accel/#future-work","title":"Future work","text":"

Future work includes integrating acceleration into the EKF state.

"},{"location":"map/map_height_fitter/","title":"map_height_fitter","text":"

This library fits the given point to the ground of the point cloud map. The map loading operation is switched by the parameter enable_partial_load of the node specified by map_loader_name. The node using this library must use a multi-thread executor.

Interface Local Name Description Parameter map_loader_name The point cloud map loader name. Subscription ~/pointcloud_map The topic name to load the whole map Client ~/partial_map_load The service name to load the partial map"},{"location":"map/map_loader/","title":"map_loader package","text":"

This package provides features for loading various maps.

"},{"location":"map/map_loader/#pointcloud_map_loader","title":"pointcloud_map_loader","text":""},{"location":"map/map_loader/#feature","title":"Feature","text":"

pointcloud_map_loader provides pointcloud maps to the other Autoware nodes in various configurations. Currently, it supports the following two types:

"},{"location":"map/map_loader/#publish-raw-pointcloud-map-ros-2-topic","title":"Publish raw pointcloud map (ROS 2 topic)","text":"

The node publishes the raw pointcloud map loaded from the .pcd file(s).

"},{"location":"map/map_loader/#publish-downsampled-pointcloud-map-ros-2-topic","title":"Publish downsampled pointcloud map (ROS 2 topic)","text":"

The node publishes the downsampled pointcloud map loaded from the .pcd file(s). You can specify the downsample resolution by changing the leaf_size parameter.
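
Conceptually, this is a voxel-grid downsampling with leaf_size as the voxel edge length; a minimal sketch using PCL:

#include <pcl/filters/voxel_grid.h>\n#include <pcl/point_cloud.h>\n#include <pcl/point_types.h>\n\n// Downsample a pointcloud map with a voxel grid of the given leaf size.\npcl::PointCloud<pcl::PointXYZ>::Ptr downsample(const pcl::PointCloud<pcl::PointXYZ>::Ptr & map, const float leaf_size)\n{\n  pcl::VoxelGrid<pcl::PointXYZ> filter;\n  filter.setInputCloud(map);\n  filter.setLeafSize(leaf_size, leaf_size, leaf_size);  // 3.0 m by default\n  pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled(new pcl::PointCloud<pcl::PointXYZ>);\n  filter.filter(*downsampled);\n  return downsampled;\n}\n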

"},{"location":"map/map_loader/#send-partial-pointcloud-map-ros-2-service","title":"Send partial pointcloud map (ROS 2 service)","text":"

Here, we assume that the pointcloud maps are divided into grids.

Given a query from a client node, the node sends a set of pointcloud maps that overlap with the queried area. Please see the description of GetPartialPointCloudMap.srv for details.

"},{"location":"map/map_loader/#send-differential-pointcloud-map-ros-2-service","title":"Send differential pointcloud map (ROS 2 service)","text":"

Here, we assume that the pointcloud maps are divided into grids.

Given a query and set of map IDs, the node sends a set of pointcloud maps that overlap with the queried area and are not included in the set of map IDs. Please see the description of GetDifferentialPointCloudMap.srv for details.
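
The selection logic amounts to an area-overlap test combined with an ID exclusion set; a minimal sketch with hypothetical grid metadata:

#include <string>\n#include <unordered_set>\n#include <vector>\n\nstruct Grid\n{\n  std::string id;\n  double min_x, min_y, max_x, max_y;  // axis-aligned bounds of the grid cell\n};\n\n// Return grids that overlap the queried area and whose IDs the client does not already hold.\nstd::vector<Grid> select_differential(const std::vector<Grid> & grids, const std::unordered_set<std::string> & already_loaded, const double qmin_x, const double qmin_y, const double qmax_x, const double qmax_y)\n{\n  std::vector<Grid> out;\n  for (const auto & g : grids) {\n    const bool overlaps = g.min_x <= qmax_x && qmin_x <= g.max_x && g.min_y <= qmax_y && qmin_y <= g.max_y;\n    if (overlaps && already_loaded.count(g.id) == 0) {\n      out.push_back(g);\n    }\n  }\n  return out;\n}\n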

"},{"location":"map/map_loader/#parameters","title":"Parameters","text":"Name Type Description Default value enable_whole_load bool A flag to enable raw pointcloud map publishing true enable_downsampled_whole_load bool A flag to enable downsampled pointcloud map publishing false enable_partial_load bool A flag to enable partial pointcloud map server false enable_differential_load bool A flag to enable differential pointcloud map server false leaf_size float Downsampling leaf size (only used when enable_downsampled_whole_load is set true) 3.0"},{"location":"map/map_loader/#interfaces","title":"Interfaces","text":""},{"location":"map/map_loader/#lanelet2_map_loader","title":"lanelet2_map_loader","text":""},{"location":"map/map_loader/#feature_1","title":"Feature","text":"

lanelet2_map_loader loads a Lanelet2 file and publishes the map data as an autoware_auto_mapping_msgs/HADMapBin message. The node projects lat/lon coordinates into MGRS coordinates.

"},{"location":"map/map_loader/#how-to-run","title":"How to run","text":"

ros2 run map_loader lanelet2_map_loader --ros-args -p lanelet2_map_path:=path/to/map.osm

"},{"location":"map/map_loader/#published-topics","title":"Published Topics","text":""},{"location":"map/map_loader/#lanelet2_map_visualization","title":"lanelet2_map_visualization","text":""},{"location":"map/map_loader/#feature_2","title":"Feature","text":"

lanelet2_map_visualization visualizes autoware_auto_mapping_msgs/HADMapBin messages as visualization_msgs/MarkerArray.

"},{"location":"map/map_loader/#how-to-run_1","title":"How to Run","text":"

ros2 run map_loader lanelet2_map_visualization

"},{"location":"map/map_loader/#subscribed-topics","title":"Subscribed Topics","text":""},{"location":"map/map_loader/#published-topics_1","title":"Published Topics","text":""},{"location":"map/map_tf_generator/Readme/","title":"map_tf_generator","text":""},{"location":"map/map_tf_generator/Readme/#purpose","title":"Purpose","text":"

The nodes in this package broadcast the viewer frame for visualization of the map in RViz.

Note that no module requires the viewer frame; it is used only for visualization.

The following are the supported methods to calculate the position of the viewer frame:

"},{"location":"map/map_tf_generator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"map/map_tf_generator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"map/map_tf_generator/Readme/#input","title":"Input","text":""},{"location":"map/map_tf_generator/Readme/#pcd_map_tf_generator","title":"pcd_map_tf_generator","text":"Name Type Description /map/pointcloud_map sensor_msgs::msg::PointCloud2 Subscribe pointcloud map to calculate position of viewer frames"},{"location":"map/map_tf_generator/Readme/#vector_map_tf_generator","title":"vector_map_tf_generator","text":"Name Type Description /map/vector_map autoware_auto_mapping_msgs::msg::HADMapBin Subscribe vector map to calculate position of viewer frames"},{"location":"map/map_tf_generator/Readme/#output","title":"Output","text":"Name Type Description /tf_static tf2_msgs/msg/TFMessage Broadcast viewer frames"},{"location":"map/map_tf_generator/Readme/#parameters","title":"Parameters","text":""},{"location":"map/map_tf_generator/Readme/#node-parameters","title":"Node Parameters","text":"

None

"},{"location":"map/map_tf_generator/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation viewer_frame string viewer Name of viewer frame map_frame string map The parent frame name of viewer frame"},{"location":"map/map_tf_generator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"perception/compare_map_segmentation/","title":"compare_map_segmentation","text":""},{"location":"perception/compare_map_segmentation/#purpose","title":"Purpose","text":"

The compare_map_segmentation is a node that filters the ground points from the input pointcloud by using map info (e.g. pcd, elevation map).

"},{"location":"perception/compare_map_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/compare_map_segmentation/#compare-elevation-map-filter","title":"Compare Elevation Map Filter","text":"

Compare the z of the input points with the value of elevation_map. The height difference is calculated by the binary integration of neighboring cells. Remove points whose height difference is below the height_diff_thresh.
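
A minimal sketch of the per-point test, assuming a lookup that returns the ground elevation of the cell containing each point:

#include <functional>\n#include <vector>\n\nstruct Point\n{\n  double x, y, z;\n};\n\n// Keep a point only when it is at least height_diff_thresh above the ground elevation.\nstd::vector<Point> remove_ground(const std::vector<Point> & points, const std::function<double(double, double)> & elevation_at, const double height_diff_thresh)\n{\n  std::vector<Point> kept;\n  for (const auto & p : points) {\n    if (p.z - elevation_at(p.x, p.y) >= height_diff_thresh) {\n      kept.push_back(p);\n    }\n  }\n  return kept;\n}\n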

"},{"location":"perception/compare_map_segmentation/#distance-based-compare-map-filter","title":"Distance Based Compare Map Filter","text":"

WIP

"},{"location":"perception/compare_map_segmentation/#voxel-based-approximate-compare-map-filter","title":"Voxel Based Approximate Compare Map Filter","text":"

WIP

"},{"location":"perception/compare_map_segmentation/#voxel-based-compare-map-filter","title":"Voxel Based Compare Map Filter","text":"

WIP

"},{"location":"perception/compare_map_segmentation/#voxel-distance-based-compare-map-filter","title":"Voxel Distance based Compare Map Filter","text":"

WIP

"},{"location":"perception/compare_map_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/compare_map_segmentation/#compare-elevation-map-filter_1","title":"Compare Elevation Map Filter","text":""},{"location":"perception/compare_map_segmentation/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/elevation_map grid_map::msg::GridMap elevation map"},{"location":"perception/compare_map_segmentation/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"perception/compare_map_segmentation/#other-filters","title":"Other Filters","text":""},{"location":"perception/compare_map_segmentation/#input_1","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/map grid_map::msg::GridMap map"},{"location":"perception/compare_map_segmentation/#output_1","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"perception/compare_map_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/compare_map_segmentation/#core-parameters","title":"Core Parameters","text":"Name Type Description Default value map_layer_name string elevation map layer name elevation map_frame float frame_id of the map that is temporarily used before elevation_map is subscribed map height_diff_thresh float Remove points whose height difference is below this value [m] 0.15"},{"location":"perception/compare_map_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/compare_map_segmentation/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/compare_map_segmentation/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/compare_map_segmentation/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/compare_map_segmentation/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/crosswalk_traffic_light_estimator/","title":"crosswalk_traffic_light_estimator","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#purpose","title":"Purpose","text":"

crosswalk_traffic_light_estimator is a module that estimates pedestrian traffic signals from HDMap and detected vehicle traffic signals.

"},{"location":"perception/crosswalk_traffic_light_estimator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#input","title":"Input","text":"Name Type Description ~/input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin vector map ~/input/route autoware_planning_msgs::msg::LaneletRoute route ~/input/classified/traffic_signals autoware_auto_perception_msgs::msg::TrafficSignalArray classified signals"},{"location":"perception/crosswalk_traffic_light_estimator/#output","title":"Output","text":"Name Type Description ~/output/traffic_signals autoware_auto_perception_msgs::msg::TrafficSignalArray output that contains estimated pedestrian traffic signals"},{"location":"perception/crosswalk_traffic_light_estimator/#parameters","title":"Parameters","text":"Name Type Description Default value use_last_detect_color bool If this parameter is true, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is false, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) true"},{"location":"perception/crosswalk_traffic_light_estimator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

If traffic between pedestrians and vehicles is controlled by traffic signals, the crosswalk traffic signal may be RED in order to prevent pedestrians from crossing when the following conditions are satisfied.
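The following is a minimal C++ sketch of this rule combined with the use_last_detect_color behavior described in the Parameters section above; the Color enum and function name are hypothetical, not the module's actual API.

#include <optional>

enum class Color { RED, AMBER, GREEN, UNKNOWN };

// Hypothetical condensed form of the estimation rule; not the actual module API.
Color estimatePedestrianSignal(Color current_vehicle_signal,
                               std::optional<Color> last_vehicle_signal,
                               bool use_last_detect_color)
{
  // Vehicle signal currently GREEN/AMBER -> pedestrians are assumed to be held at RED.
  if (current_vehicle_signal == Color::GREEN || current_vehicle_signal == Color::AMBER) {
    return Color::RED;
  }
  // Optionally keep estimating RED when a GREEN/AMBER detection has just turned UNKNOWN.
  if (use_last_detect_color && current_vehicle_signal == Color::UNKNOWN &&
      last_vehicle_signal.has_value() &&
      (*last_vehicle_signal == Color::GREEN || *last_vehicle_signal == Color::AMBER)) {
    return Color::RED;
  }
  return Color::UNKNOWN;
}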

"},{"location":"perception/crosswalk_traffic_light_estimator/#situation1","title":"Situation1","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#situation2","title":"Situation2","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"perception/detected_object_feature_remover/","title":"detected_object_feature_remover","text":""},{"location":"perception/detected_object_feature_remover/#purpose","title":"Purpose","text":"

The detected_object_feature_remover is a package that converts the topic type from DetectedObjectWithFeatureArray to DetectedObjects.

"},{"location":"perception/detected_object_feature_remover/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/detected_object_feature_remover/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_feature_remover/#input","title":"Input","text":"Name Type Description ~/input tier4_perception_msgs::msg::DetectedObjectWithFeatureArray detected objects with feature field"},{"location":"perception/detected_object_feature_remover/#output","title":"Output","text":"Name Type Description ~/output autoware_auto_perception_msgs::msg::DetectedObjects detected objects"},{"location":"perception/detected_object_feature_remover/#parameters","title":"Parameters","text":"

None

"},{"location":"perception/detected_object_feature_remover/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/detected_object_validation/","title":"detected_object_validation","text":""},{"location":"perception/detected_object_validation/#purpose","title":"Purpose","text":"

The purpose of this package is to eliminate obvious false positives of DetectedObjects.

"},{"location":"perception/detected_object_validation/#referencesexternal-links","title":"References/External links","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/","title":"object_lanelet_filter","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#purpose","title":"Purpose","text":"

The object_lanelet_filter is a node that filters detected objects using a vector map. Only objects inside the vector map will be published.

"},{"location":"perception/detected_object_validation/object-lanelet-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#input","title":"Input","text":"Name Type Description input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin vector map input/object autoware_auto_perception_msgs::msg::DetectedObjects input detected objects"},{"location":"perception/detected_object_validation/object-lanelet-filter/#output","title":"Output","text":"Name Type Description output/object autoware_auto_perception_msgs::msg::DetectedObjects filtered detected objects"},{"location":"perception/detected_object_validation/object-lanelet-filter/#parameters","title":"Parameters","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN bool false If true, unknown objects are filtered. filter_target_label.CAR bool false If true, car objects are filtered. filter_target_label.TRUCK bool false If true, truck objects are filtered. filter_target_label.BUS bool false If true, bus objects are filtered. filter_target_label.TRAILER bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN bool false If true, pedestrian objects are filtered."},{"location":"perception/detected_object_validation/object-lanelet-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Filtering is performed based on the shape polygon of the object.

"},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/detected_object_validation/object-position-filter/","title":"object_position_filter","text":""},{"location":"perception/detected_object_validation/object-position-filter/#purpose","title":"Purpose","text":"

The object_position_filter is a node that filters detected objects based on their x and y positions. Only objects inside the x, y bounds will be published.

"},{"location":"perception/detected_object_validation/object-position-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/detected_object_validation/object-position-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_validation/object-position-filter/#input","title":"Input","text":"Name Type Description input/object autoware_auto_perception_msgs::msg::DetectedObjects input detected objects"},{"location":"perception/detected_object_validation/object-position-filter/#output","title":"Output","text":"Name Type Description output/object autoware_auto_perception_msgs::msg::DetectedObjects filtered detected objects"},{"location":"perception/detected_object_validation/object-position-filter/#parameters","title":"Parameters","text":""},{"location":"perception/detected_object_validation/object-position-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN bool false If true, unknown objects are filtered. filter_target_label.CAR bool false If true, car objects are filtered. filter_target_label.TRUCK bool false If true, truck objects are filtered. filter_target_label.BUS bool false If true, bus objects are filtered. filter_target_label.TRAILER bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN bool false If true, pedestrian objects are filtered. upper_bound_x float 100.00 Bound for filtering. Only used if filter_by_xy_position is true lower_bound_x float 0.00 Bound for filtering. Only used if filter_by_xy_position is true upper_bound_y float 50.00 Bound for filtering. Only used if filter_by_xy_position is true lower_bound_y float -50.00 Bound for filtering. Only used if filter_by_xy_position is true"},{"location":"perception/detected_object_validation/object-position-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Filtering is performed based on the center position of the object.

"},{"location":"perception/detected_object_validation/object-position-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/detected_object_validation/object-position-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/detected_object_validation/object-position-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/detected_object_validation/object-position-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/","title":"obstacle pointcloud based validator","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

If the number of obstacle points belonging to a DetectedObject is small, the object is considered a false positive and removed. The obstacle point cloud can be a point cloud after compare-map filtering or a ground-filtered point cloud.
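One plausible reading of the validator's parameters (see the Parameters section below) is that the required point count per object decreases with distance from base_link; the following hypothetical C++ sketch illustrates that idea, and the actual node's formula may differ.

#include <algorithm>
#include <cstddef>

// Hypothetical distance-scaled point-count check; the real formula may differ.
bool isValidObject(std::size_t points_in_object, double distance_from_baselink,
                   int min_points_num, int max_points_num,
                   double min_points_and_distance_ratio)
{
  // min_points_and_distance_ratio is the required count at 1 m; the requirement
  // shrinks with distance and is clamped to [min_points_num, max_points_num].
  const double required =
    std::clamp(min_points_and_distance_ratio / std::max(distance_from_baselink, 1e-3),
               static_cast<double>(min_points_num), static_cast<double>(max_points_num));
  return static_cast<double>(points_in_object) >= required;
}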

In the debug image above, the red DetectedObject is the validated object. The blue object is the deleted object.

"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#input","title":"Input","text":"Name Type Description ~/input/detected_objects autoware_auto_perception_msgs::msg::DetectedObjects DetectedObjects ~/input/obstacle_pointcloud sensor_msgs::msg::PointCloud2 Obstacle point cloud of dynamic objects"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_auto_perception_msgs::msg::DetectedObjects validated DetectedObjects"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#parameters","title":"Parameters","text":"Name Type Description min_points_num int The minimum number of obstacle point clouds in DetectedObjects max_points_num int The max number of obstacle point clouds in DetectedObjects min_points_and_distance_ratio float Threshold value of the number of point clouds per object when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink. enable_debugger bool Whether to create debug topics or not?"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Currently, only objects represented as BoundingBox or Cylinder are supported.

"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/","title":"occupancy grid based validator","text":""},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Compare the occupancy grid map with the DetectedObject, and if a large percentage of the object lies in freespace, delete it.

Basically, it takes an occupancy grid map as input and generates a binary image that distinguishes freespace from everything else.

A mask image is generated for each DetectedObject, and the average value (percentage of non-freespace) inside the mask is calculated. If the percentage is low, the object is deleted.
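A minimal sketch of this mean-in-mask check using OpenCV, assuming the occupancy grid has already been converted to a single-channel 8-bit image in which non-freespace cells are 255 and freespace cells are 0; the function name is illustrative.

#include <opencv2/core.hpp>

// Keep the object if the average occupancy inside its footprint mask is high enough.
bool isValidObject(const cv::Mat & occupancy_image, const cv::Mat & object_mask,
                   double mean_threshold)
{
  // Average occupancy in [0, 1] inside the mask; low values mean mostly freespace.
  const double mean = cv::mean(occupancy_image, object_mask)[0] / 255.0;
  return mean >= mean_threshold;
}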

"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#input","title":"Input","text":"Name Type Description ~/input/detected_objects autoware_auto_perception_msgs::msg::DetectedObjects DetectedObjects ~/input/occupancy_grid_map nav_msgs::msg::OccupancyGrid OccupancyGrid with no time series calculation is preferred."},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_auto_perception_msgs::msg::DetectedObjects validated DetectedObjects"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#parameters","title":"Parameters","text":"Name Type Description mean_threshold float The percentage threshold of allowed non-freespace. enable_debug bool Whether to display debug images or not?"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Currently, only vehicles represented as BoundingBox are supported.

"},{"location":"perception/detection_by_tracker/","title":"detection_by_tracker","text":""},{"location":"perception/detection_by_tracker/#purpose","title":"Purpose","text":"

This package feeds the tracked objects back to the detection module to keep detection stable and continuous.

The detection by tracker takes as input an unknown object containing a cluster of points and a tracker. The unknown object is optimized to fit the size of the tracker so that it can continue to be detected.

"},{"location":"perception/detection_by_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The detection by tracker receives an unknown object containing a point cloud and a tracker, where the unknown object is mainly shape-fitted using euclidean clustering. Shape fitting using euclidean clustering and other methods suffers from problems called under-segmentation and over-segmentation.

Adapted from [3]

Simply looking at the overlap between the unknown object and the tracker does not work. We need to take measures against under-segmentation and over-segmentation.

"},{"location":"perception/detection_by_tracker/#policy-for-dealing-with-over-segmentation","title":"Policy for dealing with over segmentation","text":"
  1. Merge the unknown objects in the tracker as a single object.
  2. Perform shape fitting using tracker information, such as angle and size, as reference.
"},{"location":"perception/detection_by_tracker/#policy-for-dealing-with-under-segmentation","title":"Policy for dealing with under segmentation","text":"
  1. Compare the tracker and unknown objects, and judge that pairs with large recall and small precision are under-segmented objects (see the sketch after this list).
  2. In order to divide the cluster of under-segmented objects, iterate over the clustering parameters to produce smaller clusters.
  3. Adjust the parameters several times and adopt the result with the highest IoU.
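A rough, hypothetical sketch of the recall/precision test from step 1 above, using 2D footprint areas; the actual implementation works on the object polygons themselves.

// Hypothetical 2D-area version of the under-segmentation test.
struct SegmentationJudge
{
  double recall_threshold;     // above this, recall counts as large
  double precision_threshold;  // below this, precision counts as small

  // intersection_area: overlap between the tracker footprint and the unknown cluster
  bool isUnderSegmented(double intersection_area, double tracker_area,
                        double cluster_area) const
  {
    const double recall = intersection_area / tracker_area;     // coverage of the tracker
    const double precision = intersection_area / cluster_area;  // purity of the cluster
    return recall > recall_threshold && precision < precision_threshold;
  }
};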
"},{"location":"perception/detection_by_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detection_by_tracker/#input","title":"Input","text":"Name Type Description ~/input/initial_objects tier4_perception_msgs::msg::DetectedObjectsWithFeature unknown objects ~/input/tracked_objects tier4_perception_msgs::msg::TrackedObjects trackers"},{"location":"perception/detection_by_tracker/#output","title":"Output","text":"Name Type Description ~/output autoware_auto_perception_msgs::msg::DetectedObjects objects"},{"location":"perception/detection_by_tracker/#parameters","title":"Parameters","text":""},{"location":"perception/detection_by_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/detection_by_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/detection_by_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/detection_by_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] M. Himmelsbach, et al. \"Tracking and classification of arbitrary objects with bottom-up/top-down detection.\" (2012).

[2] Arya Senna Abdul Rachman, Arya. \"3D-LIDAR Multi Object Tracking for Autonomous Driving: Multi-target Detection and Tracking under Urban Road Uncertainties.\" (2017).

[3] David Held, et al. \"A Probabilistic Framework for Real-time 3D Segmentation using Spatial, Temporal, and Semantic Cues.\" (2016).

"},{"location":"perception/detection_by_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/elevation_map_loader/","title":"elevation_map_loader","text":""},{"location":"perception/elevation_map_loader/#purpose","title":"Purpose","text":"

This package provides an elevation map for compare_map_segmentation.

"},{"location":"perception/elevation_map_loader/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Generate an elevation_map from the subscribed pointcloud_map and vector_map and publish it. The generated elevation_map is saved locally and loaded on subsequent runs.

The elevation value of each cell is the average z value of the points in the lowest cluster. Cells with no elevation value can be inpainted using the values of neighboring cells.
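A minimal sketch of the inpainting step using OpenCV, assuming the elevation layer has been scaled to an 8-bit grayscale image and that inpaint_radius (in meters, see the node parameters below) has been converted to pixels with the map resolution.

#include <opencv2/photo.hpp>

// Fill cells without an elevation value from the values of neighboring cells.
cv::Mat inpaintElevation(const cv::Mat & elevation_8u, const cv::Mat & empty_cell_mask,
                         double inpaint_radius_px)
{
  cv::Mat inpainted;
  // empty_cell_mask marks cells with no elevation value (non-zero = inpaint here).
  cv::inpaint(elevation_8u, empty_cell_mask, inpainted, inpaint_radius_px, cv::INPAINT_NS);
  return inpainted;
}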

"},{"location":"perception/elevation_map_loader/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/elevation_map_loader/#input","title":"Input","text":"Name Type Description input/pointcloud_map sensor_msgs::msg::PointCloud2 The point cloud map input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin (Optional) The binary data of lanelet2 map"},{"location":"perception/elevation_map_loader/#output","title":"Output","text":"Name Type Description output/elevation_map grid_map_msgs::msg::GridMap The elevation map output/elevation_map_cloud sensor_msgs::msg::PointCloud2 (Optional) The point cloud generated from the value of elevation map"},{"location":"perception/elevation_map_loader/#parameters","title":"Parameters","text":""},{"location":"perception/elevation_map_loader/#node-parameters","title":"Node parameters","text":"Name Type Description Default value map_layer_name std::string elevation_map layer name elevation param_file_path std::string GridMap parameters config path_default elevation_map_file_path std::string elevation_map file (bag2) path_default map_frame std::string map_frame when loading elevation_map file map use_inpaint bool Whether to inpaint empty cells true inpaint_radius float Radius of a circular neighborhood of each point inpainted that is considered by the algorithm [m] 0.3 use_elevation_map_cloud_publisher bool Whether to publish output/elevation_map_cloud false use_lane_filter bool Whether to filter elevation_map with vector_map false lane_margin float Value of how much to expand the range of vector_map [m] 0.5 lane_height_diff_thresh float Only point clouds in the height range of this value from vector_map are used to generate elevation_map [m] 1.0 lane_filter_voxel_size_x float Voxel size x for calculating point clouds in vector_map [m] 0.04 lane_filter_voxel_size_y float Voxel size y for calculating point clouds in vector_map [m] 0.04 lane_filter_voxel_size_z float Voxel size z for calculating point clouds in vector_map [m] 0.04"},{"location":"perception/elevation_map_loader/#gridmap-parameters","title":"GridMap parameters","text":"

The parameters are described in config/elevation_map_parameters.yaml.

"},{"location":"perception/elevation_map_loader/#general-parameters","title":"General parameters","text":"Name Type Description Default value pcl_grid_map_extraction/num_processing_threads int Number of threads for processing grid map cells. Filtering of the raw input point cloud is not parallelized. 12"},{"location":"perception/elevation_map_loader/#grid-map-parameters","title":"Grid map parameters","text":"

See: https://github.com/ANYbotics/grid_map/tree/ros2/grid_map_pcl

Resulting grid map parameters.

Name Type Description Default value pcl_grid_map_extraction/grid_map/min_num_points_per_cell int Minimum number of points in the point cloud that have to fall within any of the grid map cells. Otherwise the cell elevation will be set to NaN. 3 pcl_grid_map_extraction/grid_map/resolution float Resolution of the grid map. Width and length are computed automatically. 0.3 pcl_grid_map_extraction/grid_map/height_type int The parameter that determine the elevation of a cell 0: Smallest value among the average values of each cluster, 1: Mean value of the cluster with the most points 1 pcl_grid_map_extraction/grid_map/height_thresh float Height range from the smallest cluster (Only for height_type 1) 1.0"},{"location":"perception/elevation_map_loader/#point-cloud-pre-processing-parameters","title":"Point Cloud Pre-processing Parameters","text":""},{"location":"perception/elevation_map_loader/#rigid-body-transform-parameters","title":"Rigid body transform parameters","text":"

Rigid body transform that is applied to the point cloud before computing elevation.

Name Type Description Default value pcl_grid_map_extraction/cloud_transform/translation float Translation (xyz) that is applied to the input point cloud before computing elevation. 0.0 pcl_grid_map_extraction/cloud_transform/rotation float Rotation (intrinsic rotation, convention X-Y'-Z'') that is applied to the input point cloud before computing elevation. 0.0"},{"location":"perception/elevation_map_loader/#cluster-extraction-parameters","title":"Cluster extraction parameters","text":"

Cluster extraction is based on pcl algorithms. See https://pointclouds.org/documentation/tutorials/cluster_extraction.html for more details.

Name Type Description Default value pcl_grid_map_extraction/cluster_extraction/cluster_tolerance float Distance between points below which they will still be considered part of one cluster. 0.2 pcl_grid_map_extraction/cluster_extraction/min_num_points int Min number of points that a cluster needs to have (otherwise it will be discarded). 3 pcl_grid_map_extraction/cluster_extraction/max_num_points int Max number of points that a cluster can have (otherwise it will be discarded). 1000000"},{"location":"perception/elevation_map_loader/#outlier-removal-parameters","title":"Outlier removal parameters","text":"

See https://pointclouds.org/documentation/tutorials/statistical_outlier.html for more explanation on outlier removal.

Name Type Description Default value pcl_grid_map_extraction/outlier_removal/is_remove_outliers float Whether to perform statistical outlier removal. false pcl_grid_map_extraction/outlier_removal/mean_K float Number of neighbors to analyze for estimating statistics of a point. 10 pcl_grid_map_extraction/outlier_removal/stddev_threshold float Number of standard deviations under which points are considered to be inliers. 1.0"},{"location":"perception/elevation_map_loader/#subsampling-parameters","title":"Subsampling parameters","text":"

See https://pointclouds.org/documentation/tutorials/voxel_grid.html for more explanation on point cloud downsampling.

Name Type Description Default value pcl_grid_map_extraction/downsampling/is_downsample_cloud bool Whether to perform downsampling or not. false pcl_grid_map_extraction/downsampling/voxel_size float Voxel sizes (xyz) in meters. 0.02"},{"location":"perception/euclidean_cluster/","title":"euclidean_cluster","text":""},{"location":"perception/euclidean_cluster/#purpose","title":"Purpose","text":"

euclidean_cluster is a package for clustering points into smaller parts to classify objects.

This package has two clustering methods: euclidean_cluster and voxel_grid_based_euclidean_cluster.

"},{"location":"perception/euclidean_cluster/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/euclidean_cluster/#euclidean_cluster_1","title":"euclidean_cluster","text":"

pcl::EuclideanClusterExtraction is applied to the points. See the official document for details.
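A standalone example of the underlying PCL call; the numeric values are placeholders for the node parameters (tolerance, min_cluster_size, max_cluster_size) listed in Core Parameters below.

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

#include <vector>

std::vector<pcl::PointIndices> cluster(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & cloud)
{
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(cloud);

  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.7);  // [m] spatial tolerance in L2 Euclidean space
  ec.setMinClusterSize(10);     // minimum points per valid cluster
  ec.setMaxClusterSize(3000);   // maximum points per valid cluster
  ec.setSearchMethod(tree);
  ec.setInputCloud(cloud);

  std::vector<pcl::PointIndices> cluster_indices;
  ec.extract(cluster_indices);
  return cluster_indices;
}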

"},{"location":"perception/euclidean_cluster/#voxel_grid_based_euclidean_cluster","title":"voxel_grid_based_euclidean_cluster","text":"
  1. A centroid in each voxel is calculated by pcl::VoxelGrid.
  2. The centroids are clustered by pcl::EuclideanClusterExtraction.
  3. The input points are clustered based on the clustered centroids (a sketch of steps 1 and 2 follows this list).
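A rough sketch of steps 1 and 2 with placeholder parameter values; step 3, which assigns the original points to the clustered centroids, is omitted.

#include <pcl/filters/voxel_grid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

#include <vector>

std::vector<pcl::PointIndices> clusterVoxelCentroids(
  const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & cloud)
{
  // 1. Compute a centroid per voxel with pcl::VoxelGrid.
  pcl::VoxelGrid<pcl::PointXYZ> voxel_grid;
  voxel_grid.setInputCloud(cloud);
  voxel_grid.setLeafSize(0.3f, 0.3f, 100.0f);  // voxel_leaf_size for x/y; large z as a placeholder
  voxel_grid.setMinimumPointsNumberPerVoxel(1);  // min_points_number_per_voxel
  pcl::PointCloud<pcl::PointXYZ>::Ptr centroids(new pcl::PointCloud<pcl::PointXYZ>);
  voxel_grid.filter(*centroids);

  // 2. Cluster the centroids with pcl::EuclideanClusterExtraction.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(centroids);
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.7);
  ec.setMinClusterSize(1);
  ec.setMaxClusterSize(3000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(centroids);
  std::vector<pcl::PointIndices> cluster_indices;
  ec.extract(cluster_indices);
  return cluster_indices;
}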
"},{"location":"perception/euclidean_cluster/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/euclidean_cluster/#input","title":"Input","text":"Name Type Description input sensor_msgs::msg::PointCloud2 input pointcloud"},{"location":"perception/euclidean_cluster/#output","title":"Output","text":"Name Type Description output tier4_perception_msgs::msg::DetectedObjectsWithFeature cluster pointcloud debug/clusters sensor_msgs::msg::PointCloud2 colored cluster pointcloud for visualization"},{"location":"perception/euclidean_cluster/#parameters","title":"Parameters","text":""},{"location":"perception/euclidean_cluster/#core-parameters","title":"Core Parameters","text":""},{"location":"perception/euclidean_cluster/#euclidean_cluster_2","title":"euclidean_cluster","text":"Name Type Description use_height bool use point.z for clustering min_cluster_size int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size int the maximum number of points that a cluster needs to contain in order to be considered valid tolerance float the spatial cluster tolerance as a measure in the L2 Euclidean space"},{"location":"perception/euclidean_cluster/#voxel_grid_based_euclidean_cluster_1","title":"voxel_grid_based_euclidean_cluster","text":"Name Type Description use_height bool use point.z for clustering min_cluster_size int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size int the maximum number of points that a cluster needs to contain in order to be considered valid tolerance float the spatial cluster tolerance as a measure in the L2 Euclidean space voxel_leaf_size float the voxel leaf size of x and y min_points_number_per_voxel int the minimum number of points for a voxel"},{"location":"perception/euclidean_cluster/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/euclidean_cluster/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/euclidean_cluster/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/euclidean_cluster/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/euclidean_cluster/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":"

The use_height option of voxel_grid_based_euclidean_cluster isn't implemented yet.

"},{"location":"perception/front_vehicle_velocity_estimator/","title":"front_vehicle_velocity_estimator","text":"

This package contains a front vehicle velocity estimator for offline perception module analysis. This package can:

"},{"location":"perception/front_vehicle_velocity_estimator/#algorithm","title":"Algorithm","text":""},{"location":"perception/front_vehicle_velocity_estimator/#input","title":"Input","text":"Name Type Description ~/input/objects autoware_auto_perception_msgs/msg/DetectedObject.msg 3D detected objects. ~/input/pointcloud sensor_msgs/msg/PointCloud2.msg LiDAR pointcloud. ~/input/odometry nav_msgs::msg::Odometry.msg Odometry data."},{"location":"perception/front_vehicle_velocity_estimator/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected object with twist. ~/debug/nearest_neighbor_pointcloud sensor_msgs/msg/PointCloud2.msg The pointcloud msg of nearest neighbor point."},{"location":"perception/front_vehicle_velocity_estimator/#node-parameter","title":"Node parameter","text":"Name Type Description Default value update_rate_hz double The update rate [hz]. 10.0"},{"location":"perception/front_vehicle_velocity_estimator/#core-parameter","title":"Core parameter","text":"Name Type Description Default value moving_average_num int The moving average number for velocity estimation. 1 threshold_pointcloud_z_high float The threshold for z position value of point when choosing nearest neighbor point within front vehicle [m]. If z > threshold_pointcloud_z_high, the point is considered to noise. 1.0f threshold_pointcloud_z_low float The threshold for z position value of point when choosing nearest neighbor point within front vehicle [m]. If z < threshold_pointcloud_z_low, the point is considered to noise like ground. 0.6f threshold_relative_velocity double The threshold for min and max of estimated relative velocity (v_{re}) [m/s]. If v_{re} < - threshold_relative_velocity , then v_{re} = - threshold_relative_velocity. If v_{re} > threshold_relative_velocity, then v_{re} = threshold_relative_velocity. 10.0 threshold_absolute_velocity double The threshold for max of estimated absolute velocity (v_{ae}) [m/s]. If v_{ae} > threshold_absolute_velocity, then v_{ae} = threshold_absolute_velocity. 20.0"},{"location":"perception/ground_segmentation/","title":"ground_segmentation","text":""},{"location":"perception/ground_segmentation/#purpose","title":"Purpose","text":"

The ground_segmentation is a node that removes the ground points from the input pointcloud.

"},{"location":"perception/ground_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Detailed descriptions of each ground segmentation algorithm are in the following links.

Filter Name Description Detail ray_ground_filter A method of removing the ground based on the geometrical relationship between points lined up on radiation link scan_ground_filter Almost the same method as ray_ground_filter, but with slightly improved performance link ransac_ground_filter A method of removing the ground by approximating the ground to a plane link"},{"location":"perception/ground_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/ground_segmentation/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/indices pcl_msgs::msg::Indices reference indices"},{"location":"perception/ground_segmentation/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"perception/ground_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/ground_segmentation/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description input_frame string \" \" input frame id output_frame string \" \" output frame id max_queue_size int 5 max queue size of input/output topics use_indices bool false flag to use pointcloud indices latched_indices bool false flag to latch pointcloud indices approximate_sync bool false flag to use approximate sync option"},{"location":"perception/ground_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

pointcloud_preprocessor::Filter is implemented based on pcl_perception [1] because of this issue.

"},{"location":"perception/ground_segmentation/#referencesexternal-links","title":"References/External links","text":"

[1] https://github.com/ros-perception/perception_pcl/blob/ros2/pcl_ros/src/pcl_ros/filters/filter.cpp

"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/","title":"RANSAC Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#purpose","title":"Purpose","text":"

The purpose of this node is to remove the ground points from the input pointcloud.

"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Fit a plane to the input points, and treat points farther than a certain distance from the plane as non-ground points. Normally, when using this method, the input points are filtered beforehand so that they are almost flat. Since the drivable area is often flat, there are methods such as filtering by lane.
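A minimal standalone PCL plane-segmentation example corresponding to this description; the numeric values are placeholders for the node parameters max_iterations and outlier_threshold.

#include <pcl/ModelCoefficients.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

pcl::PointIndices segmentGroundPlane(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & cloud)
{
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setMaxIterations(1000);      // max_iterations
  seg.setDistanceThreshold(0.01);  // [m] outlier_threshold
  seg.setInputCloud(cloud);

  pcl::PointIndices ground_inliers;     // points within the threshold of the plane
  pcl::ModelCoefficients coefficients;  // plane model: ax + by + cz + d = 0
  seg.segment(ground_inliers, coefficients);
  return ground_inliers;
}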

"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#parameters","title":"Parameters","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description base_frame string base_link frame unit_axis string The axis which we need to search ground plane max_iterations int The maximum number of iterations outlier_threshold double The distance threshold to the model [m] plane_slope_threshold double The slope threshold to prevent mis-fitting [deg] voxel_size_x double voxel size x [m] voxel_size_y double voxel size y [m] voxel_size_z double voxel size z [m] height_threshold double The height threshold from ground plane for no ground points [m] debug bool whether to output debug information"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#referencesexternal-links","title":"References/External links","text":"

https://pcl.readthedocs.io/projects/tutorials/en/latest/planar_segmentation.html

"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/","title":"Ray Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#purpose","title":"Purpose","text":"

The purpose of this node is to remove the ground points from the input pointcloud.

"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The points are separated radially into rays, and for each ray the ground is classified sequentially, starting from the point closest to the ego-vehicle, based on geometric information such as the distance and angle between points.
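A hypothetical sketch of the per-ray slope test implied by the Core Parameters below: walking outward along one ray, a point is treated as ground while the slope from the previous ground point stays below local_max_slope. The struct and function are illustrative only, not the actual implementation.

#include <algorithm>
#include <cmath>

struct RayPoint
{
  double radius;  // xy-distance from the ego-vehicle [m]
  double z;       // height [m]
};

bool isGround(const RayPoint & prev_ground, const RayPoint & current,
              double local_max_slope_deg)
{
  constexpr double kRadToDeg = 57.29577951308232;
  const double dr = std::max(current.radius - prev_ground.radius, 1e-6);
  const double dz = current.z - prev_ground.z;
  // Angle of the segment connecting the two points, measured from horizontal.
  const double slope_deg = std::atan2(std::abs(dz), dr) * kRadToDeg;
  return slope_deg <= local_max_slope_deg;
}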

"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#parameters","title":"Parameters","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description input_frame string frame id of input pointcloud output_frame string frame id of output pointcloud general_max_slope double The triangle created by general_max_slope is called the global cone. If the point is outside the global cone, it is judged to be a point that is no on the ground initial_max_slope double Generally, the point where the object first hits is far from ego-vehicle because of sensor blind spot, so resolution is different from that point and thereafter, so this parameter exists to set a separate local_max_slope local_max_slope double The triangle created by local_max_slope is called the local cone. This parameter for classification based on the continuity of points min_height_threshold double This parameter is used instead of height_threshold because it's difficult to determine continuity in the local cone when the points are too close to each other. radial_divider_angle double The angle of ray concentric_divider_distance double Only check points which radius is larger than concentric_divider_distance reclass_distance_threshold double To check if point is to far from previous one, if so classify again min_x double The parameter to set vehicle footprint manually max_x double The parameter to set vehicle footprint manually min_y double The parameter to set vehicle footprint manually max_y double The parameter to set vehicle footprint manually"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The input_frame is set as a parameter, but it must be fixed to base_link for the current algorithm.

"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/","title":"Scan Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#purpose","title":"Purpose","text":"

The purpose of this node is to remove the ground points from the input pointcloud.

"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This algorithm works by the following steps:

  1. Divide the whole pointcloud into groups by horizontal angle and sort each group by xy-distance.
  2. Divide the sorted pointcloud of each ray into grids.
  3. Check the xy-distance to the previous pointcloud; if the distance is large, the previous pointcloud is \"no ground\", and the height level of the current point is greater than that of the previous point, the current pointcloud is classified as no ground.
  4. Check the vertical angle of the point compared with the previous ground grid.
  5. Check the height of the point compared with the predicted ground level.
  6. If the vertical angle is greater than local_slope_max and the height relative to the predicted ground level is greater than the \"non ground height threshold\", the point is classified as \"non ground\".
  7. If the vertical angle is in the range [-local_slope_max, local_slope_max] or the height relative to the predicted ground level is smaller than non_ground_height_threshold, the point is classified as \"ground\".
  8. If the vertical angle is lower than -local_slope_max or the height relative to the ground level is greater than detection_range_z_max, the point is classified as out of range (a condensed sketch of steps 6-8 follows this list).
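A condensed, hypothetical C++ form of steps 6-8 above; angle inputs are in degrees and heights are in meters relative to the predicted ground level. The enum and function are illustrative only.

enum class PointLabel { GROUND, NON_GROUND, OUT_OF_RANGE };

PointLabel classify(double vertical_angle_deg, double height_from_predicted_ground,
                    double local_slope_max_deg, double non_ground_height_threshold,
                    double detection_range_z_max)
{
  if (height_from_predicted_ground > detection_range_z_max ||
      vertical_angle_deg < -local_slope_max_deg) {
    return PointLabel::OUT_OF_RANGE;  // step 8
  }
  if (vertical_angle_deg > local_slope_max_deg &&
      height_from_predicted_ground > non_ground_height_threshold) {
    return PointLabel::NON_GROUND;    // step 6
  }
  return PointLabel::GROUND;          // step 7
}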
"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#parameters","title":"Parameters","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to its README.

"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description input_frame string \"base_link\" frame id of input pointcloud output_frame string \"base_link\" frame id of output pointcloud global_slope_max_angle_deg double 8.0 The global angle to classify as the ground or object [deg] local_slope_max_angle_deg double 10.0 The local angle to classify as the ground or object [deg] radial_divider_angle_deg double 1.0 The angle which divide the whole pointcloud to sliced group [deg] split_points_distance_tolerance double 0.2 The xy-distance threshold to to distinguishing far and near [m] split_height_distance double 0.2 The height threshold to distinguishing far and near [m] use_virtual_ground_point bool true whether to use the ground center of front wheels as the virtual ground point. detection_range_z_max float 2.5 Maximum height of detection range [m], applied only for elevation_grid_mode center_pcl_shift float 0.0 The x-axis offset of addition LiDARs from vehicle center of mass [m], recommended to use only for additional LiDARs in elevation_grid_mode non_ground_height_threshold float 0.2 Height threshold of non ground objects [m], applied only for elevation_grid_mode grid_mode_switch_radius float 20.0 The distance where grid division mode change from by distance to by vertical angle [m], applied only for elevation_grid_mode grid_size_m float 0.5 The first grid size [m], applied only for elevation_grid_mode gnd_grid_buffer_size uint16 4 Number of grids using to estimate local ground slope, applied only for elevation_grid_mode low_priority_region_x float -20.0 The non-zero x threshold in back side from which small objects detection is low priority [m] elevation_grid_mode bool true Elevation grid scan mode option"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The input_frame is set as a parameter, but it must be fixed to base_link for the current algorithm.

"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

The elevation grid idea is referred from \"Shen Z, Liang H, Lin L, Wang Z, Huang W, Yu J. Fast Ground Segmentation for 3D LiDAR Point Cloud Based on Jump-Convolution-Process. Remote Sensing. 2021; 13(16):3239. https://doi.org/10.3390/rs13163239\"

"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/heatmap_visualizer/","title":"heatmap_visualizer","text":""},{"location":"perception/heatmap_visualizer/#purpose","title":"Purpose","text":"

heatmap_visualizer is a package for visualizing a heatmap of detected 3D objects' positions in BEV space.

This package is used for qualitative evaluation and trend analysis of a detector; for instance, the heatmap may show that the detector performs well near the ego-vehicle but poorly far away.

"},{"location":"perception/heatmap_visualizer/#how-to-run","title":"How to run","text":"
ros2 launch heatmap_visualizer heatmap_visualizer.launch.xml input/objects:=<DETECTED_OBJECTS_TOPIC>\n
"},{"location":"perception/heatmap_visualizer/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

In this implementation, a heatmap of the center positions of detected objects is created for each class (for instance, CAR, PEDESTRIAN, etc.) and published as occupancy grid maps.

In the above figure, pink represents areas of high detection frequency, blue represents low frequency, and black represents areas with no detections.

Internally, the center positions of detected objects are added to the buffer index of each corresponding grid map cell. The created heatmap is published every fixed number of frames, which can be specified with frame_count. Note that the buffer to which the positions are added is not reset on publishing. When publishing, these values are first normalized to [0, 1] using the maximum and minimum values in the buffer, and then scaled to integers in [0, 100] because nav_msgs::msg::OccupancyGrid only allows values in [0, 100].
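A minimal sketch of this normalization, assuming a non-empty buffer of per-cell counts; the function name is illustrative.

#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<std::int8_t> toOccupancyData(const std::vector<float> & buffer)
{
  // Normalize to [0, 1] with the buffer min/max, then scale to integers in [0, 100].
  const auto [min_it, max_it] = std::minmax_element(buffer.begin(), buffer.end());
  const float range = std::max(*max_it - *min_it, 1e-6f);  // guard against a flat buffer
  std::vector<std::int8_t> data(buffer.size());
  for (std::size_t i = 0; i < buffer.size(); ++i) {
    data[i] = static_cast<std::int8_t>(100.0f * (buffer[i] - *min_it) / range);
  }
  return data;
}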

"},{"location":"perception/heatmap_visualizer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/heatmap_visualizer/#input","title":"Input","text":"Name Type Description ~/input/objects autoware_auto_perception_msgs::msg::DetectedObjects detected objects"},{"location":"perception/heatmap_visualizer/#output","title":"Output","text":"Name Type Description ~/output/objects/<CLASS_NAME> nav_msgs::msg::OccupancyGrid visualized heatmap"},{"location":"perception/heatmap_visualizer/#parameters","title":"Parameters","text":""},{"location":"perception/heatmap_visualizer/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description frame_count int 50 The number of frames to publish heatmap map_frame string base_link the frame of heatmap to be respected map_length float 200.0 the length of map in meter map_resolution float 0.8 the resolution of map use_confidence bool false the flag if use confidence score as heatmap value rename_car_to_truck_and_bus bool true the flag if rename car to truck or bus"},{"location":"perception/heatmap_visualizer/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The heatmap depends on the data used, so if the objects in the data are sparse, the heatmap will also be sparse.

"},{"location":"perception/heatmap_visualizer/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/heatmap_visualizer/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/heatmap_visualizer/#referencesexternal-links","title":"References/External links","text":""},{"location":"perception/heatmap_visualizer/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/image_projection_based_fusion/","title":"image_projection_based_fusion","text":""},{"location":"perception/image_projection_based_fusion/#purpose","title":"Purpose","text":"

The image_projection_based_fusion is a package to fuse obstacles detected from images (bounding boxes or segmentation) with a 3D pointcloud or with obstacles (bounding boxes, clusters, or segmentation).

"},{"location":"perception/image_projection_based_fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/image_projection_based_fusion/#sync-algorithm","title":"Sync Algorithm","text":""},{"location":"perception/image_projection_based_fusion/#matching","title":"matching","text":"

The offset between each camera and the lidar is set according to their shutter timing. After applying the offset to the timestamp, if the interval between the timestamp of the pointcloud topic and that of the roi message is less than the match threshold, the two messages are matched.

Current default values in autoware.universe for the TIER IV Robotaxi are:
- input_offset_ms: [61.67, 111.67, 45.0, 28.33, 78.33, 95.0]
- match_threshold_ms: 30.0
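A minimal sketch of the matching rule, assuming the offset is added to the roi timestamp before comparison (the sign convention here is an assumption, not taken from the source).

#include <cmath>

// Times are in milliseconds; input_offset_ms is the per-camera shutter-timing offset.
bool isMatched(double pointcloud_stamp_ms, double roi_stamp_ms,
               double input_offset_ms, double match_threshold_ms)
{
  const double interval_ms = std::abs(pointcloud_stamp_ms - (roi_stamp_ms + input_offset_ms));
  return interval_ms < match_threshold_ms;
}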

"},{"location":"perception/image_projection_based_fusion/#fusion-and-timer","title":"fusion and timer","text":"

The subscription status of each message is indicated with 'O'.

1. If a pointcloud message is subscribed under the condition below:

| | pointcloud | roi msg 1 | roi msg 2 | roi msg 3 |
| :-----------------: | :--------: | :-------: | :-------: | :-------: |
| subscription status | | O | O | O |

If the roi msgs can be matched, fuse them and postprocess the pointcloud message. Otherwise, fuse the matched roi msgs and cache the pointcloud.

2. If a pointcloud message is subscribed under the condition below:

| | pointcloud | roi msg 1 | roi msg 2 | roi msg 3 |
| :-----------------: | :--------: | :-------: | :-------: | :-------: |
| subscription status | | O | O | |

If the roi msgs can be matched, fuse them and cache the pointcloud.

3. If a pointcloud message is subscribed under the condition below:

| | pointcloud | roi msg 1 | roi msg 2 | roi msg 3 |
| :-----------------: | :--------: | :-------: | :-------: | :-------: |
| subscription status | O | O | O | |

If roi msg 3 is subscribed before the next pointcloud message arrives or before the timeout, fuse it if it matches; otherwise, wait for the next roi msg 3. If roi msg 3 is not subscribed before the next pointcloud message arrives or before the timeout, postprocess the pointcloud message as it is.

The timeout threshold should be set according to the postprocessing time. E.g., if the postprocessing time is around 50 ms, the timeout threshold should be set smaller than 50 ms, so that the whole processing time is less than 100 ms. Current default value in autoware.universe for XX1:
- timeout_ms: 50.0

"},{"location":"perception/image_projection_based_fusion/#known-limits","title":"Known Limits","text":"

The rclcpp::TimerBase timer cannot break a for loop; therefore, even if the timer expires while a roi msg is being fused in the middle of the loop, the program will run until all msgs are fused.

"},{"location":"perception/image_projection_based_fusion/#detail-description-of-each-fusions-algorithm-is-in-the-following-links","title":"Detail description of each fusion's algorithm is in the following links","text":"Fusion Name Description Detail roi_cluster_fusion Overwrite a classification label of clusters by that of ROIs from a 2D object detector. link roi_detected_object_fusion Overwrite a classification label of detected objects by that of ROIs from a 2D object detector. link pointpainting_fusion Paint the point cloud with the ROIs from a 2D object detector and feed to a 3D object detector. link"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/","title":"pointpainting_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#purpose","title":"Purpose","text":"

The pointpainting_fusion is a package for utilizing the class information detected by a 2D object detector in 3D object detection.

"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The lidar points are projected onto the output of an image-only 2D object detection network, and the class scores are appended to each point. The painted point cloud can then be fed to the centerpoint network.
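A rough sketch of the projection step, assuming a simple pinhole camera with intrinsics fx, fy, cx, cy and a point already transformed into the camera frame; the real node uses the full sensor_msgs::msg::CameraInfo model, so this function is illustrative only.

#include <array>
#include <optional>

std::optional<std::array<double, 2>> projectToImage(
  double x, double y, double z,  // point in the camera frame, z forward
  double fx, double fy, double cx, double cy, int width, int height)
{
  if (z <= 0.0) return std::nullopt;  // behind the camera
  const double u = fx * x / z + cx;
  const double v = fy * y / z + cy;
  if (u < 0.0 || u >= width || v < 0.0 || v >= height) return std::nullopt;
  // The class scores of the ROI containing (u, v) are appended to the point.
  return std::array<double, 2>{u, v};
}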

"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#input","title":"Input","text":"Name Type Description input sensor_msgs::msg::PointCloud2 pointcloud input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image input/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#output","title":"Output","text":"Name Type Description output sensor_msgs::msg::PointCloud2 painted pointcloud ~/output/objects autoware_auto_perception_msgs::msg::DetectedObjects detected objects ~/debug/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold float 0.4 detected objects with score less than threshold are ignored densification_world_frame_id string map the world frame id to fuse multi-frame pointcloud densification_num_past_frames int 0 the number of past frames to fuse with the current frame trt_precision string fp16 TensorRT inference precision: fp32 or fp16 encoder_onnx_path string \"\" path to VoxelFeatureEncoder ONNX file encoder_engine_path string \"\" path to VoxelFeatureEncoder TensorRT Engine file head_onnx_path string \"\" path to DetectionHead ONNX file head_engine_path string \"\" path to DetectionHead TensorRT Engine file"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#referencesexternal-links","title":"References/External links","text":"

[1] Vora, Sourabh, et al. \"PointPainting: Sequential fusion for 3d object detection.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

[2] CVPR'20 Workshop on Scalability in Autonomous Driving, Waymo Open Dataset Challenge: https://youtu.be/9g9GsI33ol8?t=535 Ding, Zhuangzhuang, et al. \"1st Place Solution for Waymo Open Dataset Challenge--3D Detection and Domain Adaptation.\" arXiv preprint arXiv:2006.15505 (2020).

"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/","title":"roi_cluster_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#purpose","title":"Purpose","text":"

The roi_cluster_fusion is a package for filtering clusters that are less likely to be objects and for overwriting the labels of clusters with those of Regions Of Interest (ROIs) from a 2D object detector.

"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The clusters are projected onto the image planes, and if the ROIs of the clusters overlap with the ROIs from a 2D detector, the labels of the clusters are overwritten with those of the detector's ROIs. Intersection over Union (IoU) is used to determine whether they overlap.
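A minimal axis-aligned IoU computation of the kind used for this overlap test; the Roi struct is illustrative, not the node's actual type.

#include <algorithm>

struct Roi { double x_min, y_min, x_max, y_max; };

double iou(const Roi & a, const Roi & b)
{
  // Intersection extents, clamped to zero when the boxes do not overlap.
  const double ix = std::max(0.0, std::min(a.x_max, b.x_max) - std::max(a.x_min, b.x_min));
  const double iy = std::max(0.0, std::min(a.y_max, b.y_max) - std::max(a.y_min, b.y_min));
  const double inter = ix * iy;
  const double area_a = (a.x_max - a.x_min) * (a.y_max - a.y_min);
  const double area_b = (b.x_max - b.x_min) * (b.y_max - b.y_min);
  const double uni = area_a + area_b - inter;
  return uni > 0.0 ? inter / uni : 0.0;
}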

"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#input","title":"Input","text":"Name Type Description input tier4_perception_msgs::msg::DetectedObjectsWithFeature clustered pointcloud input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image input/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#output","title":"Output","text":"Name Type Description output tier4_perception_msgs::msg::DetectedObjectsWithFeature labeled cluster pointcloud ~/debug/image_raw[0-7] sensor_msgs::msg::Image images for visualization"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description use_iou_x bool calculate IoU only along x-axis use_iou_y bool calculate IoU only along y-axis use_iou bool calculate IoU both along x-axis and y-axis use_cluster_semantic_type bool if false, the labels of clusters are overwritten by UNKNOWN before fusion iou_threshold float the IoU threshold to overwrite a label of clusters with a label of roi rois_number int the number of input rois debug_mode bool If true, subscribe and publish images for visualization."},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/","title":"roi_detected_object_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#purpose","title":"Purpose","text":"

The roi_detected_object_fusion is a package to overwrite the labels of detected objects with those of Regions of Interest (ROIs) detected by a 2D object detector.

"},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

In what follows, we describe the algorithm utilized by roi_detected_object_fusion (the meaning of each parameter can be found in the Parameters section):

  1. If the existence_probability of a detected object is greater than the threshold, it is accepted without any further processing and published in output.
  2. The remaining detected objects are projected onto image planes, and if the resulting ROIs overlap with the ones from the 2D detector, they are published as fused objects in output. The Intersection over Union (IoU) is used to determine if there are overlaps between the detections from input and the ROIs from input/rois.
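
A minimal sketch of this two-step decision, with hypothetical simplified types standing in for the ROS messages:

#include <vector>\n\nstruct Object\n{\n  double existence_probability;\n  double iou_with_best_roi;  // IoU between the projected ROI and its best-matching 2D ROI\n};\n\n// Step 1: pass through high-confidence objects without fusion.\n// Step 2: keep the rest only if their projected ROI matches a 2D-detector ROI.\nstd::vector<Object> fuse(\n  const std::vector<Object> & objects, double passthrough_threshold, double min_iou_threshold)\n{\n  std::vector<Object> output;\n  for (const auto & obj : objects) {\n    if (obj.existence_probability > passthrough_threshold) {\n      output.push_back(obj);  // accepted without further processing\n    } else if (obj.iou_with_best_roi > min_iou_threshold) {\n      output.push_back(obj);  // accepted as a fused object\n    }\n    // otherwise the object is ignored (published on ~/debug/ignored_objects)\n  }\n  return output;\n}\n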

The DetectedObject has three possible shape choices/implementations, where the polygon's vertices for each case are defined as follows:

"},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#input","title":"Input","text":"Name Type Description input autoware_auto_perception_msgs::msg::DetectedObjects input detected objects input/camera_info[0-7] sensor_msgs::msg::CameraInfo camera information to project 3d points onto image planes. input/rois[0-7] tier4_perception_msgs::msg::DetectedObjectsWithFeature ROIs from each image. input/image_raw[0-7] sensor_msgs::msg::Image images for visualization."},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#output","title":"Output","text":"Name Type Description output autoware_auto_perception_msgs::msg::DetectedObjects detected objects ~/debug/image_raw[0-7] sensor_msgs::msg::Image images for visualization, ~/debug/fused_objects autoware_auto_perception_msgs::msg::DetectedObjects fused detected objects ~/debug/ignored_objects autoware_auto_perception_msgs::msg::DetectedObjects not fused detected objects"},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description rois_number int the number of input rois debug_mode bool If set to true, the node subscribes to the image topic and publishes an image with debug drawings. min_iou_threshold double If the iou between detected objects and rois is greater than min_iou_threshold, the objects are classified as fused. passthrough_lower_bound_probability_threshold double If the existence_probability of a detected object is greater than the threshold, it is published in output. use_roi_probability float If set to true, the algorithm uses existence_probability of ROIs to match with the that of detected objects. roi_probability_threshold double If the existence_probability of ROIs is greater than the threshold, matched detected objects are published in output."},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The POLYGON shape of a detected object isn't supported yet.

"},{"location":"perception/lidar_apollo_instance_segmentation/","title":"lidar_apollo_instance_segmentation","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#purpose","title":"Purpose","text":"

This node segments 3D pointcloud data from lidar sensors into obstacles, e.g., cars, trucks, bicycles, and pedestrians, based on a CNN-based model and an obstacle clustering method.

"},{"location":"perception/lidar_apollo_instance_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

See the original design by Apollo.

"},{"location":"perception/lidar_apollo_instance_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#input","title":"Input","text":"Name Type Description input/pointcloud sensor_msgs/PointCloud2 Pointcloud data from lidar sensors"},{"location":"perception/lidar_apollo_instance_segmentation/#output","title":"Output","text":"Name Type Description output/labeled_clusters tier4_perception_msgs/DetectedObjectsWithFeature Detected objects with labeled pointcloud cluster. debug/instance_pointcloud sensor_msgs/PointCloud2 Segmented pointcloud for visualization."},{"location":"perception/lidar_apollo_instance_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#node-parameters","title":"Node Parameters","text":"

None

"},{"location":"perception/lidar_apollo_instance_segmentation/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold double 0.8 If the score of a detected object is lower than this value, the object is ignored. range int 60 Half of the length of feature map sides. [m] width int 640 The grid width of feature map. height int 640 The grid height of feature map. engine_file string \"vls-128.engine\" The name of TensorRT engine file for CNN model. prototxt_file string \"vls-128.prototxt\" The name of prototxt file for CNN model. caffemodel_file string \"vls-128.caffemodel\" The name of caffemodel file for CNN model. use_intensity_feature bool true The flag to use intensity feature of pointcloud. use_constant_feature bool false The flag to use direction and distance feature of pointcloud. target_frame string \"base_link\" Pointcloud data is transformed into this frame. z_offset int 2 z offset from target frame. [m]"},{"location":"perception/lidar_apollo_instance_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

There is no training code for the CNN model.

"},{"location":"perception/lidar_apollo_instance_segmentation/#note","title":"Note","text":"

This package makes use of three external code bases. The trained files are provided by Apollo and are automatically downloaded when you build.

Original URL

Supported lidars are the Velodyne 16, 64, and 128 models, but the Velodyne 32 and other lidars can also be used with good accuracy.

  1. apollo 3D Obstacle Perception description

    /******************************************************************************\n* Copyright 2017 The Apollo Authors. All Rights Reserved.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n* http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*****************************************************************************/\n
  2. tensorRTWrapper : It is used under the lib directory.

    MIT License\n\nCopyright (c) 2018 lewes6369\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n
  3. autoware_perception description

    /*\n* Copyright 2018-2019 Autoware Foundation. All rights reserved.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n*     http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n
"},{"location":"perception/lidar_apollo_instance_segmentation/#special-thanks","title":"Special thanks","text":""},{"location":"perception/lidar_apollo_instance_segmentation/lib/","title":"Index","text":""},{"location":"perception/lidar_apollo_instance_segmentation/lib/#note","title":"Note","text":"

This library is customized from the original repository: https://github.com/lewes6369/tensorRTWrapper

MIT License\n\nCopyright (c) 2018 lewes6369\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n
"},{"location":"perception/lidar_apollo_segmentation_tvm/design/lidar-segmentation-design/","title":"Macro Syntax Error","text":"

File: perception/lidar_apollo_segmentation_tvm/design/lidar-segmentation-design.md

Line 1 in Markdown file: Missing end of comment tag

# lidar_apollo_segmentation_tvm {#lidar-apollo-segmentation-tvm-design}\n

"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/design/lidar-segmentation-nodes-design/","title":"Macro Syntax Error","text":"

File: perception/lidar_apollo_segmentation_tvm_nodes/design/lidar-segmentation-nodes-design.md

Line 1 in Markdown file: Missing end of comment tag

# lidar_apollo_segmentation_tvm_nodes {#lidar-apollo-segmentation-tvm-nodes-design}\n

"},{"location":"perception/lidar_centerpoint/","title":"lidar_centerpoint","text":""},{"location":"perception/lidar_centerpoint/#purpose","title":"Purpose","text":"

lidar_centerpoint is a package for detecting dynamic 3D objects.

"},{"location":"perception/lidar_centerpoint/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

In this implementation, CenterPoint [1] uses a PointPillars-based [2] network and performs inference with TensorRT.

We trained the models using https://github.com/open-mmlab/mmdetection3d.

"},{"location":"perception/lidar_centerpoint/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/lidar_centerpoint/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs::msg::PointCloud2 input pointcloud"},{"location":"perception/lidar_centerpoint/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_auto_perception_msgs::msg::DetectedObjects detected objects debug/cyclic_time_ms tier4_debug_msgs::msg::Float64Stamped cyclic time (msg) debug/processing_time_ms tier4_debug_msgs::msg::Float64Stamped processing time (ms)"},{"location":"perception/lidar_centerpoint/#parameters","title":"Parameters","text":""},{"location":"perception/lidar_centerpoint/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold float 0.4 detected objects with score less than threshold are ignored densification_world_frame_id string map the world frame id to fuse multi-frame pointcloud densification_num_past_frames int 1 the number of past frames to fuse with the current frame trt_precision string fp16 TensorRT inference precision: fp32 or fp16 encoder_onnx_path string \"\" path to VoxelFeatureEncoder ONNX file encoder_engine_path string \"\" path to VoxelFeatureEncoder TensorRT Engine file head_onnx_path string \"\" path to DetectionHead ONNX file head_engine_path string \"\" path to DetectionHead TensorRT Engine file nms_iou_target_class_names list[string] - target classes for IoU-based Non Maximum Suppression nms_iou_search_distance_2d double - If two objects are farther than the value, NMS isn't applied. nms_iou_threshold double - IoU threshold for the IoU-based Non Maximum Suppression build_only bool false shutdown the node after TensorRT engine file is built"},{"location":"perception/lidar_centerpoint/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/lidar_centerpoint/#trained-models","title":"Trained Models","text":"

You can download the trained models in ONNX format by clicking on the links below.

Centerpoint was trained on nuScenes (~110k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs. Centerpoint tiny was trained on Argoverse 2 (~28k lidar frames) [9] and TIER IV's internal database (~11k lidar frames) for 20 epochs.

"},{"location":"perception/lidar_centerpoint/#standalone-inference-and-visualization","title":"Standalone inference and visualization","text":"

In addition to its use as a standard ROS node, lidar_centerpoint can also be used to perform inference in an isolated manner. To do so, execute the following launcher, where pcd_path is the path of the pointcloud to be used for inference.

ros2 launch lidar_centerpoint single_inference_lidar_centerpoint.launch.xml pcd_path:=test_pointcloud.pcd detections_path:=test_detections.ply\n

lidar_centerpoint generates a ply file in the provided detections_path, which contains the detections as triangle meshes. These detections can be visualized by most 3D tools, but we also integrate a visualization UI using Open3D which is launched alongside lidar_centerpoint.

"},{"location":"perception/lidar_centerpoint/#changelog","title":"Changelog","text":""},{"location":"perception/lidar_centerpoint/#v1-20220706","title":"v1 (2022/07/06)","text":"Name URLs Description centerpoint pts_voxel_encoder pts_backbone_neck_head There is a single change due to the limitation in the implementation of this package. num_filters=[32, 32] of PillarFeatureNet centerpoint_tiny pts_voxel_encoder pts_backbone_neck_head The same model as default of v0.

These changes are compared with this configuration.

"},{"location":"perception/lidar_centerpoint/#v0-20211203","title":"v0 (2021/12/03)","text":"Name URLs Description default pts_voxel_encoder pts_backbone_neck_head There are two changes from the original CenterPoint architecture. num_filters=[32] of PillarFeatureNet and ds_layer_strides=[2, 2, 2] of RPN"},{"location":"perception/lidar_centerpoint/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/lidar_centerpoint/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/lidar_centerpoint/#referencesexternal-links","title":"References/External links","text":"

[1] Yin, Tianwei, Xingyi Zhou, and Philipp Kr\u00e4henb\u00fchl. \"Center-based 3d object detection and tracking.\" arXiv preprint arXiv:2006.11275 (2020).

[2] Lang, Alex H., et al. \"Pointpillars: Fast encoders for object detection from point clouds.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.

[3] https://github.com/tianweiy/CenterPoint

[4] https://github.com/open-mmlab/mmdetection3d

[5] https://github.com/open-mmlab/OpenPCDet

[6] https://github.com/yukkysaito/autoware_perception

[7] https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars

[8] https://www.nuscenes.org/nuscenes

[9] https://www.argoverse.org/av2.html

"},{"location":"perception/lidar_centerpoint/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/","title":"Run lidar_centerpoint and lidar_centerpoint-tiny simultaneously","text":"

This tutorial shows the results of the centerpoint and centerpoint_tiny models simultaneously, making it easier to visualize and compare their performance.

"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#setup-development-environment","title":"Setup Development Environment","text":"

Follow the steps in the Source Installation guide (link) in the Autoware documentation.

If you fail to build the Autoware environment due to lack of memory, it is recommended to build Autoware sequentially.

Source the ROS 2 Galactic setup script.

source /opt/ros/galactic/setup.bash\n

Build the entire autoware repository.

colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --parallel-workers=1\n

Or you can use a constrained number of CPUs to build only one package.

export MAKEFLAGS=\"-j 4\" && MAKE_JOBS=4 colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --parallel-workers 1 --packages-select PACKAGE_NAME\n

Source the package.

source install/setup.bash\n
"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#data-preparation","title":"Data Preparation","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#using-rosbag-dataset","title":"Using rosbag dataset","text":"
ros2 bag play /YOUR/ROSBAG/PATH/ --clock 100\n

Don't forget to add the --clock option in order to sync the two rviz displays.

You can also use the sample rosbag provided by autoware here.

If you want to merge several rosbags into one, you can refer to this tool.

"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#using-realtime-lidar-dataset","title":"Using realtime LiDAR dataset","text":"

Set up your Ethernet connection according to steps 1.1 - 1.3 on this website.

Download the Velodyne ROS driver.

git clone -b ros2 https://github.com/ros-drivers/velodyne.git\n

Source the ROS 2 Galactic setup script.

source /opt/ros/galactic/setup.bash\n

Compile the Velodyne driver.

cd velodyne\nrosdep install -y --from-paths . --ignore-src --rosdistro $ROS_DISTRO\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n

Edit the configuration file. Specify the LiDAR device IP address in ./velodyne_driver/config/VLP32C-velodyne_driver_node-params.yaml

velodyne_driver_node:\n  ros__parameters:\n    device_ip: 192.168.1.201  # change to your LiDAR device IP address\n    gps_time: false\n    time_offset: 0.0\n    enabled: true\n    read_once: false\n    read_fast: false\n    repeat_delay: 0.0\n    frame_id: velodyne\n    model: 32C\n    rpm: 600.0\n    port: 2368\n

Launch the velodyne driver.

# Terminal 1\nros2 launch velodyne_driver velodyne_driver_node-VLP32C-launch.py\n

Launch the velodyne_pointcloud.

# Terminal 2\nros2 launch velodyne_pointcloud velodyne_convert_node-VLP32C-launch.py\n

Point Cloud data will be available on topic /velodyne_points. You can check with ros2 topic echo /velodyne_points.

Check this website if there is any unexpected issue.

"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#launch-file-setting","title":"Launch file setting","text":"

There are several fields to check in centerpoint_vs_centerpoint-tiny.launch.xml before running lidar_centerpoint.

"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#run-centerpoint-and-centerpoint-tiny-simultaneously","title":"Run CenterPoint and CenterPoint-tiny simultaneously","text":"

Run

ros2 launch lidar_centerpoint centerpoint_vs_centerpoint-tiny.launch.xml\n

Then two rviz windows will appear immediately. The left one shows the results of lidar_centerpoint_tiny, and the right one shows the results of lidar_centerpoint.

"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#troubleshooting","title":"Troubleshooting","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#bounding-box-blink-on-rviz","title":"Bounding Box blink on rviz","text":"

To prevent bounding boxes from blinking in rviz, you can extend the bounding box marker lifetime.

Set marker_ptr->lifetime and marker.lifetime to a longer duration, as sketched below.
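
Assuming the marker is a visualization_msgs::msg::Marker, the change might look like this (the 0.5 s value is illustrative):

#include <rclcpp/duration.hpp>\n#include <visualization_msgs/msg/marker.hpp>\n\n// Keep each bounding box marker alive longer so it does not blink between updates.\nvoid extendLifetime(visualization_msgs::msg::Marker & marker)\n{\n  marker.lifetime = rclcpp::Duration::from_seconds(0.5);  // illustrative value\n}\n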

Make sure to rebuild packages after any change.

"},{"location":"perception/lidar_centerpoint_tvm/","title":"lidar_centerpoint_tvm","text":""},{"location":"perception/lidar_centerpoint_tvm/#design","title":"Design","text":""},{"location":"perception/lidar_centerpoint_tvm/#usage","title":"Usage","text":"

lidar_centerpoint_tvm is a package for detecting dynamic 3D objects using a TVM-compiled CenterPoint module for different backends. To use this package, replace lidar_centerpoint with lidar_centerpoint_tvm in the perception launch files (for example, in lidar_based_detection.launch.xml if lidar-based detection is chosen).

"},{"location":"perception/lidar_centerpoint_tvm/#neural-network","title":"Neural network","text":"

This package will not build without a neural network for its inference. The network is provided by the tvm_utility package. See its design page for more information on how to enable downloading pre-compiled networks (by setting the DOWNLOAD_ARTIFACTS cmake variable), or how to handle user-compiled networks.

"},{"location":"perception/lidar_centerpoint_tvm/#backend","title":"Backend","text":"

The backend used for the inference can be selected by setting the lidar_centerpoint_tvm_BACKEND cmake variable. The current available options are llvm for a CPU backend, and vulkan or opencl for a GPU backend. It defaults to llvm.

"},{"location":"perception/lidar_centerpoint_tvm/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/lidar_centerpoint_tvm/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs::msg::PointCloud2 input pointcloud"},{"location":"perception/lidar_centerpoint_tvm/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_auto_perception_msgs::msg::DetectedObjects detected objects debug/cyclic_time_ms tier4_debug_msgs::msg::Float64Stamped cyclic time (msg) debug/processing_time_ms tier4_debug_msgs::msg::Float64Stamped processing time (ms)"},{"location":"perception/lidar_centerpoint_tvm/#parameters","title":"Parameters","text":""},{"location":"perception/lidar_centerpoint_tvm/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold float 0.1 detected objects with score less than threshold are ignored densification_world_frame_id string map the world frame id to fuse multi-frame pointcloud densification_num_past_frames int 1 the number of past frames to fuse with the current frame"},{"location":"perception/lidar_centerpoint_tvm/#bounding-box","title":"Bounding Box","text":"

The lidar segmentation node establishes a bounding box for each detected obstacle, using the L-fit method to fit a bounding box to a cluster.

"},{"location":"perception/lidar_centerpoint_tvm/#limitation-and-known-issue","title":"Limitation and Known Issue","text":"

Due to an accuracy issue of the CenterPoint model, the vulkan backend cannot be used at the moment. As for the llvm backend, real-time performance cannot be achieved.

"},{"location":"perception/lidar_centerpoint_tvm/#scatter-implementation","title":"Scatter Implementation","text":"

The scatter function can be implemented using either TVMScript or C++. For the C++ implementation, please refer to https://github.com/angry-crab/autoware.universe/blob/c020419fe52e359287eccb1b77e93bdc1a681e24/perception/lidar_centerpoint_tvm/lib/network/scatter.cpp#L65
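
As a rough illustration of what the scatter step does (a hypothetical plain-array interface, not the package's actual one): each pillar's encoded feature vector is written back into a dense BEV feature map at that pillar's cell coordinates, leaving empty cells at zero.

#include <cstddef>\n#include <vector>\n\n// pillar_features: [num_pillars x num_channels]; coords: [num_pillars x 2] as (x, y) cell indices.\n// Returns spatial_features: [num_channels x grid_y x grid_x], zero except at pillar cells.\nstd::vector<float> scatter(\n  const std::vector<float> & pillar_features, const std::vector<int> & coords,\n  std::size_t num_pillars, std::size_t num_channels, std::size_t grid_x, std::size_t grid_y)\n{\n  std::vector<float> spatial_features(num_channels * grid_y * grid_x, 0.0f);\n  for (std::size_t p = 0; p < num_pillars; ++p) {\n    const int x = coords[2 * p];\n    const int y = coords[2 * p + 1];\n    for (std::size_t c = 0; c < num_channels; ++c) {\n      spatial_features[(c * grid_y + y) * grid_x + x] = pillar_features[p * num_channels + c];\n    }\n  }\n  return spatial_features;\n}\n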

"},{"location":"perception/lidar_centerpoint_tvm/#reference","title":"Reference","text":"

[1] Yin, Tianwei, Xingyi Zhou, and Philipp Kr\u00e4henb\u00fchl. \"Center-based 3d object detection and tracking.\" arXiv preprint arXiv:2006.11275 (2020).

[2] Lang, Alex H., et al. \"Pointpillars: Fast encoders for object detection from point clouds.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.

[3] https://github.com/tianweiy/CenterPoint

[4] https://github.com/Abraham423/CenterPoint

[5] https://github.com/open-mmlab/OpenPCDet

"},{"location":"perception/lidar_centerpoint_tvm/#related-issues","title":"Related issues","text":""},{"location":"perception/lidar_centerpoint_tvm/#908-run-lidar-centerpoint-with-tvm","title":"908: Run Lidar Centerpoint with TVM","text":""},{"location":"perception/map_based_prediction/","title":"map_based_prediction","text":""},{"location":"perception/map_based_prediction/#role","title":"Role","text":"

map_based_prediction is a module to predict the future paths (and their probabilities) of other vehicles and pedestrians according to the shape of the map and the surrounding environment.

"},{"location":"perception/map_based_prediction/#assumptions","title":"Assumptions","text":""},{"location":"perception/map_based_prediction/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/map_based_prediction/#flow-chart","title":"Flow chart","text":""},{"location":"perception/map_based_prediction/#path-prediction-for-road-users","title":"Path prediction for road users","text":""},{"location":"perception/map_based_prediction/#remove-old-object-history","title":"Remove old object history","text":"

Time-series data of objects are stored to determine each vehicle's route and to detect lane changes over several durations. The object data contain the object's position, speed, and time information.

"},{"location":"perception/map_based_prediction/#get-current-lanelet-and-update-object-history","title":"Get current lanelet and update Object history","text":"

Search for one or more lanelets satisfying the following conditions for each target object and store them in the ObjectData.

"},{"location":"perception/map_based_prediction/#get-predicted-reference-path","title":"Get predicted reference path","text":"

The conditions for lane change detection then become:

//  Left Lane Change Detection\n(d_current_left / d_lane) > dl_ratio_threshold &&\n(d_current_left - d_previous_left) > ddl_threshold\n\n// Right Lane Change Detection\n(d_current_right / d_lane) < dr_ratio_threshold &&\n(d_current_right - d_previous_right) < ddr_threshold\n

where the parameters are explained in the picture below. An example of how to tune the parameters is described later.

"},{"location":"perception/map_based_prediction/#lane-change-detection-logic","title":"Lane change detection logic","text":"

This is an example of how to tune the parameters for lane change detection. The default parameters are set so that a lane change can be detected 1 second before the vehicle crosses the lane boundary. Here, 15 data points from the lane change / non-lane-change cases are plotted.

On the top plot, blue dots are the distance from the lane boundary one second before the lane change, and red dots are the average distance from the lane boundary when driving straight. From this plot, the most conservative value at which a lane change can be detected for all of these data is -1.1. Note that a larger value makes the decision more conservative (a lane change may not be detected), and a smaller value makes it more aggressive (many false positives occur).

On the bottom plot, blue dots are the lateral velocity one second before the lane change, and red dots are the average of the (absolute) lateral velocity when driving straight. Following the same policy as above, the parameter can be set to 0.5.

"},{"location":"perception/map_based_prediction/#limitations","title":"Limitations","text":""},{"location":"perception/map_based_prediction/#path-prediction-for-crosswalk-users","title":"Path prediction for crosswalk users","text":"

This module treats pedestrians and bicycles as objects using the crosswalk, and outputs a predicted path based on the map and the object's estimated velocity, assuming the object intends to cross the crosswalk, if the object satisfies at least one of the following conditions:

If there is a reachable crosswalk entry point within the prediction_time_horizon and the object satisfies the above condition, this module outputs an additional predicted path to cross to the opposite side via the crosswalk entry point.

If the target object is inside the road or crosswalk, this module outputs one or two additional prediction path(s) to reach an exit point of the crosswalk. The number of prediction paths depends on whether the object is moving or not. If the object is moving, this module outputs one prediction path toward the exit point in the direction of the object's movement. On the other hand, if the object has stopped, it is impossible to infer which exit point the object wants to go to, so this module outputs two prediction paths, one toward each exit point.

"},{"location":"perception/map_based_prediction/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/map_based_prediction/#input","title":"Input","text":"Name Type Description ~/perception/object_recognition/tracking/objects autoware_auto_perception_msgs::msg::TrackedObjects tracking objects without predicted path. ~/vector_map autoware_auto_mapping_msgs::msg::HADMapBin binary data of Lanelet2 Map."},{"location":"perception/map_based_prediction/#output","title":"Output","text":"Name Type Description ~/objects autoware_auto_perception_msgs::msg::PredictedObjects tracking objects with predicted path. ~/objects_path_markers visualization_msgs::msg::MarkerArray marker for visualization."},{"location":"perception/map_based_prediction/#parameters","title":"Parameters","text":"Parameter Type Description enable_delay_compensation bool flag to enable the time delay compensation for the position of the object prediction_time_horizon double predict time duration for predicted path [s] prediction_sampling_delta_time double sampling time for points in predicted path [s] min_velocity_for_map_based_prediction double apply map-based prediction to the objects with higher velocity than this value min_crosswalk_user_velocity double minimum velocity use in path prediction for crosswalk users dist_threshold_for_searching_lanelet double The threshold of the angle used when searching for the lane to which the object belongs [rad] delta_yaw_threshold_for_searching_lanelet double The threshold of the distance used when searching for the lane to which the object belongs [m] sigma_lateral_offset double Standard deviation for lateral position of objects [m] sigma_yaw_angle double Standard deviation yaw angle of objects [rad] object_buffer_time_length double Time span of object history to store the information [s] history_time_length double Time span of object information used for prediction [s] dist_ratio_threshold_to_left_bound double Conditions for using lane change detection of objects. Distance to the left bound of lanelet. dist_ratio_threshold_to_right_bound double Conditions for using lane change detection of objects. Distance to the right bound of lanelet. diff_dist_threshold_to_left_bound double Conditions for using lane change detection of objects. Differential value of horizontal position of objects. diff_dist_threshold_to_right_bound double Conditions for using lane change detection of objects. Differential value of horizontal position of objects."},{"location":"perception/map_based_prediction/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/map_based_prediction/#reference","title":"Reference","text":"
  1. M. Werling, J. Ziegler, S. Kammel, and S. Thrun, \u201cOptimal trajectory generation for dynamic street scenario in a frenet frame,\u201d IEEE International Conference on Robotics and Automation, Anchorage, Alaska, USA, May 2010.
  2. A. Houenou, P. Bonnifait, V. Cherfaoui, and Wen Yao, \u201cVehicle trajectory prediction based on motion model and maneuver recognition,\u201d in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, nov 2013, pp. 4363-4369.
"},{"location":"perception/multi_object_tracker/","title":"multi_object_tracker","text":""},{"location":"perception/multi_object_tracker/#purpose","title":"Purpose","text":"

The detection results are processed over a time series. The main purpose is to assign IDs and estimate velocity.

"},{"location":"perception/multi_object_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This multi-object tracker consists of data association and an EKF.

"},{"location":"perception/multi_object_tracker/#data-association","title":"Data association","text":"

The data association performs maximum score matching, formulated as a min-cost max-flow problem. In this package, muSSP [1] is used as the solver. In addition, when associating observations to trackers, the data association applies gates, such as the object area from the BEV, the Mahalanobis distance, and the maximum distance, depending on the class label.
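
A sketch of one such gate, the Mahalanobis distance check, assuming a 2D position innovation and its covariance (Eigen is used for the matrix algebra; the threshold would be class-dependent):

#include <Eigen/Dense>\n\n// Reject an observation-tracker pair when the squared Mahalanobis distance\n// of the position innovation exceeds the gate threshold.\nbool passesMahalanobisGate(\n  const Eigen::Vector2d & observed_pos, const Eigen::Vector2d & predicted_pos,\n  const Eigen::Matrix2d & innovation_covariance, double gate_threshold)\n{\n  const Eigen::Vector2d innovation = observed_pos - predicted_pos;\n  const double squared_distance = innovation.dot(innovation_covariance.inverse() * innovation);\n  return squared_distance < gate_threshold;\n}\n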

"},{"location":"perception/multi_object_tracker/#ekf-tracker","title":"EKF Tracker","text":"

Models for pedestrians, bicycles (motorcycles), cars, and unknown objects are available. The pedestrian and bicycle trackers run at the same time as their respective EKF models in order to enable transitions between pedestrian and bicycle tracking. Big vehicles such as trucks and buses are difficult to distinguish from passenger cars and are not stable on their own, so separate models are prepared for passenger cars and big vehicles, and these models run at the same time as their respective EKF models to ensure stability.

"},{"location":"perception/multi_object_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/multi_object_tracker/#input","title":"Input","text":"Name Type Description ~/input autoware_auto_perception_msgs::msg::DetectedObjects obstacles"},{"location":"perception/multi_object_tracker/#output","title":"Output","text":"Name Type Description ~/output autoware_auto_perception_msgs::msg::TrackedObjects modified obstacles"},{"location":"perception/multi_object_tracker/#parameters","title":"Parameters","text":""},{"location":"perception/multi_object_tracker/#core-parameters","title":"Core Parameters","text":"Name Type Description can_assign_matrix double Assignment table for data association max_dist_matrix double Maximum distance table for data association max_area_matrix double Maximum area table for data association min_area_matrix double Minimum area table for data association max_rad_matrix double Maximum angle table for data association world_frame_id double tracking frame enable_delay_compensation bool Estimate obstacles at current time considering detection delay publish_rate double if enable_delay_compensation is true, how many hertz to output"},{"location":"perception/multi_object_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/multi_object_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/multi_object_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/multi_object_tracker/#evaluation-of-mussp","title":"Evaluation of muSSP","text":"

According to our evaluation, muSSP is faster than normal SSP when the matrix size is more than 100.

Execution time for varying matrix size at 95% sparsity. In real data, the sparsity was often around 95%.

Execution time for varying the sparsity with matrix size 100.

"},{"location":"perception/multi_object_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

This package makes use of external code.

Name License Original Repository muSSP Apache-2.0 https://github.com/yu-lab-vt/muSSP

[1] C. Wang, Y. Wang, Y. Wang, C.-t. Wu, and G. Yu, \u201cmuSSP: Efficient Min-cost Flow Algorithm for Multi-object Tracking,\u201d NeurIPS, 2019

"},{"location":"perception/multi_object_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/object_merger/","title":"object_merger","text":""},{"location":"perception/object_merger/#purpose","title":"Purpose","text":"

object_merger is a package for merging detected objects from two methods by data association.

"},{"location":"perception/object_merger/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The successive shortest path algorithm is used to solve the data association problem (the minimum-cost flow problem). The cost is calculated from the distance between two objects, and gate functions are applied to reset the cost, such as the maximum distance, the maximum area, and the minimum area.
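
A sketch of how one gated cost entry could be formed (simplified 2D objects and a hypothetical INVALID_COST sentinel that the solver treats as a forbidden assignment):

#include <cmath>\n#include <limits>\n\nstruct SimpleObject\n{\n  double x, y, area;\n};\n\nconstexpr double INVALID_COST = std::numeric_limits<double>::max();\n\n// Distance-based cost with gates: pairs violating the maximum distance or the\n// area bounds get an invalid cost and are excluded from the matching.\ndouble calcAssociationCost(\n  const SimpleObject & a, const SimpleObject & b,\n  double max_dist, double max_area, double min_area)\n{\n  const double dist = std::hypot(a.x - b.x, a.y - b.y);\n  if (dist > max_dist) return INVALID_COST;\n  if (b.area > max_area || b.area < min_area) return INVALID_COST;\n  return dist;  // closer objects are cheaper to associate\n}\n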

"},{"location":"perception/object_merger/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/object_merger/#input","title":"Input","text":"Name Type Description input/object0 autoware_auto_perception_msgs::msg::DetectedObjects detection objects input/object1 autoware_auto_perception_msgs::msg::DetectedObjects detection objects"},{"location":"perception/object_merger/#output","title":"Output","text":"Name Type Description output/object autoware_auto_perception_msgs::msg::DetectedObjects modified Objects"},{"location":"perception/object_merger/#parameters","title":"Parameters","text":"Name Type Description can_assign_matrix double Assignment table for data association max_dist_matrix double Maximum distance table for data association max_area_matrix double Maximum area table for data association min_area_matrix double Minimum area table for data association max_rad_matrix double Maximum angle table for data association base_link_frame_id double association frame distance_threshold_list std::vector<double> Distance threshold for each class used in judging overlap. The class order depends on ObjectClassification. generalized_iou_threshold double Generalized IoU threshold"},{"location":"perception/object_merger/#tips","title":"Tips","text":""},{"location":"perception/object_merger/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/object_merger/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/object_merger/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/object_merger/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/object_merger/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":"

The data association algorithm was originally the same as that of multi_object_tracker, but the algorithm of multi_object_tracker has since been updated.

"},{"location":"perception/object_range_splitter/","title":"object_range_splitter","text":""},{"location":"perception/object_range_splitter/#purpose","title":"Purpose","text":"

object_range_splitter is a package to divide detected objects into two messages by the distance from the origin.
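
A minimal sketch of the split rule, with a hypothetical object type in place of the DetectedObjects message:

#include <cmath>\n#include <utility>\n#include <vector>\n\nstruct SimpleObject\n{\n  double x, y;\n};\n\n// Divide objects into (long_range, short_range) by their distance from the origin.\nstd::pair<std::vector<SimpleObject>, std::vector<SimpleObject>> splitByRange(\n  const std::vector<SimpleObject> & objects, double split_range)\n{\n  std::vector<SimpleObject> long_range, short_range;\n  for (const auto & obj : objects) {\n    (std::hypot(obj.x, obj.y) > split_range ? long_range : short_range).push_back(obj);\n  }\n  return {long_range, short_range};\n}\n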

"},{"location":"perception/object_range_splitter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/object_range_splitter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/object_range_splitter/#input","title":"Input","text":"Name Type Description input/object autoware_auto_perception_msgs::msg::DetectedObjects detected objects"},{"location":"perception/object_range_splitter/#output","title":"Output","text":"Name Type Description output/long_range_object autoware_auto_perception_msgs::msg::DetectedObjects long range detected objects output/short_range_object autoware_auto_perception_msgs::msg::DetectedObjects short range detected objects"},{"location":"perception/object_range_splitter/#parameters","title":"Parameters","text":"Name Type Description split_range float the distance boundary to divide detected objects [m]"},{"location":"perception/object_range_splitter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/object_range_splitter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/object_range_splitter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/object_range_splitter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/object_range_splitter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/","title":"occupancy_grid_map_outlier_filter","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#purpose","title":"Purpose","text":"

This node is an outlier filter based on an occupancy grid map. Depending on the implementation of the occupancy grid map, it can be regarded as a time-series outlier filter, since the occupancy grid map expresses occupancy probabilities as a time series.

"},{"location":"perception/occupancy_grid_map_outlier_filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"
  1. Use the occupancy grid map to separate point clouds into those with low occupancy probability and those with high occupancy probability.

  2. The point clouds that belong to the low occupancy probability are not necessarily outliers. In particular, the top of a moving object tends to have a low occupancy probability. Therefore, if use_radius_search_2d_filter is true, a radius search 2D outlier filter is applied to the point cloud that is determined to have a low occupancy probability (see the sketch after this list).

    1. For each low occupancy probability point, determine whether it is an outlier from the radius (radius_search_2d_filter/search_radius) and the number of neighboring points. In this case, the point cloud to be referenced is not only the low occupancy probability points, but all points, including high occupancy probability points.
    2. The required number of points is scaled by radius_search_2d_filter/min_points_and_distance_ratio and the distance from the base link, but it is clamped between a minimum and a maximum number of points.
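
A sketch of this density check, using a 2D k-d tree built from all points (PCL is assumed for the neighbor search; the exact scaling of the required neighbor count is a simplification):

#include <algorithm>\n#include <cmath>\n#include <vector>\n\n#include <pcl/kdtree/kdtree_flann.h>\n#include <pcl/point_types.h>\n\n// Decide whether a low-occupancy point is an outlier: count neighbors within\n// search_radius in the tree built from ALL points (low and high occupancy),\n// requiring more neighbors for points close to the base link.\nbool isOutlier(\n  const pcl::KdTreeFLANN<pcl::PointXY> & kdtree_of_all_points, const pcl::PointXY & point,\n  double search_radius, double min_points_and_distance_ratio, int min_points, int max_points)\n{\n  const double distance_from_base_link = std::hypot(point.x, point.y);\n  const int required_neighbors = std::clamp(\n    static_cast<int>(min_points_and_distance_ratio / distance_from_base_link),\n    min_points, max_points);\n  std::vector<int> indices;\n  std::vector<float> squared_distances;\n  const int found =\n    kdtree_of_all_points.radiusSearch(point, search_radius, indices, squared_distances);\n  return found < required_neighbors;\n}\n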

The following video is a sample. Yellow points have high occupancy probability, green points have low occupancy probability but are not outliers, and red points are outliers. At around 0:15 and 1:16 in the first video, a bird crosses the road and is classified as an outlier.

"},{"location":"perception/occupancy_grid_map_outlier_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#input","title":"Input","text":"Name Type Description ~/input/pointcloud sensor_msgs/PointCloud2 Obstacle point cloud with ground removed. ~/input/occupancy_grid_map nav_msgs/OccupancyGrid A map in which the probability of the presence of an obstacle is occupancy probability map"},{"location":"perception/occupancy_grid_map_outlier_filter/#output","title":"Output","text":"Name Type Description ~/output/pointcloud sensor_msgs/PointCloud2 Point cloud with outliers removed. trajectory ~/output/debug/outlier/pointcloud sensor_msgs/PointCloud2 Point clouds removed as outliers. ~/output/debug/low_confidence/pointcloud sensor_msgs/PointCloud2 Point clouds that had a low probability of occupancy in the occupancy grid map. However, it is not considered as an outlier. ~/output/debug/high_confidence/pointcloud sensor_msgs/PointCloud2 Point clouds that had a high probability of occupancy in the occupancy grid map. trajectory"},{"location":"perception/occupancy_grid_map_outlier_filter/#parameters","title":"Parameters","text":"Name Type Description map_frame string map frame id base_link_frame string base link frame id cost_threshold int Cost threshold of occupancy grid map (0~100). 100 means 100% probability that there is an obstacle, close to 50 means that it is indistinguishable whether it is an obstacle or free space, 0 means that there is no obstacle. enable_debugger bool Whether to output the point cloud for debugging. use_radius_search_2d_filter bool Whether or not to apply density-based outlier filters to objects that are judged to have low probability of occupancy on the occupancy grid map. radius_search_2d_filter/search_radius float Radius when calculating the density radius_search_2d_filter/min_points_and_distance_ratio float Threshold value of the number of point clouds per radius when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink. radius_search_2d_filter/min_points int Minimum number of point clouds per radius radius_search_2d_filter/max_points int Maximum number of point clouds per radius"},{"location":"perception/occupancy_grid_map_outlier_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/probabilistic_occupancy_grid_map/","title":"probabilistic_occupancy_grid_map","text":""},{"location":"perception/probabilistic_occupancy_grid_map/#purpose","title":"Purpose","text":"

This package outputs the probability of an obstacle being present as an occupancy grid map.

"},{"location":"perception/probabilistic_occupancy_grid_map/#settings","title":"Settings","text":"

The occupancy grid map is generated on map_frame, and the grid orientation is fixed.

You may need to choose output_frame, which is the grid map origin. The default is base_link, but your main LiDAR sensor frame (e.g. velodyne_top in sample_vehicle) would be a better choice.

"},{"location":"perception/probabilistic_occupancy_grid_map/#each-config-paramters","title":"Each config paramters","text":"

Config parameters are managed in config/*.yaml, and their outline is shown here.

Ros param name Default value map_frame \"map\" base_link_frame \"base_link\" output_frame \"base_link\" use_height_filter true enable_single_frame_mode false map_length 100.0 [m] map_width 100.0 [m] map_resolution 0.5 [m] input_obstacle_pointcloud true input_obstacle_and_raw_pointcloud true Ros param name Default value map_length 100 [m] map_resolution 0.5 [m] use_height_filter true enable_single_frame_mode false map_frame \"map\" base_link_frame \"base_link\" output_frame \"base_link\""},{"location":"perception/probabilistic_occupancy_grid_map/#referencesexternal-links","title":"References/External links","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/","title":"laserscan based occupancy grid map","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The basic idea is to take a 2D laserscan and ray trace it to create a time-series processed occupancy grid map.

  1. The node takes a laserscan and makes an occupancy grid map from one frame. Ray tracing is done by Bresenham's line algorithm.
  2. Optionally, obstacle point clouds and raw point clouds can be received and reflected in the occupancy grid map. The reason is that the laserscan only uses the most foreground point in the polar coordinate system, so it throws away a lot of information; as a result, the occupancy grid map consists almost entirely of UNKNOWN cells. Therefore, the obstacle point cloud and the raw point cloud are used to reflect what is judged to be the ground and what is judged to be an obstacle in the occupancy grid map. The black and red dots represent raw point clouds, and the red dots represent obstacle point clouds. In other words, the black points are determined to be the ground, and the red points are determined to be obstacles. The gray cells are represented as UNKNOWN cells.

  3. Using the previous occupancy grid map, update the existence probability using a binary Bayesian filter (1). Also, the unobserved cells are time-decayed like the system noise of the Kalman filter (2).

    \\hat{P_{o}} = \\frac{(P_{o} * P_{z})}{(P_{o} * P_{z} + (1 - P_{o}) * \\bar{P_{z}})} \\tag{1}\n
    \\hat{P_{o}} = \\frac{(P_{o} + 0.5 * \\frac{1}{ratio})}{(\\frac{1}{ratio} + 1)} \\tag{2}\n
"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#input","title":"Input","text":"Name Type Description ~/input/laserscan sensor_msgs::LaserScan laserscan ~/input/obstacle_pointcloud sensor_msgs::PointCloud2 obstacle pointcloud ~/input/raw_pointcloud sensor_msgs::PointCloud2 The overall point cloud used to input the obstacle point cloud"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#output","title":"Output","text":"Name Type Description ~/output/occupancy_grid_map nav_msgs::OccupancyGrid occupancy grid map"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#parameters","title":"Parameters","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#node-parameters","title":"Node Parameters","text":"Name Type Description map_frame string map frame base_link_frame string base_link frame input_obstacle_pointcloud bool whether to use the optional obstacle point cloud? If this is true, ~/input/obstacle_pointcloud topics will be received. input_obstacle_and_raw_pointcloud bool whether to use the optional obstacle and raw point cloud? If this is true, ~/input/obstacle_pointcloud and ~/input/raw_pointcloud topics will be received. use_height_filter bool whether to height filter for ~/input/obstacle_pointcloud and ~/input/raw_pointcloud? By default, the height is set to -1~2m. map_length double The length of the map. -100 if it is 50~50[m] map_resolution double The map cell resolution [m]"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

In several places, we have modified external code written under the BSD-3 license.

"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

Bresenham's line algorithm

"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/","title":"pointcloud based occupancy grid map","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#1st-step","title":"1st step","text":"

First of all, the obstacle and raw pointclouds given as input are transformed into a polar coordinate system and divided into bins per angle_increment. At this time, each point belonging to a bin is stored as range data. In addition, the x,y position in the map coordinate is also stored for ray tracing in map coordinates. Each bin thus contains, for each point, its range and its position in map coordinates.

The following figure shows each of the bins from side view.
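
A sketch of the binning step, assuming points are already expressed relative to the scan origin and that the range has been precomputed (the types here are illustrative):

#include <algorithm>\n#include <cmath>\n#include <cstddef>\n#include <vector>\n\nstruct BinPoint\n{\n  double range;          // distance from the scan origin\n  double map_x, map_y;   // position in map coordinates, kept for ray tracing\n};\n\n// Assign each point to an angle bin of width angle_increment.\nstd::vector<std::vector<BinPoint>> binPointsByAngle(\n  const std::vector<BinPoint> & points, double angle_increment)\n{\n  const std::size_t num_bins = static_cast<std::size_t>(2.0 * M_PI / angle_increment);\n  std::vector<std::vector<BinPoint>> bins(num_bins);\n  for (const auto & p : points) {\n    const double angle = std::atan2(p.map_y, p.map_x) + M_PI;  // shift to [0, 2*pi)\n    const std::size_t index =\n      std::min(static_cast<std::size_t>(angle / angle_increment), num_bins - 1);\n    bins[index].push_back(p);\n  }\n  return bins;\n}\n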

"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#2nd-step","title":"2nd step","text":"

The ray trace is performed in three steps for each cell. The ray trace is done by Bresenham's line algorithm.

  1. Initialize freespace to the farthest point of each bin.

  2. Fill in the unknown cells. Assume that the unknown cells are behind obstacles, since the area behind an obstacle is a blind spot. Therefore, the cells that are more than a distance margin behind each obstacle point are assumed to be unknown.

    There are three reasons for setting a distance margin.

    • It is unlikely that a point on the ground will be immediately behind an obstacle.
    • The obstacle point cloud is processed and may not match the raw pointcloud.
    • The input may be inaccurate and obstacle points may not be determined as obstacles.
  3. Fill in the occupied cells. The cell where an obstacle point is located is filled with occupied. In addition, if the distance between obstacle points is less than or equal to the distance margin, the cells between them are also filled with occupied, because the input may be inaccurate and obstacle points may not be determined to be obstacles.
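
A sketch of the integer Bresenham traversal underlying these steps; the node then marks the visited cells as free, unknown, or occupied:

#include <cstdlib>\n#include <utility>\n#include <vector>\n\n// Bresenham's line algorithm: enumerate the grid cells on the segment from\n// (x0, y0) to (x1, y1).\nstd::vector<std::pair<int, int>> traceLine(int x0, int y0, int x1, int y1)\n{\n  std::vector<std::pair<int, int>> cells;\n  const int dx = std::abs(x1 - x0), dy = -std::abs(y1 - y0);\n  const int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;\n  int err = dx + dy;\n  while (true) {\n    cells.emplace_back(x0, y0);\n    if (x0 == x1 && y0 == y1) break;\n    const int e2 = 2 * err;\n    if (e2 >= dy) { err += dy; x0 += sx; }\n    if (e2 <= dx) { err += dx; y0 += sy; }\n  }\n  return cells;\n}\n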

"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#3rd-step","title":"3rd step","text":"

Using the previous occupancy grid map, update the existence probability using a binary Bayesian filter (1). Also, the unobserved cells are time-decayed like the system noise of the Kalman filter (2).

    \\hat{P_{o}} = \\frac{(P_{o} * P_{z})}{(P_{o} * P_{z} + (1 - P_{o}) * \\bar{P_{z}})} \\tag{1}\n
    \\hat{P_{o}} = \\frac{(P_{o} + 0.5 * \\frac{1}{ratio})}{(\\frac{1}{ratio} + 1)} \\tag{2}\n
"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#input","title":"Input","text":"Name Type Description ~/input/obstacle_pointcloud sensor_msgs::PointCloud2 obstacle pointcloud ~/input/raw_pointcloud sensor_msgs::PointCloud2 The overall point cloud used to input the obstacle point cloud"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#output","title":"Output","text":"Name Type Description ~/output/occupancy_grid_map nav_msgs::OccupancyGrid occupancy grid map"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#parameters","title":"Parameters","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#node-parameters","title":"Node Parameters","text":"Name Type Description map_frame string map frame base_link_frame string base_link frame use_height_filter bool whether to height filter for ~/input/obstacle_pointcloud and ~/input/raw_pointcloud? By default, the height is set to -1~2m. map_length double The length of the map. -100 if it is 50~50[m] map_resolution double The map cell resolution [m]"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

In several places, we have modified external code that is licensed under the BSD-3-Clause license.

"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/radar_fusion_to_detected_object/","title":"radar_fusion_to_detected_object","text":"

This package contains a sensor fusion module for radar-detected objects and 3D detected objects. The fusion node can:

"},{"location":"perception/radar_fusion_to_detected_object/#core-algorithm","title":"Core algorithm","text":"

The document describing the core algorithm is here.

"},{"location":"perception/radar_fusion_to_detected_object/#parameters-for-sensor-fusion","title":"Parameters for sensor fusion","text":"Name Type Description Default value bounding_box_margin double The distance to extend the 2D bird's-eye view Bounding Box on each side. This distance is used as a threshold to find radar centroids falling inside the extended box. [m] 2.0 split_threshold_velocity double The object's velocity threshold to decide to split for two objects from radar information (currently not implemented) [m/s] 5.0 threshold_yaw_diff double The yaw orientation threshold. If $ \\vert \\theta {ob} - \\theta {ra} \\vert < thresholdyaw_diff $ attached to radar information include estimated velocity, where $ \\theta $ is yaw angle from 3d detected object, $ \\theta_ {ra} $ is yaw angle from radar object. [rad] 0.35"},{"location":"perception/radar_fusion_to_detected_object/#weight-parameters-for-velocity-estimation","title":"Weight parameters for velocity estimation","text":"

To tune these weight parameters, please see the document for details.

Name Type Description Default value velocity_weight_average double The twist coefficient of the average twist of radar data in velocity estimation. 0.0 velocity_weight_median double The twist coefficient of the median twist of radar data in velocity estimation. 0.0 velocity_weight_min_distance double The twist coefficient of the radar data nearest to the center of the bounding box in velocity estimation. 1.0 velocity_weight_target_value_average double The twist coefficient of the target value weighted average in velocity estimation. The target value is amplitude if using a radar pointcloud, and probability if using radar objects. 0.0 velocity_weight_target_value_top double The twist coefficient of the top target value radar data in velocity estimation. The target value is amplitude if using a radar pointcloud, and probability if using radar objects. 0.0"},{"location":"perception/radar_fusion_to_detected_object/#parameters-for-fixed-object-information","title":"Parameters for fixed object information","text":"Name Type Description Default value convert_doppler_to_twist bool Convert Doppler velocity to twist using the yaw information of a detected object. false threshold_probability float If the probability of an output object is lower than this parameter, and the output object does not have radar points/objects, then delete the object. 0.4 compensate_probability bool If this parameter is true, raise the probability of objects up to the threshold probability. false"},{"location":"perception/radar_fusion_to_detected_object/#radar_object_fusion_to_detected_object","title":"radar_object_fusion_to_detected_object","text":"

Sensor fusion with radar objects and a detected object.

"},{"location":"perception/radar_fusion_to_detected_object/#how-to-launch","title":"How to launch","text":"
ros2 launch radar_fusion_to_detected_object radar_object_to_detected_object.launch.xml
"},{"location":"perception/radar_fusion_to_detected_object/#input","title":"Input","text":"Name Type Description ~/input/objects autoware_auto_perception_msgs/msg/DetectedObject.msg 3D detected objects. ~/input/radar_objects autoware_auto_perception_msgs/msg/TrackedObjects.msg Radar objects. Note that frame_id need to be same as ~/input/objects"},{"location":"perception/radar_fusion_to_detected_object/#output","title":"Output","text":"Name Type Description ~/output/objects autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected object with twist. ~/debug/low_confidence_objects autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected object that doesn't output as ~/output/objects because of low confidence"},{"location":"perception/radar_fusion_to_detected_object/#parameters","title":"Parameters","text":"Name Type Description Default value update_rate_hz double The update rate [hz]. 20.0"},{"location":"perception/radar_fusion_to_detected_object/#radar_scan_fusion_to_detected_object-tbd","title":"radar_scan_fusion_to_detected_object (TBD)","text":"

TBD

"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/","title":"Algorithm","text":""},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#common-algorithm","title":"Common Algorithm","text":""},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#1-link-between-3d-bounding-box-and-radar-data","title":"1. Link between 3d bounding box and radar data","text":"

Choose radar pointcloud/objects within the 3D bounding box from lidar-based detection, with a margin space, from a bird's-eye view.

"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#2-feature-support-split-the-object-going-in-a-different-direction","title":"2. [Feature support] Split the object going in a different direction","text":""},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#3-estimate-twist-of-object","title":"3. Estimate twist of object","text":"

Estimate the twist from the chosen radar pointcloud/objects using their twist and target value (the target value is amplitude when using a radar pointcloud, and probability when using radar objects). First, the estimation function calculates a list of candidate twists from the radar data (such as the average, the median, the value nearest to the bounding box center, and target-value-weighted statistics, corresponding to the weight parameters above).

Second, the estimation function calculates the weighted average of this list. Third, the estimated twist is attached to the object.
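As a minimal scalar sketch of this weighted combination (using plain speeds instead of full twist messages; the estimate_twist helper and its inputs are illustrative assumptions, with the weight keys mirroring the parameter names above):

import statistics

def estimate_twist(speeds, target_values, distances, weights):
    """Combine candidate speeds with the configured weights."""
    candidates = {
        "velocity_weight_average": statistics.fmean(speeds),
        "velocity_weight_median": statistics.median(speeds),
        "velocity_weight_min_distance": speeds[distances.index(min(distances))],
        "velocity_weight_target_value_average":
            sum(v * t for v, t in zip(speeds, target_values)) / sum(target_values),
        "velocity_weight_target_value_top": speeds[target_values.index(max(target_values))],
    }
    # assumes weights contains all keys above and at least one is non-zero
    total = sum(weights.values())
    return sum(weights[key] * value for key, value in candidates.items()) / total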

"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#4-feature-support-option-convert-doppler-velocity-to-twist","title":"4. [Feature support] [Option] Convert doppler velocity to twist","text":"

If the twist information from the radar is a Doppler velocity, it is converted into a twist using the yaw angle of the DetectedObject. Because a radar pointcloud only has Doppler velocity information, radar pointcloud fusion should use this feature. On the other hand, because radar objects have twist information, radar object fusion should not use this feature.
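A minimal sketch of such a conversion, assuming the object moves along its yaw so that the Doppler (radial) velocity is the projection of the object speed onto the radar ray (the function and its guard threshold are illustrative, not the node's actual API):

import math

def doppler_to_speed(doppler_velocity, object_yaw, ray_yaw, max_angle=math.radians(80.0)):
    """Project a radial Doppler velocity onto the object's heading."""
    diff = object_yaw - ray_yaw
    # near-perpendicular rays make 1 / cos(diff) blow up, so skip them
    if abs(math.cos(diff)) < math.cos(max_angle):
        return None
    return doppler_velocity / math.cos(diff)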

"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#5-delete-objects-with-low-probability","title":"5. Delete objects with low probability","text":""},{"location":"perception/radar_tracks_msgs_converter/","title":"radar_tracks_msgs_converter","text":"

This package converts radar_msgs/msg/RadarTracks into autoware_auto_perception_msgs/msg/TrackedObject.

"},{"location":"perception/radar_tracks_msgs_converter/#design","title":"Design","text":""},{"location":"perception/radar_tracks_msgs_converter/#input-output","title":"Input / Output","text":""},{"location":"perception/radar_tracks_msgs_converter/#parameters","title":"Parameters","text":""},{"location":"perception/radar_tracks_msgs_converter/#note","title":"Note","text":"

This package converts the label from radar_msgs/msg/RadarTrack.msg to the Autoware label. Label IDs are defined as below.

RadarTrack Autoware UNKNOWN 32000 0 CAR 32001 1 TRUCK 32002 2 BUS 32003 3 TRAILER 32004 4 MOTORCYCLE 32005 5 BICYCLE 32006 6 PEDESTRIAN 32007 7 "},{"location":"perception/shape_estimation/","title":"shape_estimation","text":""},{"location":"perception/shape_estimation/#purpose","title":"Purpose","text":"

This node calculates a refined object shape (bounding box, cylinder, convex hull) in which a pointcloud cluster fits according to a label.

"},{"location":"perception/shape_estimation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/shape_estimation/#fitting-algorithms","title":"Fitting algorithms","text":" "},{"location":"perception/shape_estimation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/shape_estimation/#input","title":"Input","text":"Name Type Description input tier4_perception_msgs::msg::DetectedObjectsWithFeature detected objects with labeled cluster"},{"location":"perception/shape_estimation/#output","title":"Output","text":"Name Type Description output/objects autoware_auto_perception_msgs::msg::DetectedObjects detected objects with refined shape"},{"location":"perception/shape_estimation/#parameters","title":"Parameters","text":"Name Type Default Value Description use_corrector bool true The flag to apply rule-based filter use_filter bool true The flag to apply rule-based corrector use_vehicle_reference_yaw bool true The flag to use vehicle reference yaw for corrector"},{"location":"perception/shape_estimation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD

"},{"location":"perception/shape_estimation/#referencesexternal-links","title":"References/External links","text":"

L-shape fitting implementation of the paper:

@conference{Zhang-2017-26536,
  author = {Xiao Zhang and Wenda Xu and Chiyu Dong and John M. Dolan},
  title = {Efficient L-Shape Fitting for Vehicle Detection Using Laser Scanners},
  booktitle = {2017 IEEE Intelligent Vehicles Symposium},
  year = {2017},
  month = {June},
  keywords = {autonomous driving, laser scanner, perception, segmentation},
}
"},{"location":"perception/tensorrt_yolo/","title":"tensorrt_yolo","text":""},{"location":"perception/tensorrt_yolo/#purpose","title":"Purpose","text":"

This package detects 2D bounding boxes for target objects, e.g., cars, trucks, bicycles, and pedestrians, on an image based on the YOLO (You Only Look Once) model.

"},{"location":"perception/tensorrt_yolo/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/tensorrt_yolo/#cite","title":"Cite","text":"

yolov3

Redmon, J., & Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.

yolov4

Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.

yolov5

Jocher, G., et al. (2021). ultralytics/yolov5: v6.0 - YOLOv5n 'Nano' models, Roboflow integration, TensorFlow export, OpenCV DNN support (v6.0). Zenodo. https://doi.org/10.5281/zenodo.5563715

"},{"location":"perception/tensorrt_yolo/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/tensorrt_yolo/#input","title":"Input","text":"Name Type Description in/image sensor_msgs/Image The input image"},{"location":"perception/tensorrt_yolo/#output","title":"Output","text":"Name Type Description out/objects tier4_perception_msgs/DetectedObjectsWithFeature The detected objects with 2D bounding boxes out/image sensor_msgs/Image The image with 2D bounding boxes for visualization"},{"location":"perception/tensorrt_yolo/#parameters","title":"Parameters","text":""},{"location":"perception/tensorrt_yolo/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description anchors double array [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0] The anchors to create bounding box candidates scale_x_y double array [1.0, 1.0, 1.0] The scale parameter to eliminate grid sensitivity score_thresh double 0.1 If the objectness score is less than this value, the object is ignored in yolo layer. iou_thresh double 0.45 The iou threshold for NMS method detections_per_im int 100 The maximum detection number for one frame use_darknet_layer bool true The flag to use yolo layer in darknet ignore_thresh double 0.5 If the output score is less than this value, ths object is ignored."},{"location":"perception/tensorrt_yolo/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description onnx_file string \"\" The onnx file name for yolo model engine_file string \"\" The tensorrt engine file name for yolo model label_file string \"\" The label file with label names for detected objects written on it calib_image_directory string \"\" The directory name including calibration images for int8 inference calib_cache_file string \"\" The calibration cache file for int8 inference mode string \"FP32\" The inference mode: \"FP32\", \"FP16\", \"INT8\" gpu_id int 0 GPU device ID that runs the model"},{"location":"perception/tensorrt_yolo/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This package includes multiple licenses.

"},{"location":"perception/tensorrt_yolo/#onnx-model","title":"Onnx model","text":"

All YOLO ONNX models are converted from the officially trained model. If you need information about training datasets and conditions, please refer to the official repositories.

All models are downloaded automatically when building. When launching the node with a model for the first time, the model is automatically converted to TensorRT, although this may take some time.

"},{"location":"perception/tensorrt_yolo/#yolov3","title":"YOLOv3","text":"

YOLOv3: Converted from darknet weight file and conf file.

"},{"location":"perception/tensorrt_yolo/#yolov4","title":"YOLOv4","text":"

YOLOv4: Converted from darknet weight file and conf file.

YOLOv4-tiny: Converted from darknet weight file and conf file.

"},{"location":"perception/tensorrt_yolo/#yolov5","title":"YOLOv5","text":"

Refer to this guide

"},{"location":"perception/tensorrt_yolo/#limitations","title":"Limitations","text":""},{"location":"perception/tensorrt_yolo/#reference-repositories","title":"Reference repositories","text":" "},{"location":"perception/tensorrt_yolox/","title":"tensorrt_yolox","text":""},{"location":"perception/tensorrt_yolox/#purpose","title":"Purpose","text":"

This package detects target objects, e.g., cars, trucks, bicycles, and pedestrians, on an image based on the YOLOX model.

"},{"location":"perception/tensorrt_yolox/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/tensorrt_yolox/#cite","title":"Cite","text":"

Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, \"YOLOX: Exceeding YOLO Series in 2021\", arXiv preprint arXiv:2107.08430, 2021 [ref]

"},{"location":"perception/tensorrt_yolox/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/tensorrt_yolox/#input","title":"Input","text":"Name Type Description in/image sensor_msgs/Image The input image"},{"location":"perception/tensorrt_yolox/#output","title":"Output","text":"Name Type Description out/objects tier4_perception_msgs/DetectedObjectsWithFeature The detected objects with 2D bounding boxes out/image sensor_msgs/Image The image with 2D bounding boxes for visualization"},{"location":"perception/tensorrt_yolox/#parameters","title":"Parameters","text":""},{"location":"perception/tensorrt_yolox/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold float 0.3 If the objectness score is less than this value, the object is ignored in yolox layer. nms_threshold float 0.7 The IoU threshold for NMS method

NOTE: These two parameters are only valid for \"plain\" model (described later).

"},{"location":"perception/tensorrt_yolox/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description model_path string \"\" The onnx file name for yolox model label_path string \"\" The label file with label names for detected objects written on it trt_precision string \"fp32\" The inference mode: \"fp32\", \"fp16\", \"int8\" build_only bool false shutdown node after TensorRT engine file is built"},{"location":"perception/tensorrt_yolox/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The label contained in detected 2D bounding boxes (i.e., out/objects) will be one of the following:

If other labels (case insensitive) are contained in the file specified via the label_file parameter, they are labeled as UNKNOWN, while the detected rectangles are still drawn in the visualization result (out/image).

"},{"location":"perception/tensorrt_yolox/#onnx-model","title":"Onnx model","text":"

A sample model (named yolox-tiny.onnx) is downloaded automatically during the build process. To accelerate non-maximum suppression (NMS), which is a common post-processing step after object detection inference, an EfficientNMS_TRT module is attached after the original YOLOX (tiny) network. The EfficientNMS_TRT module contains fixed values for score_threshold and nms_threshold, hence these parameters are ignored when users specify ONNX models that include this module.

This package accepts both EfficientNMS_TRT attached ONNX models and models published from the official YOLOX repository (we refer to the latter as \"plain\" models).

All models are automatically converted to TensorRT format. The converted files are saved in the same directory as the specified ONNX files, with a .engine filename extension, and are reused from the next run. The conversion process may take a while (typically a few minutes), and the inference process is blocked until the conversion completes, so it will take some time until detection results are published on the first run.

"},{"location":"perception/tensorrt_yolox/#package-acceptable-model-generation","title":"Package acceptable model generation","text":"

To convert a user's own model saved in PyTorch's pth format into ONNX, users can exploit the converter offered by the official repository. For convenience, only the procedures are described below. Please refer to the official documentation for more detail.

"},{"location":"perception/tensorrt_yolox/#for-plain-models","title":"For plain models","text":"
  1. Install dependency

    git clone git@github.com:Megvii-BaseDetection/YOLOX.git
    cd YOLOX
    python3 setup.py develop --user
  2. Convert pth into ONNX

    python3 tools/export_onnx.py \
      --output-name YOUR_YOLOX.onnx \
      -f YOUR_YOLOX.py \
      -c YOUR_YOLOX.pth
"},{"location":"perception/tensorrt_yolox/#for-efficientnms_trt-embedded-models","title":"For EfficientNMS_TRT embedded models","text":"
  1. Install dependency

    git clone git@github.com:Megvii-BaseDetection/YOLOX.git
    cd YOLOX
    python3 setup.py develop --user
    pip3 install git+ssh://git@github.com/wep21/yolox_onnx_modifier.git --user
  2. Convert pth into ONNX

    python3 tools/export_onnx.py \
      --output-name YOUR_YOLOX.onnx \
      -f YOUR_YOLOX.py \
      -c YOUR_YOLOX.pth \
      --decode_in_inference
  3. Embed EfficientNMS_TRT to the end of YOLOX

    yolox_onnx_modifier YOUR_YOLOX.onnx -o YOUR_YOLOX_WITH_NMS.onnx
"},{"location":"perception/tensorrt_yolox/#label-file","title":"Label file","text":"

A sample label file (named label.txt) is also downloaded automatically during the build process. (NOTE: This file is incompatible with models that output labels for the COCO dataset, e.g., models from the official YOLOX repository.)

This file represents the correspondence between the class index (an integer output from the YOLOX network) and the class label (a human-readable string). This package maps class IDs (starting from 0) to labels according to the order in this file.
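A minimal sketch of this order-based mapping (assuming a plain one-label-per-line file; load_labels is an illustrative helper, not part of the package API):

def load_labels(label_path):
    """Map the class index output by the network to a label string by line order."""
    with open(label_path) as f:
        names = [line.strip() for line in f if line.strip()]
    return dict(enumerate(names))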

"},{"location":"perception/tensorrt_yolox/#reference-repositories","title":"Reference repositories","text":""},{"location":"perception/traffic_light_classifier/","title":"traffic_light_classifier","text":""},{"location":"perception/traffic_light_classifier/#purpose","title":"Purpose","text":"

traffic_light_classifier is a package for classifying traffic light labels using a cropped image around a traffic light. This package has two classifier models: cnn_classifier and hsv_classifier.

"},{"location":"perception/traffic_light_classifier/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/traffic_light_classifier/#cnn_classifier","title":"cnn_classifier","text":"

Traffic light labels are classified by MobileNetV2. In total, 37,600 TIER IV internal images of Japanese traffic lights (26,300 for training, 6,800 for evaluation, and 4,500 for testing) were used for fine-tuning.

"},{"location":"perception/traffic_light_classifier/#hsv_classifier","title":"hsv_classifier","text":"

Traffic light colors (green, yellow, and red) are classified in the HSV color model.

"},{"location":"perception/traffic_light_classifier/#about-label","title":"About Label","text":"

The message type is designed to comply with the unified road signs proposed at the Vienna Convention. This idea has also been proposed in Autoware.Auto.

There are rules for naming the labels that nodes receive. One traffic light is represented by a comma-separated character string of the form color1-shape1, color2-shape2.

For example, a simple red light with a red cross must be expressed as \"red-circle, red-cross\".
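A minimal sketch of parsing such a string (parse_traffic_light_label is an illustrative helper, not part of the package):

def parse_traffic_light_label(label):
    """Split a label like "red-circle, red-cross" into (color, shape) pairs."""
    elements = []
    for token in label.split(","):
        color, _, shape = token.strip().partition("-")
        elements.append((color, shape))
    return elements

assert parse_traffic_light_label("red-circle, red-cross") == [("red", "circle"), ("red", "cross")]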

These colors and shapes are assigned to the message as follows:

"},{"location":"perception/traffic_light_classifier/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_classifier/#input","title":"Input","text":"Name Type Description ~/input/image sensor_msgs::msg::Image input image ~/input/rois autoware_auto_perception_msgs::msg::TrafficLightRoiArray rois of traffic lights"},{"location":"perception/traffic_light_classifier/#output","title":"Output","text":"Name Type Description ~/output/traffic_signals autoware_auto_perception_msgs::msg::TrafficSignalArray classified signals ~/output/debug/image sensor_msgs::msg::Image image for debugging"},{"location":"perception/traffic_light_classifier/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_classifier/#node-parameters","title":"Node Parameters","text":"Name Type Description classifier_type int if the value is 1, cnn_classifier is used"},{"location":"perception/traffic_light_classifier/#core-parameters","title":"Core Parameters","text":""},{"location":"perception/traffic_light_classifier/#cnn_classifier_1","title":"cnn_classifier","text":"Name Type Description model_file_path str path to the model file label_file_path str path to the label file precision str TensorRT precision, fp16 or int8 input_c str the channel size of an input image input_h str the height of an input image input_w str the width of an input image build_only bool shutdown node after TensorRT engine file is built"},{"location":"perception/traffic_light_classifier/#hsv_classifier_1","title":"hsv_classifier","text":"Name Type Description green_min_h int the minimum hue of green color green_min_s int the minimum saturation of green color green_min_v int the minimum value (brightness) of green color green_max_h int the maximum hue of green color green_max_s int the maximum saturation of green color green_max_v int the maximum value (brightness) of green color yellow_min_h int the minimum hue of yellow color yellow_min_s int the minimum saturation of yellow color yellow_min_v int the minimum value (brightness) of yellow color yellow_max_h int the maximum hue of yellow color yellow_max_s int the maximum saturation of yellow color yellow_max_v int the maximum value (brightness) of yellow color red_min_h int the minimum hue of red color red_min_s int the minimum saturation of red color red_min_v int the minimum value (brightness) of red color red_max_h int the maximum hue of red color red_max_s int the maximum saturation of red color red_max_v int the maximum value (brightness) of red color"},{"location":"perception/traffic_light_classifier/#customization-of-cnn-model","title":"Customization of CNN model","text":"

Currently, in Autoware, MobileNetV2 is used as the CNN classifier by default. The corresponding onnx file is data/traffic_light_classifier_mobilenetv2.onnx. You can also apply other models, such as the following.

In order to train models and export an onnx model, we recommend open-mmlab/mmclassification. Please follow the official document to install and experiment with mmclassification. If you run into trouble, the FAQ page may help.

The following steps are an example of a quick start.

"},{"location":"perception/traffic_light_classifier/#step-0-install-mmcv-and-mim","title":"step 0. Install MMCV and MIM","text":"

NOTE: First of all, install a PyTorch version suitable for your CUDA version (CUDA 11.6 is supported in Autoware).

In order to install an mmcv version suitable for your CUDA version, install it by specifying a URL.

# Install mim
$ pip install -U openmim

# Install mmcv on a machine with CUDA11.6 and PyTorch1.13.0
$ pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu116/torch1.13/index.html
"},{"location":"perception/traffic_light_classifier/#step-1-install-mmclassification","title":"step 1. Install MMClassification","text":"

You can install MMClassification as a Python package or from source.

# As a Python package
$ pip install mmcls

# From source
$ git clone https://github.com/open-mmlab/mmclassification.git
$ cd mmclassification
$ pip install -v -e .
"},{"location":"perception/traffic_light_classifier/#step-2-train-your-model","title":"step 2. Train your model","text":"

Train the model with your experiment configuration file. For details of the config file, see here.

# [] is optional, you can start training from a pre-trained checkpoint
$ mim train mmcls YOUR_CONFIG.py [--resume-from YOUR_CHECKPOINT.pth]
"},{"location":"perception/traffic_light_classifier/#step-3-export-onnx-model","title":"step 3. Export onnx model","text":"

To export the onnx model, use mmclassification/tools/deployment/pytorch2onnx.py or open-mmlab/mmdeploy.

cd ~/mmclassification/tools/deployment
python3 pytorch2onnx.py YOUR_CONFIG.py ...

After obtaining your onnx model, update the parameters defined in the launch file (e.g. model_file_path, label_file_path, input_h, input_w...). Note that we only support labels defined in autoware_auto_perception_msgs::msg::TrafficLight.

"},{"location":"perception/traffic_light_classifier/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/traffic_light_classifier/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/traffic_light_classifier/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/traffic_light_classifier/#referencesexternal-links","title":"References/External links","text":"

[1] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, \"MobileNetV2: Inverted Residuals and Linear Bottlenecks,\" 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.

"},{"location":"perception/traffic_light_classifier/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/traffic_light_map_based_detector/","title":"The traffic_light_map_based_detector Package","text":""},{"location":"perception/traffic_light_map_based_detector/#overview","title":"Overview","text":"

traffic_light_map_based_detector calculates where the traffic lights will appear in the image based on the HD map.

Calibration and vibration errors can be entered as parameters, and the size of the detected RegionOfInterest will change according to the error.

If the node receives route information, it only looks at traffic lights on that route. If the node receives no route information, it looks at traffic lights within a radius of 200 meters whose angle to the camera is less than 40 degrees.
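A minimal sketch of this no-route gate (a 2D simplification; the helper name, the pose representation, and the exact definition of the angle are assumptions for illustration):

import math

def is_candidate(tl_xy, camera_xy, camera_yaw, max_distance=200.0, max_angle=math.radians(40.0)):
    """Keep a traffic light if it is within range and view angle of the camera."""
    dx, dy = tl_xy[0] - camera_xy[0], tl_xy[1] - camera_xy[1]
    if math.hypot(dx, dy) > max_distance:
        return False
    angle = math.atan2(dy, dx) - camera_yaw
    angle = math.atan2(math.sin(angle), math.cos(angle))  # normalize to [-pi, pi]
    return abs(angle) < max_angle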

"},{"location":"perception/traffic_light_map_based_detector/#input-topics","title":"Input topics","text":"Name Type Description ~input/vector_map autoware_auto_mapping_msgs::HADMapBin vector map ~input/camera_info sensor_msgs::CameraInfo target camera parameter ~input/route autoware_planning_msgs::LaneletRoute optional: route"},{"location":"perception/traffic_light_map_based_detector/#output-topics","title":"Output topics","text":"Name Type Description ~output/rois autoware_auto_perception_msgs::TrafficLightRoiArray location of traffic lights in image corresponding to the camera info ~debug/markers visualization_msgs::MarkerArray visualization to debug"},{"location":"perception/traffic_light_map_based_detector/#node-parameters","title":"Node parameters","text":"Parameter Type Description max_vibration_pitch double Maximum error in pitch direction. If -5~+5, it will be 10. max_vibration_yaw double Maximum error in yaw direction. If -5~+5, it will be 10. max_vibration_height double Maximum error in height direction. If -5~+5, it will be 10. max_vibration_width double Maximum error in width direction. If -5~+5, it will be 10. max_vibration_depth double Maximum error in depth direction. If -5~+5, it will be 10."},{"location":"perception/traffic_light_selector/","title":"traffic_light_selector","text":""},{"location":"perception/traffic_light_selector/#purpose","title":"Purpose","text":"

This package receives multiple traffic light/signal states and outputs a single traffic signal state for use by the planning component.

"},{"location":"perception/traffic_light_selector/#trafficlightselector","title":"TrafficLightSelector","text":"

A node that merges traffic light/signal states from image recognition and V2X and provides them to the planning component. It is currently a provisional implementation.

"},{"location":"perception/traffic_light_selector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_selector/#input","title":"Input","text":"Name Type Description ~/sub/vector_map autoware_auto_mapping_msgs/msg/HADMapBin The vector map to get traffic light id relations. ~/sub/traffic_lights autoware_perception_msgs/msg/TrafficLightArray The traffic light state from image recognition."},{"location":"perception/traffic_light_selector/#output","title":"Output","text":"Name Type Description ~/pub/traffic_signals autoware_perception_msgs/msg/TrafficSignalArray The merged traffic signal state."},{"location":"perception/traffic_light_selector/#trafficlightconverter","title":"TrafficLightConverter","text":"

A temporary node that converts the old message type to the new message type.

"},{"location":"perception/traffic_light_selector/#inputs-outputs_1","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_selector/#input_1","title":"Input","text":"Name Type Description ~/sub/traffic_lights autoware_auto_perception_msgs/msg/TrafficSignalArray The state in old message type."},{"location":"perception/traffic_light_selector/#output_1","title":"Output","text":"Name Type Description ~/pub/traffic_lights autoware_perception_msgs/msg/TrafficLightArray The state in new message type."},{"location":"perception/traffic_light_ssd_fine_detector/","title":"traffic_light_ssd_fine_detector","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#purpose","title":"Purpose","text":"

It is a package for traffic light detection using MobileNetV2 and SSDLite.

"},{"location":"perception/traffic_light_ssd_fine_detector/#training-information","title":"Training Information","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#pretrained-model","title":"Pretrained Model","text":"

The model is based on pytorch-ssd, and the pretrained model can be downloaded from here.

"},{"location":"perception/traffic_light_ssd_fine_detector/#training-data","title":"Training Data","text":"

The model was fine-tuned on 1750 TierIV internal images of Japanese traffic lights.

"},{"location":"perception/traffic_light_ssd_fine_detector/#trained-onnx-model","title":"Trained Onnx model","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Based on the camera image and the global ROI array detected by the map_based_detector node, a CNN-based detection method enables highly accurate traffic light detection.

"},{"location":"perception/traffic_light_ssd_fine_detector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#input","title":"Input","text":"Name Type Description ~/input/image sensor_msgs/Image The full size camera image ~/input/rois autoware_auto_perception_msgs::msg::TrafficLightRoiArray The array of ROIs detected by map_based_detector"},{"location":"perception/traffic_light_ssd_fine_detector/#output","title":"Output","text":"Name Type Description ~/output/rois autoware_auto_perception_msgs::msg::TrafficLightRoiArray The detected accurate rois ~/debug/exe_time_ms tier4_debug_msgs::msg::Float32Stamped The time taken for inference"},{"location":"perception/traffic_light_ssd_fine_detector/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_thresh double 0.7 If the objectness score is less than this value, the object is ignored mean std::vector [0.5,0.5,0.5] Average value of the normalized values of the image data used for training std std::vector [0.5,0.5,0.5] Standard deviation of the normalized values of the image data used for training"},{"location":"perception/traffic_light_ssd_fine_detector/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description onnx_file string \"./data/mb2-ssd-lite-tlr.onnx\" The onnx file name for yolo model label_file string \"./data/voc_labels_tl.txt\" The label file with label names for detected objects written on it mode string \"FP32\" The inference mode: \"FP32\", \"FP16\", \"INT8\" max_batch_size int 8 The size of the batch processed at one time by inference by TensorRT approximate_sync bool false Flag for whether to ues approximate sync policy build_only bool false shutdown node after TensorRT engine file is built"},{"location":"perception/traffic_light_ssd_fine_detector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#reference-repositories","title":"Reference repositories","text":"

pytorch-ssd github repository

MobileNetV2

"},{"location":"perception/traffic_light_visualization/","title":"traffic_light_visualization","text":""},{"location":"perception/traffic_light_visualization/#purpose","title":"Purpose","text":"

The traffic_light_visualization is a package that includes two visualizing nodes:

"},{"location":"perception/traffic_light_visualization/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/traffic_light_visualization/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_visualization/#traffic_light_map_visualizer","title":"traffic_light_map_visualizer","text":""},{"location":"perception/traffic_light_visualization/#input","title":"Input","text":"Name Type Description ~/input/tl_state autoware_auto_perception_msgs::msg::TrafficSignalArray status of traffic lights ~/input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin vector map"},{"location":"perception/traffic_light_visualization/#output","title":"Output","text":"Name Type Description ~/output/traffic_light visualization_msgs::msg::MarkerArray marker array that indicates status of traffic lights"},{"location":"perception/traffic_light_visualization/#traffic_light_roi_visualizer","title":"traffic_light_roi_visualizer","text":""},{"location":"perception/traffic_light_visualization/#input_1","title":"Input","text":"Name Type Description ~/input/tl_state autoware_auto_perception_msgs::msg::TrafficSignalArray status of traffic lights ~/input/image sensor_msgs::msg::Image the image captured by perception cameras ~/input/rois autoware_auto_perception_msgs::msg::TrafficLightRoiArray the ROIs detected by traffic_light_ssd_fine_detector ~/input/rough/rois (option) autoware_auto_perception_msgs::msg::TrafficLightRoiArray the ROIs detected by traffic_light_map_based_detector"},{"location":"perception/traffic_light_visualization/#output_1","title":"Output","text":"Name Type Description ~/output/image sensor_msgs::msg::Image output image with ROIs"},{"location":"perception/traffic_light_visualization/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_visualization/#traffic_light_map_visualizer_1","title":"traffic_light_map_visualizer","text":"

None

"},{"location":"perception/traffic_light_visualization/#traffic_light_roi_visualizer_1","title":"traffic_light_roi_visualizer","text":""},{"location":"perception/traffic_light_visualization/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description enable_fine_detection bool false whether to visualize result of the traffic light fine detection"},{"location":"perception/traffic_light_visualization/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/traffic_light_visualization/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/traffic_light_visualization/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/traffic_light_visualization/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/traffic_light_visualization/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/behavior_path_planner/","title":"Behavior Path Planner","text":""},{"location":"planning/behavior_path_planner/#purpose-use-cases","title":"Purpose / Use cases","text":"

The behavior_path_planner module is responsible for generating

  1. a path based on the traffic situation,
  2. the drivable area within which the vehicle can move (defined in the path msg),
  3. turn signal commands to be sent to the vehicle interface.

Depending on the situation, a suitable module is selected and executed on the behavior tree system.

The following modules are currently supported:

[WIP]

"},{"location":"planning/behavior_path_planner/#design","title":"Design","text":""},{"location":"planning/behavior_path_planner/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"planning/behavior_path_planner/#output","title":"output","text":""},{"location":"planning/behavior_path_planner/#input","title":"input","text":""},{"location":"planning/behavior_path_planner/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_path_planner/#parameters-for-drivable-area-expansion","title":"Parameters for drivable area expansion","text":"

Optionally, the drivable area can be expanded by a static distance. Expansion parameters are defined for each module of the behavior_path_planner and should be prefixed accordingly (see config/drivable_area_expansion.yaml for an example).

Name Unit Type Description Default value drivable_area_right_bound_offset [m] double expansion distance of the right bound 0.0 drivable_area_left_bound_offset [m] double expansion distance of the left bound 0.0 drivable_area_types_to_skip list of strings types of linestrings that are not expanded [road_border]"},{"location":"planning/behavior_path_planner/#behavior-tree","title":"Behavior Tree","text":"

In the behavior path planner, the behavior tree mechanism is used to manage which modules are activated in which situations. In general, this \"behavior manager\"-like function is expected to grow as more and more modules are added in the future. To improve maintainability, we adopted the behavior tree. The behavior tree has the following advantages: easy visualization, easy configuration management (behaviors can be changed by replacing configuration files), and high scalability compared to a state machine.

The current behavior tree structure is shown below. Each module (LaneChange, Avoidance, etc.) has Request, Ready, and Plan nodes as a common function.

"},{"location":"planning/behavior_path_planner/#lane-following","title":"Lane Following","text":"

Generate path from center line of the route.

"},{"location":"planning/behavior_path_planner/#special-case","title":"special case","text":"

In the case of a route that requires a lane change, the path is generated with a specific distance margin (default: 12.0 m) from the end of the lane to ensure the minimum distance for lane change. (This function works not only for lane following but also for all modules.)

"},{"location":"planning/behavior_path_planner/#lane-change","title":"Lane Change","text":"

The Lane Change module is activated when lane change is needed and can be safely executed.

"},{"location":"planning/behavior_path_planner/#start-lane-change-condition-need-to-meet-all-of-the-conditions-below","title":"start lane change condition (need to meet all of the conditions below)","text":""},{"location":"planning/behavior_path_planner/#finish-lane-change-condition-need-to-meet-any-of-the-conditions-below","title":"finish lane change condition (need to meet any of the conditions below)","text":""},{"location":"planning/behavior_path_planner/#collision-prediction-with-obstacles","title":"Collision prediction with obstacles","text":"
  1. Predict the positions of the ego-vehicle and the other vehicles on the target lane of the lane change at t1, t2, ... tn
  2. If the distance between the ego-vehicle and another vehicle is lower than the threshold (ego_velocity * stop_time (2 s)) at any of these times, it is judged as a collision (a minimal sketch follows this list)
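The sketch below illustrates this check, assuming the predicted poses are given as 2D points at matching timestamps (the helper and its inputs are illustrative assumptions, not the module's actual API):

import math

def predicts_collision(ego_positions, other_positions, ego_velocity, stop_time=2.0):
    """Compare predicted ego/other distances at t1..tn against the threshold."""
    threshold = ego_velocity * stop_time
    for (ex, ey), (ox, oy) in zip(ego_positions, other_positions):
        if math.hypot(ex - ox, ey - oy) < threshold:
            return True
    return False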
"},{"location":"planning/behavior_path_planner/#path-generation","title":"Path Generation","text":"

The path is generated to complete the lane change in n + m seconds, under the assumption that the velocity of the ego-vehicle is constant. Once the lane change is executed, the path won't be updated until the \"finish-lane-change-condition\" is satisfied.

"},{"location":"planning/behavior_path_planner/#avoidance","title":"Avoidance","text":"

The Avoidance module is activated when dynamic objects to be avoided exist and can be safely avoided.

"},{"location":"planning/behavior_path_planner/#target-objects","title":"Target objects","text":"

Dynamic objects that satisfy the following conditions are considered to be avoidance targets.

"},{"location":"planning/behavior_path_planner/#how-to-generate-avoidance-path","title":"How to generate avoidance path","text":"

To prevent sudden changes in the vicinity of the ego-position, an avoidance path is generated after a certain distance of straight lane driving. The process of generating the avoidance path is as follows:

  1. detect the target object and calculate the lateral shift distance (default: 2.0 m from the closest footprint point)
  2. calculate the avoidance distance within the lateral jerk constraint (default: 0.3 ~ 2.0 m/s3); see the sketch after this list
    1. If the maximum jerk constraint would be exceeded to keep the straight margin, the avoidance path generation is aborted.
  3. generate a smooth path with the given avoidance distance and lateral shift length
  4. generate a "return to center" path if there is no next target within a certain distance (default: 50 m) after the current target
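As a sketch of step 2, assuming the four-segment constant-jerk shift profile described later, for which the total lateral shift L, duration T, and jerk j satisfy L = j T^3 / 32 (so T = (32 L / j)^(1/3)); the helper below is illustrative, not the module's actual API:

def avoidance_distance(shift_length, lateral_jerk, ego_speed):
    """Longitudinal distance needed to achieve a lateral shift within a jerk limit."""
    duration = (32.0 * shift_length / lateral_jerk) ** (1.0 / 3.0)
    return ego_speed * duration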
"},{"location":"planning/behavior_path_planner/#single-objects-case","title":"single objects case","text":""},{"location":"planning/behavior_path_planner/#multiple-objects-case","title":"multiple objects case","text":"

If there are multiple avoidance targets and their lateral distances are close (default: < 0.5 m), those objects are considered a single avoidance target and avoidance is performed simultaneously with a single steering operation. If the lateral distances of the avoidance targets differ by more than the threshold, multiple steering operations are used to avoid them.

"},{"location":"planning/behavior_path_planner/#smooth-path-generation","title":"Smooth path generation","text":"

The path generation is computed in Frenet coordinates. The shift length profile for avoidance is generated by four segmental constant jerk polynomials, and added to the original path. Since the lateral jerk can be approximately seen as a steering maneuver, this calculation yields a result similar to a Clothoid curve.

For more detail, see behavior-path-planner-path-generation.

"},{"location":"planning/behavior_path_planner/#unimplemented-parts-limitations-for-avoidance","title":"Unimplemented parts / limitations for avoidance","text":""},{"location":"planning/behavior_path_planner/#pull-over","title":"Pull Over","text":"

The Pull Over module is activated when the goal is in the shoulder lane. The ego-vehicle will stop at the goal.

"},{"location":"planning/behavior_path_planner/#start-pull-over-condition-need-to-meet-all-of-the-conditions-below","title":"start pull over condition (need to meet all of the conditions below)","text":" "},{"location":"planning/behavior_path_planner/#finish-pull-over-condition-need-to-meet-any-of-the-conditions-below","title":"finish pull over condition (need to meet any of the conditions below)","text":""},{"location":"planning/behavior_path_planner/#path-generation_1","title":"Path Generation","text":"

There are three path generation methods. The path is generated with a certain margin (default: 0.5 m) from the left boundary of the shoulder lane.

"},{"location":"planning/behavior_path_planner/#shift-parking","title":"shift parking","text":"

The pull over distance is calculated from the speed, the lateral deviation, and the lateral jerk. The lateral jerk is searched for among predetermined minimum and maximum values, and the one that satisfies the ready conditions described above is output.

  1. Apply a uniform offset to the centerline of the shoulder lane to ensure the margin
  2. In the section between the merge start and end, the path is shifted by the same method used to generate the avoidance path (four segmental constant jerk polynomials)
  3. Combine this path with the centerline of the road lane

"},{"location":"planning/behavior_path_planner/#geometric-parallel-parking","title":"geometric parallel parking","text":"

Generate two arc paths with discontinuous curvature. The vehicle stops twice in the middle of the path to steer on the spot. There are two path generation methods: forward and backward. See also [1] for details of the algorithm. There is also a simple python implementation.

"},{"location":"planning/behavior_path_planner/#arc-forward-parking","title":"arc forward parking","text":"

Generate two forward arc paths.

"},{"location":"planning/behavior_path_planner/#arc-backward-parking","title":"arc backward parking","text":"

Generate two backward arc paths.


"},{"location":"planning/behavior_path_planner/#unimplemented-parts-limitations-for-pull-over","title":"Unimplemented parts / limitations for pull over","text":""},{"location":"planning/behavior_path_planner/#pull-out","title":"Pull Out","text":"

The Pull Out module is activated when the ego-vehicle is stationary and the footprint of the ego-vehicle is inside the shoulder lane. This module ends when the ego-vehicle merges into the road.

"},{"location":"planning/behavior_path_planner/#start-pull-out-condition-need-to-meet-all-of-the-conditions-below","title":"start pull out condition (need to meet all of the conditions below)","text":" "},{"location":"planning/behavior_path_planner/#finish-pull-out-condition-need-to-meet-any-of-the-conditions-below","title":"finish pull out condition (need to meet any of the conditions below)","text":""},{"location":"planning/behavior_path_planner/#path-generation_2","title":"Path Generation","text":"

There are two path generation methods.

"},{"location":"planning/behavior_path_planner/#shift-pull-out","title":"shift pull out","text":"

The pull out distance is calculated from the speed, the lateral deviation, and the lateral jerk. The lateral jerk is searched for among predetermined minimum and maximum values, and the one that generates a safe path is selected.

"},{"location":"planning/behavior_path_planner/#geometric-pull-out","title":"geometric pull out","text":"

Generate two arc paths with discontinuous curvature. The ego-vehicle stops once in the middle of the path to steer on the spot. See also [1] for details of the algorithm.

"},{"location":"planning/behavior_path_planner/#unimplemented-parts-limitations-for-pull-put","title":"Unimplemented parts / limitations for pull put","text":""},{"location":"planning/behavior_path_planner/#side-shift","title":"Side Shift","text":"

The role of the Side Shift module is to shift the reference path laterally in response to external instructions (such as remote operation).

"},{"location":"planning/behavior_path_planner/#parameters-for-path-generation","title":"Parameters for path generation","text":"

In the figure, the straight margin distance exists to avoid sudden shifting; it is calculated as max(min_distance_to_start_shifting, ego_speed * time_to_start_shifting). The shifting distance is calculated from the jerk, with the minimum speed and minimum distance parameters described below. The minimum speed is used to prevent a sharp shift when the ego vehicle is stopped.

Name Unit Type Description Default value min_distance_to_start_shifting [m] double minimum straight distance before shift start. 5.0 time_to_start_shifting [s] double time of minimum straight distance before shift start. 1.0 shifting_lateral_jerk [m/s3] double lateral jerk to calculate shifting distance. 0.2 min_shifting_distance [m] double the shifting distance is longer than this length. 5.0 min_shifting_speed [m/s] double lateral jerk is calculated with the greater of current_speed or this speed. 5.56"},{"location":"planning/behavior_path_planner/#smooth-goal-connection","title":"Smooth goal connection","text":"

If the target path contains a goal, modify the points of the path so that the path and the goal are connected smoothly. This process changes the shape of the path within the distance refine_goal_search_radius_range from the goal. Note that this logic depends on the interpolation algorithm executed in a later module (at the moment it uses spline interpolation), so it needs to be updated in the future.

"},{"location":"planning/behavior_path_planner/#references-external-links","title":"References / External links","text":"

This module depends on the external BehaviorTreeCpp library.

[1] H. Vorobieva, S. Glaser, N. Minoiu-Enache, and S. Mammar, "Geometric path planning for automatic parallel parking in tiny spots", IFAC Proceedings Volumes, vol. 45, no. 24, pp. 36-42, 2012.

"},{"location":"planning/behavior_path_planner/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"

-

"},{"location":"planning/behavior_path_planner/#related-issues","title":"Related issues","text":"

-

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/","title":"Avoidance design","text":"

This is a rule-based path planning module designed for obstacle avoidance.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#purpose-role","title":"Purpose / Role","text":"

This module is designed for rule-based avoidance so that developers can easily design its behavior. It generates an avoidance path parameterized by intuitive parameters such as lateral jerk and avoidance distance margin. This makes it possible to pre-define the avoidance behavior.

In addition, the approval interface of behavior_path_planner allows external users / modules (e.g. remote operation) to intervene in the decision of the vehicle behavior. This function is expected to be used, for example, for remote intervention in emergency situations or for gathering information on operator decisions during development.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#limitations","title":"Limitations","text":"

This module allows developers to design vehicle behavior in avoidance planning using specific rules. Due to the nature of rule-based planning, the algorithm cannot guarantee collision avoidance in complex cases. This is a trade-off between \"be intuitive and easy to design\" and \"be hard to tune but able to handle many cases\". This module adopts the former policy, and therefore its output should be checked more strictly in a later stage. In the .iv reference implementation, there is another avoidance module in the motion planning module that uses optimization to handle avoidance in complex cases. (Note that the motion planner needs to be adjusted so that the behavior result is not changed much in the simple case; this is a typical challenge for the behavior-motion hierarchical architecture.)

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#why-is-avoidance-in-behavior-module","title":"Why is avoidance in behavior module?","text":"

This module executes avoidance over lanes, and the decision requires lane structure information in order to take care of traffic rules (e.g. an indicator signal must be sent when the vehicle crosses a lane). The difference between the motion and behavior modules in the planning stack is whether the planner takes traffic rules into account, which is why this avoidance module exists in the behavior module.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The following figure shows a simple explanation of the logic for avoidance path generation. First, target objects are picked up, and shift requests are generated for each object. These shift requests are generated by taking into account the lateral jerk required for avoidance (red lines). Then these requests are merged and the shift points are created on the reference path (blue line). Filtering operations are performed on the shift points such as removing unnecessary shift points (yellow line), and finally a smooth avoidance path is generated by combining Clothoid-like curve primitives (green line).

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#overview-of-algorithm-for-target-object-filtering","title":"Overview of algorithm for target object filtering","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#how-to-decide-the-target-obstacles","title":"How to decide the target obstacles","text":"

The avoidance target should be limited to stationary objects (you should not avoid a vehicle waiting at a traffic light even if it blocks your path). Therefore, target vehicles for avoidance should meet the following specific conditions.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#parked-car-detection","title":"Parked-car detection","text":"

Not only the length from the centerline but also the length from the road shoulder is calculated and used for the filtering process. The module calculates the ratio between the actual shift length shift_length (the distance between the object's center and the centerline) and the maximum length the object can shift shiftable_length.

l_D = l_a - \\frac{width}{2}\nratio =  \\frac{l_d}{l_D}\n

The closer the object is to the shoulder, the larger the value of ratio (the theoretical maximum value is 1.0). The module compares this value with object_check_shiftable_ratio to determine whether the object is a parked car.
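A minimal sketch of this parked-car check (the helper and argument names are illustrative, following the symbols above):

def is_parked_vehicle(l_a, width, l_d, object_check_shiftable_ratio):
    """Judge an object as a parked car when it hugs the road shoulder.

    l_a: lateral distance from the centerline to the road shoulder
    width: object width
    l_d: lateral distance from the centerline to the object's center
    """
    l_D = l_a - width / 2.0  # maximum length the object can shift
    ratio = l_d / l_D        # approaches 1.0 as the object nears the shoulder
    return ratio > object_check_shiftable_ratio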

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#compensation-for-detection-lost","title":"Compensation for detection lost","text":"

In order to prevent chattering of the recognition results, once an obstacle is targeted, it is held for a while even if it disappears. This is effective when recognition is unstable. However, since it results in over-detection (an increased number of false positives), the parameters need to be adjusted according to the recognition accuracy (if object_last_seen_threshold = 0.0, the recognition result is trusted 100%).

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#overview-of-algorithm-for-avoidance-path-generation","title":"Overview of algorithm for avoidance path generation","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#how-to-prevent-shift-line-chattering-that-is-caused-by-perception-noise","title":"How to prevent shift line chattering that is caused by perception noise","text":"

Since object recognition results contain noise in position, orientation, and boundary size, using the raw recognition results in path generation would directly expose the avoidance path to that noise.

Therefore, in order to reduce the influence of the noise, the avoidance module generates an envelope polygon that covers the avoidance target, and the avoidance path is generated based on that polygon. The envelope polygons are generated so that they are parallel to the reference path and larger than the avoidance target (defined by object_envelope_buffer). The position and size of the polygon are not updated as long as the avoidance target stays within it.

# default value\nobject_envelope_buffer: 0.3 # [m]\n

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#computing-shift-length-and-shift-points","title":"Computing Shift Length and Shift Points","text":"

The lateral shift length is affected by four variables: lateral_collision_safety_buffer, lateral_collision_margin, vehicle_width, and overhang_distance. The equations are as follows.

avoid_margin = lateral_collision_margin + lateral_collision_safety_buffer + 0.5 * vehicle_width\nmax_allowable_lateral_distance = to_road_shoulder_distance - road_shoulder_safety_margin - 0.5 * vehicle_width\nif(isOnRight(o))\n{\nshift_length = avoid_margin + overhang_distance\n}\nelse\n{\nshift_length = avoid_margin - overhang_distance\n}\n

The following figure illustrates these variables (the figure shows only the maximum lateral shift length).

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#rationale-of-having-safety-buffer-and-safety-margin","title":"Rationale of having safety buffer and safety margin","text":"

To compute the shift length, the additional parameters that can be tuned are lateral_collision_safety_buffer and road_shoulder_safety_margin.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#generating-path-only-within-lanelet-boundaries","title":"Generating path only within lanelet boundaries","text":"

Before this feature was implemented, the shift length was set to a constant value, which caused the module to generate an avoidance path regardless of the actual environment. For example, the path could exceed the actual road boundary or head toward a wall. To address this limitation, in addition to deciding the target obstacles, the module also takes the following additional elements into account.

These elements are used to compute the distance from the object to the road's shoulder (to_road_shoulder_distance). The parameters enable_avoidance_over_same_direction and enable_avoidance_over_opposite_direction allow further configuration of to_road_shoulder_distance. The following image illustrates the configuration.

If one of the following conditions is false, then the shift point will not be generated.

avoid_margin = lateral_collision_margin + lateral_collision_safety_buffer + 0.5 * vehicle_width\navoid_margin <= (to_road_shoulder_distance - 0.5 * vehicle_width - road_shoulder_safety_margin)\n
"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#details-of-algorithm-for-avoidance-path-generation","title":"Details of algorithm for avoidance path generation","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#flow-chart-of-the-process","title":"Flow-chart of the process","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#how-to-decide-the-path-shape","title":"How to decide the path shape","text":"

Shift points are generated for the obstacles with a given lateral jerk and are then integrated to generate an avoidance path. The detailed process flow for each obstacle-placement case is described below. Note that the actual implementation is not separated by case; the function that handles the multiple-obstacle case (both directions) is always running.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#one-obstacle-case","title":"One obstacle case","text":"

The lateral shift distance to the obstacle is calculated, and then the shift point is generated from the ego vehicle speed and the given lateral jerk as shown in the figure below. A smooth avoidance path is then calculated based on the shift point.

Additionally, the following processes are executed in special cases.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#lateral-jerk-relaxation-conditions","title":"Lateral jerk relaxation conditions","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#minimum-velocity-relaxation-conditions","title":"Minimum velocity relaxation conditions","text":"

The actual speed during avoidance cannot be known in advance. This is especially critical when the ego vehicle speed is 0. To solve this, the module provides a parameter for the minimum avoidance speed, which is used in the lateral jerk calculation when the vehicle speed is low.
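
For example, the speed used in the jerk calculation can be floored by this parameter (a sketch with assumed names; min_nominal_avoidance_speed is the parameter described in the tables below):

#include <algorithm>\n\n// Hypothetical sketch: never use a speed below the minimum avoidance speed\n// for the lateral jerk calculation (prevents degenerate results at ego speed 0).\ndouble getSpeedForJerkCalculation(double ego_speed, double min_nominal_avoidance_speed)\n{\n  return std::max(ego_speed, min_nominal_avoidance_speed);\n}\n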

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#multiple-obstacle-case-one-direction","title":"Multiple obstacle case (one direction)","text":"

Shift points are generated for each of the multiple obstacles, and all of them are merged to generate new shift points along the reference path. The new points are filtered (e.g. small-impact shift points are removed), and the avoidance path is computed for the filtered shift points.

Merge process of the raw shift points: the shift length is checked at each path point; where shift points overlap, the maximum shift value is selected for the same direction.
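
A sketch of this merge rule on pre-sampled shift lengths (hypothetical representation; the real implementation operates on shift points rather than plain arrays):

#include <cmath>\n#include <cstddef>\n#include <vector>\n\n// Hypothetical sketch: merge overlapped shift requests point by point,\n// keeping the largest shift for the same direction.\nstd::vector<double> mergeShiftLengths(const std::vector<double> & a, const std::vector<double> & b)\n{\n  std::vector<double> merged(a.size());\n  for (std::size_t i = 0; i < a.size(); ++i) {\n    // For the same direction, the larger absolute shift wins.\n    merged[i] = std::abs(a[i]) > std::abs(b[i]) ? a[i] : b[i];\n  }\n  return merged;\n}\n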

For the details of the shift point filtering, see filtering for shift points.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#multiple-obstacle-case-both-direction","title":"Multiple obstacle case (both direction)","text":"

Generate shift points for multiple obstacles. All of them are merged to generate new shift points. If there are areas where the desired shifts conflict in different directions, the sum of the maximum shift amounts of these areas is used as the final shift amount. The rest of the process is the same as in the case of one direction.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#filtering-for-shift-points","title":"Filtering for shift points","text":"

The shift points are modified by a filtering process in order to get the expected shape of the avoidance path. It contains the following filters.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#other-features","title":"Other features","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#safety-check","title":"Safety check","text":"

The avoidance module has a safety check logic. The result of the safety check is used for the yield maneuver. It is enabled by setting enable_safety_check to true.

enable_safety_check: false\n\n# For safety check\nsafety_check_backward_distance: 50.0 # [m]\nsafety_check_time_horizon: 10.0 # [s]\nsafety_check_idling_time: 1.5 # [s]\nsafety_check_accel_for_rss: 2.5 # [m/ss]\n

safety_check_backward_distance is the parameter that defines the safety check area. The module checks the collision risk for every vehicle that is within the lane on the shift side, between object_check_forward_distance ahead and safety_check_backward_distance behind.

NOTE: Even if part of an object's polygon overlaps the detection area, the object is excluded from the safety check targets if its center of gravity is not on the lane.

The collision risk is judged based on the ego vehicle's future positions and the object's predicted path. The module calculates the ego vehicle's future positions over the time horizon (safety_check_time_horizon) and uses the object's predicted path as its future positions.

After calculating the future positions of the ego vehicle and the object, the module calculates their lateral and longitudinal deviations. It also calculates the lateral and longitudinal margins required to judge that executing the avoidance maneuver is safe; if both the lateral and longitudinal distances are less than their margins, it determines that there is a risk of collision at that time.

The value of the longitudinal margin is calculated based on the Responsibility-Sensitive Safety (RSS) theory. safety_check_idling_time represents T_{idle}, and safety_check_accel_for_rss represents a_{max}.

D_{lon} = V_{ego}T_{idle} + \\frac{1}{2}a_{max}T_{idle}^2 + \\frac{(V_{ego} + a_{max}T_{idle})^2}{2a_{max}} - \\frac{V_{obj}^2}{2a_{max}}\n
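
The margin can be computed directly from this equation; a sketch using the parameter names above:

// Sketch of the RSS-based longitudinal margin.\n// t_idle = safety_check_idling_time, a_max = safety_check_accel_for_rss.\ndouble calcRssLongitudinalMargin(double v_ego, double v_obj, double t_idle, double a_max)\n{\n  const double v_after_idle = v_ego + a_max * t_idle;\n  return v_ego * t_idle + 0.5 * a_max * t_idle * t_idle +\n         (v_after_idle * v_after_idle) / (2.0 * a_max) - (v_obj * v_obj) / (2.0 * a_max);\n}\n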

The lateral margin changes with the ego vehicle's longitudinal velocity: at high speed the lateral margin should be larger, and at low speed it can be smaller. Thus, the lateral margin for each vehicle speed is set as a parameter, and the module determines the lateral margin from the current vehicle speed as shown in the following figure.

target_velocity_matrix:\ncol_size: 5\nmatrix: [2.78 5.56 ... 16.7  # target velocity [m/s]\n0.50 0.75 ... 1.50] # margin [m]\n
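
The margin for the current ego velocity can then be obtained by interpolating over the matrix columns; a minimal sketch (hypothetical helper, clamping outside the table range):

#include <cstddef>\n#include <vector>\n\n// Hypothetical sketch: linear interpolation of the lateral margin over\n// the target_velocity_matrix columns (velocities assumed sorted ascending).\ndouble lookupLateralMargin(const std::vector<double> & velocities, const std::vector<double> & margins, double v)\n{\n  if (v <= velocities.front()) return margins.front();\n  if (v >= velocities.back()) return margins.back();\n  for (std::size_t i = 1; i < velocities.size(); ++i) {\n    if (v < velocities[i]) {\n      const double r = (v - velocities[i - 1]) / (velocities[i] - velocities[i - 1]);\n      return margins[i - 1] + r * (margins[i] - margins[i - 1]);\n    }\n  }\n  return margins.back();\n}\n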
"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#yield-maneuver","title":"Yield maneuver","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#overview","title":"Overview","text":"

If an avoidance path can be generated but the module determines that the avoidance maneuver should not be executed due to the surrounding traffic conditions, it executes the YIELD maneuver. In the yield maneuver, the vehicle slows down to the target velocity (yield_velocity) and keeps that speed until the module judges that the avoidance path is safe. If the YIELD condition persists and the vehicle approaches the avoidance target, it stops at an avoidable position and waits until safety is confirmed.

# For yield maneuver\nyield_velocity: 2.78 # [m/s]\n

NOTE: In the yield maneuver, the vehicle decelerates to the target velocity under the following constraints.

nominal_deceleration: -1.0 # [m/ss]\nnominal_jerk: 0.5 # [m/sss]\n

If all of the following three conditions are satisfied, the module inserts a stop point in front of the avoidance target with an avoidable interval.

The module determines that it is NOT passable without avoidance if the object overhang is less than the threshold.

lateral_passable_collision_margin: 0.5 # [-]\n
L_{overhang} < \\frac{W}{2} + L_{margin} (not passable)\n

Here, W represents the vehicle width and L_{margin} represents lateral_passable_collision_margin.
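
In code form, this judgement can be sketched as follows (names mirror the equation):

// Sketch of the passability check: not passable when L_overhang < W/2 + L_margin.\nbool isPassableWithoutAvoidance(double object_overhang, double vehicle_width, double lateral_passable_collision_margin)\n{\n  return object_overhang >= 0.5 * vehicle_width + lateral_passable_collision_margin;\n}\n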

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#limitation","title":"Limitation","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#limitation1","title":"Limitation1","text":"

The current behavior in unsafe conditions is simply to slow down, which is quite conservative. Achieving more aggressive behavior is difficult in the current architecture because of its modularity: many modules in Autoware change the vehicle speed, and the avoidance module cannot know what speed plan they will output, so it is forced to choose a behavior that is as independent of the other modules' processing as possible.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#limitation2","title":"Limitation2","text":"

The YIELD maneuver is executed ONLY when the vehicle has NOT initiated the avoidance maneuver. The module has a threshold parameter (avoidance_initiate_threshold) for the shift amount and determines that the vehicle is initiating avoidance if the vehicle's current shift exceeds the threshold.

SHIFT_{current} > L_{threshold}\n

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#avoidance-cancelling-maneuver","title":"Avoidance cancelling maneuver","text":"

If the enable_update_path_when_object_is_gone parameter is true, the Avoidance Module takes different actions according to the situation, as follows:

If the enable_update_path_when_object_is_gone parameter is false, the Avoidance Module does not revert the generated avoidance path even if the avoided objects are gone.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#how-to-keep-the-consistency-of-the-optimize-base-path-generation-logic","title":"How to keep the consistency of the optimize-base path generation logic","text":"

WIP

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#parameters","title":"Parameters","text":"

The avoidance-specific parameter configuration file is located at src/autoware/launcher/planning_launch/config/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/avoidance/avoidance.param.yaml.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#general-parameters","title":"General parameters","text":"

namespace: avoidance.

Name Unit Type Description Default value resample_interval_for_planning [m] double Path resample interval for the avoidance planning path. 0.3 resample_interval_for_output [m] double Path resample interval for the output path. Too short an interval increases the computational cost for subsequent modules. 4.0 detection_area_right_expand_dist [m] double Lanelet expand length for the right side to find avoidance target vehicles. 0.0 detection_area_left_expand_dist [m] double Lanelet expand length for the left side to find avoidance target vehicles. 1.0 object_envelope_buffer [m] double Envelope object polygon with buffer in order to prevent shift line chattering. 0.3 enable_bound_clipping [-] bool Enable clipping the left and right bound of the drivable area when obstacles are in the drivable area false enable_avoidance_over_same_direction [-] bool Extend the avoidance trajectory to adjacent lanes that have the same direction. If false, avoidance happens only in the current lane. true enable_avoidance_over_opposite_direction [-] bool Extend the avoidance trajectory to adjacent lanes that have the opposite direction. enable_avoidance_over_same_direction must be true to take effect true enable_update_path_when_object_is_gone [-] bool Reset the trajectory when avoided objects are gone. If false, shifted path points remain the same even though the avoided objects are gone. false enable_safety_check [-] bool Flag to enable safety check. false enable_yield_maneuver [-] bool Flag to enable yield maneuver. false publish_debug_marker [-] bool Flag to publish debug markers (false by default since it takes considerable cost). false print_debug_info [-] bool Flag to print debug info (false by default since it takes considerable cost). false

NOTE: Both enable_safety_check and enable_yield_maneuver must be set to true to enable the yield maneuver.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#avoidance-target-filtering-parameters","title":"Avoidance target filtering parameters","text":"

namespace: avoidance.target_object.

Name Unit Type Description Default value car [-] bool Allow avoidance for object type CAR true truck [-] bool Allow avoidance for object type TRUCK true bus [-] bool Allow avoidance for object type BUS true trailer [-] bool Allow avoidance for object type TRAILER true unknown [-] bool Allow avoidance for object type UNKNOWN false bicycle [-] bool Allow avoidance for object type BICYCLE false motorcycle [-] bool Allow avoidance for object type MOTORCYCLE false pedestrian [-] bool Allow avoidance for object type PEDESTRIAN false

namespace: avoidance.target_filtering.

Name Unit Type Description Default value threshold_speed_object_is_stopped [m/s] double Vehicles with a speed greater than this will be excluded from the avoidance targets. 1.0 threshold_time_object_is_moving [s] double Time threshold to judge that an object is moving; objects that keep moving longer than this are excluded from the avoidance targets. 1.0 threshold_distance_object_is_on_center [m] double Vehicles within this distance of the centerline will be excluded from the avoidance targets. 1.0 object_check_forward_distance [m] double Forward distance to search for avoidance targets. 150.0 object_check_backward_distance [m] double Backward distance to search for avoidance targets. 2.0 object_check_goal_distance [m] double If the distance between the object and the goal position is less than this value, the object is excluded from the avoidance targets. 20.0 object_check_shiftable_ratio [-] double Ratio of the object's lateral offset to the shiftable length, used to judge whether the object is a parked vehicle. 0.6 object_check_min_road_shoulder_width [m] double Width considered as the road shoulder if the lanelet does not have a road shoulder. 0.5 object_last_seen_threshold [s] double For the compensation of detection lost. The object is registered once it is observed as an avoidance target. When detection is lost, a timer starts, and the object is unregistered once the time exceeds this limit. 2.0 left_hand_traffic [-] bool Flag to select left or right hand traffic. TRUE: LEFT-HAND TRAFFIC / FALSE: RIGHT-HAND TRAFFIC true"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#safety-check-parameters","title":"Safety check parameters","text":"

namespace: avoidance.safety_check.

Name Unit Type Description Default value safety_check_backward_distance [m] double Backward distance to search for dynamic objects. 50.0 safety_check_time_horizon [s] double Time horizon over which the lateral/longitudinal margins are checked. 10.0 safety_check_idling_time [s] double Time delay constant used for the RSS-based longitudinal margin calculation. 1.5 safety_check_accel_for_rss [m/ss] double Acceleration constant used for the RSS-based longitudinal margin calculation. 2.5 safety_check_hysteresis_factor [-] double Hysteresis factor used for chattering prevention. 2.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#avoidance-maneuver-parameters","title":"Avoidance maneuver parameters","text":"

namespace: avoidance.avoidance.lateral.

Name Unit Type Description Default value lateral_collision_margin [m] double The lateral distance between the ego vehicle and avoidance targets. 1.0 lateral_collision_safety_buffer [m] double Creates an additional gap that prevents the vehicle from getting too near to the obstacle 0.7 road_shoulder_safety_margin [m] double Prevents the generated path from coming too close to the road shoulder. 0.0 avoidance_execution_lateral_threshold [m] double The lateral distance deviation threshold between the current path and suggested avoidance point to execute avoidance. (*2) 0.499 max_right_shift_length [m] double Maximum shift length for the right direction 5.0 max_left_shift_length [m] double Maximum shift length for the left direction 5.0

namespace: avoidance.avoidance.longitudinal.

Name Unit Type Description Default value prepare_time [s] double Avoidance shift starts from a point this time x ego_speed ahead, to avoid a sudden path change. 2.0 longitudinal_collision_safety_buffer [m] double Longitudinal collision buffer between the target object and the shift line. 0.0 min_prepare_distance [m] double Minimum distance for \"prepare_time\" x \"ego_speed\". 1.0 min_avoidance_distance [m] double Minimum distance of the avoidance path (i.e. this distance is needed even if its lateral jerk is very low) 10.0 min_nominal_avoidance_speed [m/s] double Minimum speed for the jerk calculation in a nominal situation (*1). 7.0 min_sharp_avoidance_speed [m/s] double Minimum speed for the jerk calculation in a sharp situation (*1). 1.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#yield-maneuver-parameters","title":"Yield maneuver parameters","text":"

namespace: avoidance.yield.

Name Unit Type Description Default value yield_velocity [m/s] double The ego vehicle decelerates to this velocity in the yield maneuver. 2.78"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#stop-maneuver-parameters","title":"Stop maneuver parameters","text":"

namespace: avoidance.stop.

Name Unit Type Description Default value min_distance [m] double Minimum stop distance in the situation where avoidance maneuver is not approved or in yield maneuver. 10.0 max_distance [m] double Maximum stop distance in the situation where avoidance maneuver is not approved or in yield maneuver. 20.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#constraints-parameters","title":"Constraints parameters","text":"

namespace: avoidance.constraints.

Name Unit Type Description Default value use_constraints_for_decel [-] bool Flag to decelerate under the longitudinal constraints. TRUE: allows control of braking mildness false

namespace: avoidance.constraints.lateral.

Name Unit Type Description Default value prepare_time [s] double Avoidance shift starts from a point this time x ego_speed ahead, to avoid a sudden path change. 2.0 min_prepare_distance [m] double Minimum distance for \"prepare_time\" x \"ego_speed\". 1.0 nominal_lateral_jerk [m/s3] double The avoidance path is generated with this jerk when there is enough distance from the ego vehicle. 0.2 max_lateral_jerk [m/s3] double The avoidance path gets sharper, up to this jerk limit, when there is not enough distance from the ego vehicle. 1.0

namespace: avoidance.constraints.longitudinal.

Name Unit Type Description Default value nominal_deceleration [m/ss] double Nominal deceleration limit. -1.0 nominal_jerk [m/sss] double Nominal jerk limit. 0.5 max_deceleration [m/ss] double Max deceleration limit. -2.0 max_jerk [m/sss] double Max jerk limit. 1.0 min_avoidance_speed_for_acc_prevention [m/s] double Minimum speed limit applied to prevent acceleration during avoidance. 3.0 max_avoidance_acceleration [m/ss] double Maximum acceleration during avoidance. 0.5

(*2) If there are multiple vehicles in a row to be avoided, no new avoidance path will be generated unless their lateral margin difference exceeds this value.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":" "},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#how-to-debug","title":"How to debug","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#publishing-visualization-marker","title":"Publishing Visualization Marker","text":"

Developers can see what is going on in each process by visualizing all the avoidance planning process outputs. Examples include the target vehicles, the shift points for each object, the shift points after each filtering process, etc.

To enable the debug marker, execute ros2 param set /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner avoidance.publish_debug_marker true (no restart is needed) or simply set the publish_debug_marker to true in the avoidance.param.yaml for permanent effect (restart is needed). Then add the marker /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/avoidance in rviz2.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_avoidance_design/#echoing-debug-message-to-find-out-why-the-objects-were-ignored","title":"Echoing debug message to find out why the objects were ignored","text":"

If, for some reason, no shift point is generated for your object, you can check the failure reason via ros2 topic echo.

To print the debug message, just run the following

ros2 topic echo /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/avoidance_debug_message_array\n
"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/","title":"Drivable Area design","text":"

The drivable area represents the area where the ego vehicle can pass.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#purpose-role","title":"Purpose / Role","text":"

In order to define the area where the ego vehicle can travel safely, we generate a drivable area in the behavior path planner module. Our drivable area is represented by two line strings: the left_bound line and the right_bound line. Both left_bound and right_bound are created from the left and right boundaries of lanelets. Note that left_bound and right_bound are generated by the generateDrivableArea function.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#assumption","title":"Assumption","text":"

Our drivable area generation relies on several assumptions.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#limitations","title":"Limitations","text":"

Currently, when clipping the left bound or right bound, the bound can be clipped more than necessary, and the generated path might be conservative.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#parameters-for-drivable-area-generation","title":"Parameters for drivable area generation","text":"Name Unit Type Description Default value enabled [-] boolean whether to dynamically the drivable area using the ego footprint false ego.extra_footprint_offset.front [m] double extra length to add to the front of the ego footprint 0.0 ego.extra_footprint_offset.rear [m] double extra length to add to the rear of the ego footprint 0.0 ego.extra_footprint_offset.left [m] double extra length to add to the left of the ego footprint 0.0 ego.extra_footprint_offset.right [m] double extra length to add to the rear of the ego footprint 0.0 dynamic_objects.avoid [-] boolean if true, the drivable area is not expanded in the predicted path of dynamic objects true dynamic_objects.extra_footprint_offset.front [m] double extra length to add to the front of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.rear [m] double extra length to add to the rear of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.left [m] double extra length to add to the left of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.right [m] double extra length to add to the rear of the ego footprint 0.5 max_distance [m] double maximum distance by which the drivable area can be expended. A value of 0.0 means no maximum distance. 0.0 expansion.method [-] string method to use for the expansion: \"polygon\" will expand the drivable area around the ego footprint polygons; \"lanelet\" will expand to the whole lanelets overlapped by the ego footprints \"polygon\" expansion.max_arc_path_length [m] double maximum length along the path where the ego footprint is projected 50.0 expansion.extra_arc_length [m] double extra arc length (relative to the path) around ego footprints where the drivable area is expanded 0.5 expansion.avoid_linestring.types [-] string array linestring types in the lanelet maps that will not be crossed when expanding the drivable area [guard_rail, road_border] avoid_linestring.distance [m] double distance to keep between the drivable area and the linestrings to avoid 0.0 avoid_linestring.compensate.enable [-] bool if true, when the expansion is blocked by a linestring on one side of the path, we try to compensate and expand on the other side true avoid_linestring.compensate.extra_distance [m] double extra distance added to the expansion when compensating 3.0

The following parameters are defined for each module. Please refer to the config/drivable_area_expansion.yaml file.

Name Unit Type Description Default value drivable_area_right_bound_offset [m] double right offset length to expand drivable area 5.0 drivable_area_left_bound_offset [m] double left offset length to expand drivable area 5.0 drivable_area_types_to_skip [-] string linestring types (as defined in the lanelet map) that will not be expanded road_border"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This section gives details of the generation of the drivable area (left_bound and right_bound).

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#drivable-lanes-generation","title":"Drivable Lanes Generation","text":"

Before generating the drivable area, the drivable lanes need to be sorted. Drivable lanes are selected in each module (Lane Follow, Avoidance, Lane Change, Pull Over, Pull Out, etc.), so more details about the selection of drivable lanes can be found in each module's document. We use the following structure to define the drivable lanes.

struct DrivableLanes\n{\nlanelet::ConstLanelet right_lanelet; // rightmost lane\nlanelet::ConstLanelet left_lanelet; // leftmost lane\nlanelet::ConstLanelets middle_lanelets; // middle lanes\n};\n

The image of the sorted drivable lanes is depicted in the following picture.

Note that the order of the drivable lanes becomes

drivable_lanes = {DrivableLanes1, DrivableLanes2, DrivableLanes3, DrivableLanes4, DrivableLanes5}\n
"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#drivable-area-generation","title":"Drivable Area Generation","text":"

In this section, the drivable area is created using the drivable lanes arranged in the order in which the vehicle passes them. We create left_bound from the left boundary of the leftmost lanelet and right_bound from the right boundary of the rightmost lanelet. The image of the created drivable area is shown as the blue lines in the following figure. Note that the drivable area is defined in the Path and PathWithLaneId messages as

std::vector<geometry_msgs::msg::Point> left_bound;\nstd::vector<geometry_msgs::msg::Point> right_bound;\n

and each point of the right bound and the left bound has a position in the absolute coordinate system.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#drivable-area-expansion","title":"Drivable Area Expansion","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#static-expansion","title":"Static Expansion","text":"

Each module can statically expand the left and right bounds of the target lanes by the parameter-defined values. This enables large vehicles to pass narrow curves. The image of this process is as follows.

Note that we only expand the right bound of the rightmost lane and the left bound of the leftmost lane.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#dynamic-expansion","title":"Dynamic Expansion","text":"

The drivable area can also be expanded dynamically by considering the ego vehicle footprint projected on each path point. This expansion can be summarized with the following steps:

  1. Build the ego path footprint.
  2. Build the dynamic objects' predicted footprints (optional).
  3. Build \"uncrossable\" lines.
  4. Remove the footprints from step 2 and the lines from step 3 from the ego path footprint from step 1.
  5. Expand the drivable area with the result of step 4.
(Figures: inputs; footprints and uncrossable lines; expanded drivable area)

Please note that the dynamic expansion can only increase the size of the drivable area and cannot remove any part from the original drivable area.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_drivable_area_design/#visualizing-maximum-drivable-area-debug","title":"Visualizing maximum drivable area (Debug)","text":"

Sometimes, the developers might get a different result between two maps that may look identical during visual inspection.

For example, in the same area, avoidance can be performed with one map but not with the other. This might be related to maximum drivable area issues caused by a non-compliant vector map design.

To debug the issue, the maximum drivable area boundary can be visualized.

The maximum drivable area can be visualized by adding the marker /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/maximum_drivable_area

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/","title":"Lane Change design","text":"

The Lane Change module is activated when a lane change is needed and can be safely executed.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#lane-change-requirement","title":"Lane Change Requirement","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#generating-lane-change-candidate-path","title":"Generating Lane Change Candidate Path","text":"

The lane change candidate path is divided into two phases: preparation and lane-changing. The following figure illustrates each phase of the lane change candidate path.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#preparation-phase","title":"Preparation phase","text":"

The preparation trajectory is the first, straight portion of the candidate path, generated along the ego vehicle's current lane. The length of the preparation trajectory is computed as follows.

lane_change_prepare_distance = max(current_speed * lane_change_prepare_duration + 0.5 * deceleration * lane_change_prepare_duration^2, minimum_lane_change_prepare_distance)\n
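
A sketch of this computation with the same variables (the deceleration value is non-positive):

#include <algorithm>\n\n// Sketch: preparation distance with a lower bound.\ndouble calcPrepareDistance(double current_speed, double deceleration, double lane_change_prepare_duration, double minimum_lane_change_prepare_distance)\n{\n  const double d = current_speed * lane_change_prepare_duration +\n                   0.5 * deceleration * lane_change_prepare_duration * lane_change_prepare_duration;\n  return std::max(d, minimum_lane_change_prepare_distance);\n}\n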

During the preparation phase, the turn signal will be activated when the remaining distance is equal to or less than lane_change_search_distance.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#lane-changing-phase","title":"Lane-changing phase","text":"

The lane-changing phase consists of the shifted path that moves the ego vehicle from the current lane to the target lane. The total distance of the lane-changing phase is computed as follows.

lane_change_prepare_velocity = current_speed + deceleration * lane_change_prepare_duration\nlane_changing_distance = max(lane_change_prepare_velocity * lane_changing_duration + 0.5 * deceleration * lane_changing_duration^2, minimum_lane_change_length + backward_length_buffer_for_end_of_lane)\n

The backward_length_buffer_for_end_of_lane is added to allow some window for any possible delay, such as control or mechanical delay during brake lag.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#multiple-candidate-path-samples","title":"Multiple candidate path samples","text":"

The lane change velocity is affected by the ego vehicle's current velocity. A high velocity requires longer preparation and lane-changing distances. However, we also need to plan lane-changing trajectories for the case where the ego vehicle slows down. Candidate paths that assume the ego vehicle slows down are computed by substituting predetermined deceleration values into the lane_change_prepare_distance, lane_change_prepare_velocity, and lane_changing_distance equations.

The predetermined decelerations are a set of values starting from deceleration = 0.0 and decreasing by acceleration_resolution until reaching deceleration = -maximum_deceleration. The acceleration_resolution is determined by the following.

acceleration_resolution = maximum_deceleration / lane_change_sampling_num\n

The following figure illustrates the case of lane_change_sampling_num = 4. Assuming maximum_deceleration = 1.0, then a0 == 0.0 == no deceleration, a1 == 0.25, a2 == 0.5, a3 == 0.75, and a4 == 1.0 == maximum_deceleration. a0 is the expected lane change trajectory should the ego vehicle not decelerate, and a1's path is the expected lane change trajectory should the ego vehicle decelerate at 0.25 m/s^2. The decelerations can be sampled as in the sketch below.
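
Generating these sampled decelerations can be sketched as:

#include <vector>\n\n// Sketch: sample decelerations a0..aN from 0.0 down to -maximum_deceleration.\nstd::vector<double> sampleDecelerations(double maximum_deceleration, int lane_change_sampling_num)\n{\n  const double acceleration_resolution = maximum_deceleration / lane_change_sampling_num;\n  std::vector<double> decelerations;\n  for (int i = 0; i <= lane_change_sampling_num; ++i) {\n    decelerations.push_back(-i * acceleration_resolution);  // 0.0, -0.25, ... in the example above\n  }\n  return decelerations;\n}\n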

Which path is chosen depends on the validity and collision checks.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#candidate-paths-validity-check","title":"Candidate Path's validity check","text":"

A candidate path is valid if the total lane change distance is less than

  1. distance to the end of current lane
  2. distance to the next intersection
  3. distance from current pose to the goal.
  4. distance to the crosswalk.

The goal must also be in the list of preferred lanes.

The following flow chart illustrates the validity check.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#candidate-paths-safety-check","title":"Candidate Path's Safety check","text":"

A valid candidate path is evaluated for safety before it is selected as the output candidate path. The flow of the process is as follows.

If all valid candidate paths are unsafe, the operator has the option to perform a forced lane change by using the front-most candidate path as the output. The forced lane change ignores all safety checks.

A candidate path is safe if it satisfies the following lateral distance criterion,

lateral distance > lateral_distance_threshold\n

However, suppose the lateral distance is insufficient. In that case, the longitudinal distance will be evaluated. The candidate path is safe only when the longitudinal gap between the ego vehicle and the dynamic object is wide enough.

The following charts illustrate the flow of the safety checks

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#calculating-and-evaluating-longitudinal-distance","title":"Calculating and evaluating longitudinal distance","text":"

A sufficient longitudinal gap between vehicles will prevent any rear-end collision from happening. This includes an emergency stop or sudden braking scenarios.

The following information is required to evaluate the longitudinal distance between vehicles

  1. estimated speed of the dynamic objects
  2. predicted path of dynamic objects
  3. ego vehicle's current speed
  4. ego vehicle's predicted path (converted/estimated from candidate path)

The following figure illustrates how the safety check is performed on ego vs. dynamic objects.

Let v_front and a_front be the front vehicle's velocity and deceleration, respectively, and v_rear and a_rear be the rear vehicle's velocity and deceleration, respectively. Front vehicle and rear vehicle assignment will depend on which predicted path's pose is currently being evaluated.

The following figure illustrates front and rear vehicle velocity assignment.

Assuming the front vehicle brakes, d_front is the distance the front vehicle travels until it comes to a complete stop. The distance is computed from the equation of motion, which yields:

d_front = -std::pow(v_front,2) / (2*a_front)\n

Using the same formula to evaluate the rear vehicle's stopping distance d_rear is insufficient. When the rear vehicle's driver sees the front vehicle brake suddenly, it takes some time for the driver to process the information and react by pressing the brake. We call this delay the reaction time.

The reaction time is the duration from the moment the driver sees the front vehicle's brake light until the brake is pressed. Once the brake is pressed, a time margin (which might be caused by mechanical or control delay) also needs to be considered. With these two parameters included, the formula for d_rear is as follows.

d_rear = v_rear * rear_vehicle_reaction_time + v_rear * rear_vehicle_safety_time_margin + (-std::pow(v_rear,2) / (2 * a_rear))\n

Since there are no absolute values for the decelerations a_front and a_rear, both values are parameterized (expected_front_deceleration and expected_rear_deceleration, respectively) with an estimate of how much deceleration occurs when the brake is pressed.

The longitudinal distance is evaluated as follows

d_rear < d_front + d_inter\n

where d_inter is the relative longitudinal distance obtained at each evaluated predicted pose.

Finally, the minimum longitudinal distance threshold is applied to d_rear to prevent the vehicles from being too near to each other when d_rear = 0.0. This yields

std::max(longitudinal_distance_min_threshold, d_rear) < d_front + d_inter\n
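
Putting the pieces together, the longitudinal check at each evaluated pose can be sketched as follows (the deceleration parameters are negative; names are illustrative):

#include <algorithm>\n#include <cmath>\n\n// Sketch of the longitudinal safety check described above.\nbool isLongitudinallySafe(\n  double v_front, double a_front,  // a_front = expected_front_deceleration (< 0)\n  double v_rear, double a_rear,    // a_rear = expected_rear_deceleration (< 0)\n  double d_inter, double reaction_time, double safety_time_margin,\n  double longitudinal_distance_min_threshold)\n{\n  const double d_front = -std::pow(v_front, 2) / (2.0 * a_front);\n  const double d_rear = v_rear * reaction_time + v_rear * safety_time_margin +\n                        (-std::pow(v_rear, 2) / (2.0 * a_rear));\n  return std::max(longitudinal_distance_min_threshold, d_rear) < d_front + d_inter;\n}\n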
"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#collision-check-in-prepare-phase","title":"Collision check in prepare phase","text":"

The ego vehicle may need to secure ample inter-vehicle distance ahead of the target vehicle before attempting a lane change. The flag enable_collision_check_at_prepare_phase can be enabled to gain this behavior. The following image illustrates the differences between the false and true cases.

The parameter prepare_phase_ignore_target_speed_thresh can be configured to ignore the prepare phase collision check for targets whose speeds are less than a specific threshold, such as stationary or very slow-moving objects.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#if-the-lane-is-blocked-and-multiple-lane-changes","title":"If the lane is blocked and multiple lane changes","text":"

When driving on public roads with other vehicles, there are scenarios where lane changes cannot be executed. Suppose the candidate path is evaluated as unsafe, for example, due to incoming vehicles in the adjacent lane. In that case, the ego vehicle cannot change lanes, and it is impossible to reach the goal. Therefore, the ego vehicle must stop earlier at a certain distance and wait for the adjacent lane to be evaluated as safe. The minimum stopping distance is computed as follows.

minimum_lane_change_distance = num_of_lane_changes * (minimum_lane_change_length + backward_length_buffer_for_end_of_lane)\n
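
For example, with the default values minimum_lane_change_length = 16.5 m and backward_length_buffer_for_end_of_lane = 2.0 m, two consecutive lane changes require the ego vehicle to stop at least 2 x (16.5 + 2.0) = 37 m before the end of the lane.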

The following figure illustrates the case where the lane is blocked and multiple lane changes are required.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#intersection","title":"Intersection","text":"

Lane changes in intersections are prohibited by traffic regulations. Therefore, if the goal is placed close beyond the intersection, the lane change needs to be completed before the ego vehicle enters the intersection region. Similar to the lane-blocked case, if the path is unsafe, the ego vehicle stops and waits for the dynamic objects to pass by.

The following figure illustrates the intersection case.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#aborting-lane-change","title":"Aborting lane change","text":"

The abort process may result in three different outcomes: Cancel, Abort, and Stop/Cruise.

The following depicts the flow of the abort lane change check.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#cancel","title":"Cancel","text":"

Suppose the lane change trajectory is evaluated as unsafe. In that case, if the ego vehicle has not departed from the current lane yet, the trajectory will be reset, and the ego vehicle will resume the lane following maneuver.

The function can be enabled by setting enable_cancel_lane_change to true.

The following image illustrates the cancel process.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#abort","title":"Abort","text":"

Assume the ego vehicle has already departed from the current lane. In that case, it is dangerous to cancel the path, as it would cause the ego vehicle to change its heading direction abruptly. In such a case, planning a trajectory that allows the ego vehicle to return to the current path while minimizing heading changes is necessary, and the lane change module generates an abort path for this. The following images show an example of the abort path. Do note that the function DOESN'T GUARANTEE a safe abort process, as it does not check the presence of the surrounding objects and/or their reactions. The function can be enabled manually by setting both enable_cancel_lane_change and enable_abort_lane_change to true. The parameter abort_max_lateral_jerk needs to be set to a high value in order for it to work.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#stopcruise","title":"Stop/Cruise","text":"

This last behavior also occurs if the ego vehicle has departed from the current lane. If the abort function is disabled or the abort is no longer possible, the ego vehicle attempts to stop or to transition to the obstacle cruise mode. Do note that the module DOESN'T GUARANTEE a safe maneuver due to the unexpected behavior that might occur during these critical scenarios. The following images illustrate the situation.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#parameters","title":"Parameters","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#essential-lane-change-parameters","title":"Essential lane change parameters","text":"

The following parameters are configurable in lane_change.param.yaml.

Name Unit Type Description Default value lane_change_prepare_duration [s] double The preparation time for the ego vehicle to be ready to perform lane change. 4.0 lane_changing_safety_check_duration [s] double The total time that is taken to complete the lane-changing task. 8.0 minimum_lane_change_prepare_distance [m] double Minimum prepare distance for lane change 2.0 minimum_lane_change_length [m] double The minimum distance needed for changing lanes. 16.5 backward_length_buffer_for_end_of_lane [m] double The end-of-lane buffer to ensure the ego vehicle has enough distance to start the lane change 2.0 lane_change_finish_judge_buffer [m] double The additional buffer used to confirm lane change process completion 3.0 lane_changing_lateral_jerk [m/s3] double Lateral jerk value for lane change path generation 0.5 lane_changing_lateral_acc [m/s2] double Lateral acceleration value for lane change path generation 0.5 minimum_lane_change_velocity [m/s] double Minimum speed during the lane changing process. 2.78 prediction_time_resolution [s] double Time resolution for the object's path interpolation and collision check. 0.5 maximum_deceleration [m/s^2] double Ego vehicle maximum deceleration when performing lane change. 1.0 lane_change_sampling_num [-] int Number of possible lane-changing trajectories that are being influenced by deceleration 10"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#collision-checks-during-lane-change","title":"Collision checks during lane change","text":"

The following parameters are configurable in behavior_path_planner.param.yaml.

Name Unit Type Description Default value lateral_distance_max_threshold [m] double The lateral distance threshold that is used to determine whether the lateral distance between two objects is enough and whether lane change is safe. 2.0 longitudinal_distance_min_threshold [m] double The longitudinal distance threshold that is used to determine whether the longitudinal distance between two objects is enough and whether lane change is safe. 3.0 expected_front_deceleration [m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.0 expected_rear_deceleration [m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -1.0 rear_vehicle_reaction_time [s] double The reaction time of the rear vehicle's driver, which starts from the driver noticing the sudden braking of the front vehicle until the driver steps on the brake. 2.0 rear_vehicle_safety_time_margin [s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 2.0 enable_collision_check_at_prepare_phase [-] boolean Perform collision check starting from the prepare phase. If false, the collision check is only evaluated for the lane-changing phase. true prepare_phase_ignore_target_speed_thresh [m/s] double Ignore the prepare-phase collision check for objects whose speed is less than the configured value. enable_collision_check_at_prepare_phase must be true 0.1 use_predicted_path_outside_lanelet [-] boolean If true, include collision check for predicted paths that are out of the lanelet (freespace). false use_all_predicted_path [-] boolean If false, use only the predicted path that has the maximum confidence. true

(*1) the value must be negative.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#abort-lane-change","title":"Abort lane change","text":"

The following parameters are configurable in lane_change.param.yaml.

Name Unit Type Description Default value enable_cancel_lane_change [-] boolean Enable cancel lane change true enable_abort_lane_change [-] boolean Enable abort lane change. false abort_delta_time [s] double The time taken to start aborting. 3.0 abort_max_lateral_jerk [m/s3] double The maximum lateral jerk for the abort path 1000.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#debug","title":"Debug","text":"

The following parameters are configurable in lane_change.param.yaml.

Name Unit Type Description Default value publish_debug_marker [-] boolean Flag to publish debug marker false"},{"location":"planning/behavior_path_planner/behavior_path_planner_lane_change_design/#debug-marker-visualization","title":"Debug Marker & Visualization","text":"

To enable the debug marker, execute ros2 param set /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner lane_change.publish_debug_marker true (no restart is needed) or simply set the publish_debug_marker to true in the lane_change.param.yaml for permanent effect (restart is needed). Then add the marker /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/lanechange in rviz2.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_limitations/","title":"Limitations","text":"

The document describes the limitations that are currently present in the behavior_path_planner module.

The following items, among others, fall within the scope of these limitations:

"},{"location":"planning/behavior_path_planner/behavior_path_planner_limitations/#limitation-multiple-connected-opposite-lanes-require-linestring-with-shared-id","title":"Limitation: Multiple connected opposite lanes require Linestring with shared ID","text":"

To fully utilize the Lanelet2's API, the design of the vector map (.osm) needs to follow all the criteria described in Lanelet2 documentation. Specifically, in the case of 2 or more lanes, the Linestrings that divide the current lane with the opposite/adjacent lane need to have a matching Linestring ID. Assume the following ideal case.

In the image, Linestring ID51 is shared by Lanelet A and Lanelet B. Hence, we can directly use the available left, adjacentLeft, right, adjacentRight, and findUsages methods within Lanelet2's API to query the direction and the availability of the opposite lane.

const auto right_lane = routing_graph_ptr_->right(lanelet);\nconst auto adjacent_right_lane = routing_graph_ptr_->adjacentRight(lanelet);\nconst auto opposite_right_lane = lanelet_map_ptr_->laneletLayer.findUsages(lanelet.rightBound().invert());\n

The following images show situations where these APIs do not work directly. This means that we cannot use them straight away, and several assumptions and logical instructions are needed to make these APIs work.

In this example (multiple linestring issues), Lanelet C contains Linestring ID61 and ID62, while Lanelet D contains Linestring ID63 and ID64. Although Linestring ID62 and ID64 have identical point IDs and seem visually connected, the API treats these Linestrings as though they are separate. When it searches for any Lanelet connected via Linestring ID62, it returns NULL, since ID62 connects only to Lanelet C and to no other Lanelet.

In this case, it is possible to forcefully search for the lanelet availability by checking which lanelets contain the points, using the getLaneletFromPoint method. However, the implementation requires complex rules for it to work. Take the following images as an example.

Assume Object X is in Lanelet F. We can forcefully search Lanelet E via Point 7, and it will work if Point 7 is used by only 2 lanelets. However, the complexity increases when we want to start searching for the direction of the opposite lane. We can infer the direction of the lanelet by using mathematical operations (the dot product of vector V_ID72 (Point 6 minus Point 9) and V_ID74 (Point 7 minus Point 8)). But notice that we did not use Point 7 in V_ID72. This is because finding it requires an iteration, adding additional non-beneficial computation.

Suppose the points are used by more than 2 lanelets. In that case, we would have to find the differences for all lanelets, and the result might be undefined. The reason is that the differences between the coordinates do not reflect the actual shape of the lanelet. The following image demonstrates this point.

There are many other possible solutions, but further attempts to solve this might cause issues in the future, especially for maintaining or scaling up the software.

In conclusion, the multiple Linestring issues will not be supported. Covering these scenarios might give the user an \"everything is possible\" impression. This is dangerous since any attempt to create a non-standardized vector map is not compliant with safety regulations.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_limitations/#limitation-avoidance-at-corners-and-intersections","title":"Limitation: Avoidance at Corners and Intersections","text":"

Currently, the implementation doesn't cover avoidance at corners and intersections. The reason is similar to here. However, this case can still be supported in the future (assuming the vector map is defined correctly).

"},{"location":"planning/behavior_path_planner/behavior_path_planner_limitations/#limitation-chattering-shifts","title":"Limitation: Chattering shifts","text":"

The shifted path may chatter as a result of various factors, for example, the bounding box shape or position from the perception input. Sometimes it is difficult for perception to obtain complete information about the object's size. As the object size is updated, the object length is also updated. This might cause the shift points to be recalculated, resulting in chattering shift points.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/","title":"Path Generation design","text":"

This document explains how the path is generated for lane change and avoidance, etc. The implementation can be found in path_shifter.hpp.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/#overview","title":"Overview","text":"

The base idea of the path generation in lane change and avoidance is to smoothly shift the reference path, such as the center line, in the lateral direction. This is achieved by using a constant-jerk profile as in the figure below. More details on how it is used can be found in README. It is assumed that the reference path is smooth enough for this algorithm.

The figure below explains how the application of a constant lateral jerk l^{'''}(s) can be used to induce lateral shifting. In order to comply with the limits on lateral acceleration and velocity, zero-jerk time is employed in the figure ( T_a and T_v ). In each interval where constant jerk is applied, the shift position l(s) can be characterized by a third-degree polynomial. Therefore the shift length from the reference path can then be calculated by combining spline curves.

Note that, since T_v rarely occurs in lane change and avoidance scenarios, it is not considered in the current implementation.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/#mathematical-derivation","title":"Mathematical Derivation","text":"

By applying simple integral operations, the following analytical equations can be derived to describe the shift distance l(t) at each time under lateral jerk, acceleration, and velocity constraints.

\\begin{align}\nl_1&= \\frac{1}{6}jT_j^3\\\\[10pt]\nl_2&= \\frac{1}{6}j T_j^3 + \\frac{1}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j\\\\[10pt]\nl_3&= j  T_j^3 + \\frac{3}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j\\\\[10pt]\nl_4&= j T_j^3 + \\frac{3}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j + j(T_a + T_j)T_j T_v\\\\[10pt]\nl_5&= \\frac{11}{6} j T_j^3 + \\frac{5}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j + j(T_a + T_j)T_j T_v \\\\[10pt]\nl_6&= \\frac{11}{6} j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j + j(T_a + T_j)T_j T_v\\\\[10pt]\nl_7&= 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j + j(T_a + T_j)T_j T_v\n\\end{align}\n

These equations are used to determine the shape of a path. Additionally, by applying further mathematical operations to these basic equations, the expressions of the following subsections can be derived.
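
As a quick cross-check, these formulas can be transcribed directly into code. This is a minimal sketch, not the actual path_shifter implementation; the symbols j, T_j, T_a, and T_v follow the definitions above.

```python
def segment_shift_lengths(j: float, t_j: float, t_a: float, t_v: float) -> list:
    """Shift length l_1 ... l_7 at each constant-jerk segment boundary."""
    cruise = j * (t_a + t_j) * t_j * t_v  # contribution of the zero-jerk interval T_v
    l1 = j * t_j**3 / 6.0
    l2 = l1 + 0.5 * j * t_a * t_j**2 + 0.5 * j * t_a**2 * t_j
    l3 = j * t_j**3 + 1.5 * j * t_a * t_j**2 + 0.5 * j * t_a**2 * t_j
    l4 = l3 + cruise
    l5 = (11.0 / 6.0) * j * t_j**3 + 2.5 * j * t_a * t_j**2 + 0.5 * j * t_a**2 * t_j + cruise
    l6 = (11.0 / 6.0) * j * t_j**3 + 3.0 * j * t_a * t_j**2 + j * t_a**2 * t_j + cruise
    l7 = 2.0 * j * t_j**3 + 3.0 * j * t_a * t_j**2 + j * t_a**2 * t_j + cruise
    return [l1, l2, l3, l4, l5, l6, l7]
```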

"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/#calculation-of-maximum-acceleration-from-transition-time-and-final-shift-length","title":"Calculation of Maximum Acceleration from transition time and final shift length","text":"

In the case where there are no limitations on lateral velocity and lateral acceleration, the maximum lateral acceleration during the shifting can be calculated as follows. The constant-jerk time is given by T_j = T_{\\rm total}/4 because of its symmetric property. Since T_a=T_v=0, the final shift length L=l_7=2jT_j^3 can be determined using the above equation. The maximum lateral acceleration is then given by a_{\\rm max} =jT_j. This results in the following expression for the maximum lateral acceleration:

\\begin{align}\na_{\\rm max}  = \\frac{8L}{T_{\\rm total}^2}\n\\end{align}\n
"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/#calculation-of-ta-tj-and-jerk-from-acceleration-limit","title":"Calculation of Ta, Tj and jerk from acceleration limit","text":"

In the case where there are no limitations on lateral velocity, the constant-jerk and acceleration times, as well as the required jerk can be calculated from the acceleration limit, total time, and final shift length as follows. Since T_v=0, the final shift length is given by:

\\begin{align}\nL = l_7 = 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j\n\\end{align}\n

Additionally, the velocity profile reveals the following relations:

\\begin{align}\na_{\\rm lim} &= j T_j\\\\\nT_{\\rm total} &= 4T_j + 2T_a\n\\end{align}\n

By solving these three equations, the following can be obtained:

\\begin{align}\nT_j&=\\frac{T_{\\rm total}}{2} - \\frac{2L}{a_{\\rm lim} T_{\\rm total}}\\\\[10pt]\nT_a&=\\frac{4L}{a_{\\rm lim} T_{\\rm total}} - \\frac{T_{\\rm total}}{2}\\\\[10pt]\njerk&=\\frac{2a_{\\rm lim} ^2T_{\\rm total}}{a_{\\rm lim} T_{\\rm total}^2-4L}\n\\end{align}\n

where T_j is the constant-jerk time, T_a is the constant acceleration time, j is the required jerk, a_{\\rm lim} is the acceleration limit, and L is the final shift length.
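
These closed-form solutions translate directly to code. The following is a sketch with an illustrative function name (the real implementation lives in the C++ planner):

```python
def ta_tj_jerk_from_accel_limit(a_lim: float, t_total: float, L: float):
    """T_j, T_a, and jerk for a given acceleration limit (T_v = 0)."""
    t_j = t_total / 2.0 - 2.0 * L / (a_lim * t_total)
    t_a = 4.0 * L / (a_lim * t_total) - t_total / 2.0
    jerk = 2.0 * a_lim**2 * t_total / (a_lim * t_total**2 - 4.0 * L)
    if t_j < 0.0 or t_a < 0.0:
        return None  # the acceleration limit cannot be satisfied with these inputs
    return t_j, t_a, jerk
```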

"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/#calculation-of-required-time-from-jerk-and-acceleration-constraint","title":"Calculation of Required Time from Jerk and Acceleration Constraint","text":"

In the case where there are no limitations on lateral velocity, the total time required for shifting can be calculated from the jerk and acceleration limits and the final shift length as follows. By solving the two equations given above:

L = l_7 = 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j,\\quad a_{\\rm lim} = j T_j\n

we obtain the following expressions:

\begin{align}\nT_j &= \frac{a_{\rm lim}}{j}\\[10pt]\nT_a &= \frac{1}{2}\sqrt{\left(\frac{a_{\rm lim}}{j}\right)^2 + \frac{4L}{a_{\rm lim}}} - \frac{3a_{\rm lim}}{2j}\n\end{align}\n

The total time required for shifting can then be calculated as T_{\\rm total}=4T_j+2T_a.
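
The same relations in code form, including the degenerate case where the acceleration limit is never reached (a sketch with an illustrative function name):

```python
import math

def required_shift_time(j: float, a_lim: float, L: float) -> float:
    """Total time T_total = 4*T_j + 2*T_a under jerk and acceleration limits (T_v = 0)."""
    t_j = a_lim / j
    t_a = 0.5 * math.sqrt((a_lim / j) ** 2 + 4.0 * L / a_lim) - 1.5 * a_lim / j
    if t_a < 0.0:
        # acceleration limit never reached: pure constant-jerk profile, L = 2*j*T_j^3
        t_j = (L / (2.0 * j)) ** (1.0 / 3.0)
        t_a = 0.0
    return 4.0 * t_j + 2.0 * t_a
```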

"},{"location":"planning/behavior_path_planner/behavior_path_planner_path_generation_design/#limitation","title":"Limitation","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/","title":"Pull Out design","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#purpose-role","title":"Purpose / Role","text":"

Pull out from the shoulder lane without colliding with objects.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#design","title":"Design","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#general-parameters-for-pull_out","title":"General parameters for pull_out","text":"Name Unit Type Description Default value th_arrived_distance_m [m] double distance threshold for arrival of path termination 1.0 th_stopped_velocity_mps [m/s] double velocity threshold for arrival of path termination 0.01 th_stopped_time_sec [s] double time threshold for arrival of path termination 1.0 collision_check_margin [m] double Obstacle collision check margin 1.0 collision_check_distance_from_end [m] double collision check distance from end point. currently only for pull out 15.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#safe-check-with-obstacles-in-shoulder-lane","title":"Safe check with obstacles in shoulder lane","text":"
  1. Calculate the ego vehicle's footprints on the pull out path from the current position to the pull out end point. (Illustrated by the blue frames)
  2. Calculate the polygon of each object located in the shoulder lane
  3. If the distance between a footprint and a polygon is less than the threshold (default: 1.0 m), the path is judged as unsafe (see the sketch below)
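
A minimal sketch of this margin check, assuming shapely polygons for the footprints and objects (the actual module is implemented in C++ and differs in detail):

```python
from shapely.geometry import Polygon

def is_safe_pull_out_path(footprints, objects, collision_check_margin=1.0):
    """Judge the path unsafe if any object polygon comes closer than the
    margin to any ego footprint along the pull out path."""
    for footprint in footprints:  # ego footprint per path point (blue frames)
        for obj in objects:       # object polygons in the shoulder lane
            if footprint.distance(obj) < collision_check_margin:
                return False
    return True

# usage sketch with dummy rectangles: a 2.0 m gap passes the 1.0 m margin
ego = [Polygon([(0, 0), (4, 0), (4, 2), (0, 2)])]
parked = [Polygon([(6, 0), (9, 0), (9, 2), (6, 2)])]
print(is_safe_pull_out_path(ego, parked))  # True
```
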
"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#path-generation","title":"Path Generation","text":"

There are two path generation methods.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#shift-pull-out","title":"shift pull out","text":"

The pull out distance is calculated from the speed, the lateral deviation, and the lateral jerk. The lateral jerk is searched for between the predetermined minimum and maximum values, and the one that generates a safe path is selected, as in the sketch below.
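
Under the simplifying assumption T_a = T_v = 0 from the path generation design (so L = 2 j T_j^3 and T_total = 4 T_j), the distance calculation and jerk sampling might look like the following sketch; the function name and values are illustrative, not the actual API:

```python
import numpy as np

def pull_out_distance(velocity: float, lateral_shift: float, lateral_jerk: float) -> float:
    # with T_a = T_v = 0: L = 2*j*T_j^3 and T_total = 4*T_j
    t_total = 4.0 * (lateral_shift / (2.0 * lateral_jerk)) ** (1.0 / 3.0)
    return velocity * t_total

for jerk in np.linspace(0.5, 2.0, 4):  # minimum_lateral_jerk .. maximum_lateral_jerk
    distance = pull_out_distance(2.0, 3.0, jerk)  # shift_pull_out_velocity, 3 m shift
    # generate a candidate path over max(distance, minimum_shift_pull_out_distance)
    # and select the first candidate that passes the safety check
```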

shift pull out video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#parameters-for-shift-pull-out","title":"parameters for shift pull out","text":"Name Unit Type Description Default value enable_shift_pull_out [-] bool flag whether to enable shift pull out true shift_pull_out_velocity [m/s] double velocity of shift pull out 2.0 pull_out_sampling_num [-] int Number of samplings in the minimum to maximum range of lateral_jerk 4 maximum_lateral_jerk [m/s3] double maximum lateral jerk 2.0 minimum_lateral_jerk [m/s3] double minimum lateral jerk 0.5 minimum_shift_pull_out_distance [m] double minimum shift pull out distance. if calculated pull out distance is shorter than this, use this for path generation. 20.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#geometric-pull-out","title":"geometric pull out","text":"

Generate two arc paths with discontinuous curvature. The ego vehicle stops once in the middle of the path in order to change the steering angle on the spot. See also [1] for details of the algorithm.

geometric pull out video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#parameters-for-geometric-pull-out","title":"parameters for geometric pull out","text":"Name Unit Type Description Default value enable_geometric_pull_out [-] bool flag whether to enable geometric pull out true divide_pull_out_path [-] bool flag whether to divide arc paths.\u3000The path is assumed to be divided because the curvature is not continuous. But it requires a stop during the departure. false geometric_pull_out_velocity [m/s] double velocity of geometric pull out 1.0 arc_path_interval [m] double path points interval of arc paths of geometric pull out 1.0 lane_departure_margin [m] double margin of deviation to lane right 0.2 pull_out_max_steer_angle [rad] double maximum steer angle for path generation 0.26"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#backward-pull-out-start-point-search","title":"backward pull out start point search","text":"

If a safe path cannot be generated from the current position, search backwards for a pull out start point at regular intervals (default: 2.0 m), as sketched below.
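
A sketch of how the candidate start points could be enumerated (parameter names follow the table in the next subsection; the loop is illustrative):

```python
import numpy as np

backward_search_resolution = 2.0  # [m]
max_back_distance = 30.0          # [m]

# arc-length offsets behind the current position; for each offset, try to
# generate a pull out path and keep the first safe one (per search_priority)
offsets = np.arange(0.0, max_back_distance + 1e-6, backward_search_resolution)
```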

pull out after backward driving video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_out_design/#parameters-for-backward-pull-out-start-point-search","title":"parameters for backward pull out start point search","text":"Name Unit Type Description Default value enable_back [-] bool flag whether to search backward for start_point true search_priority [-] string In the case of efficient_path, use efficient paths even if the back distance is longer. In case of short_back_distance, use a path with as short a back distance efficient_path max_back_distance [m] double maximum back distance 30.0 backward_search_resolution [m] double distance interval for searching backward pull out start point 2.0 backward_path_update_duration [s] double time interval for searching backward pull out start point. this prevents chattering between back driving and pull_out 3.0 ignore_distance_from_lane_end [m] double distance from end of pull out lanes for ignoring start candidates 15.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/","title":"Pull Over design","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#purpose-role","title":"Purpose / Role","text":"

Search for a space where there are no objects and pull over there.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#design","title":"Design","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#general-parameters-for-pull_over","title":"General parameters for pull_over","text":"Name Unit Type Description Default value request_length [m] double when the ego-vehicle approaches the goal by this distance, the module is activated. 200.0 th_arrived_distance [m] double distance threshold for arrival of path termination 1.0 th_stopped_velocity [m/s] double velocity threshold for arrival of path termination 0.01 th_stopped_time [s] double time threshold for arrival of path termination 2.0 pull_over_velocity [m/s] double decelerate to this speed by the goal search area 2.0 pull_over_minimum_velocity [m/s] double speed of pull_over after stopping once. this prevents excessive acceleration. 1.38 margin_from_boundary [m] double distance margin from edge of the shoulder lane 0.5 decide_path_distance [m] double decide path if it approaches this distance relative to the parking position. after that, no path planning and goal search are performed 10.0 maximum_deceleration [m/s2] double maximum deceleration. it prevents sudden deceleration when a parking path cannot be found suddenly 1.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#collision-check","title":"collision check","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#occupancy-grid-based-collision-check","title":"occupancy grid based collision check","text":"

Generate footprints from ego-vehicle path points and determine obstacle collision from the value of occupancy_grid of the corresponding cell.
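
A minimal sketch of the cell lookup, assuming the footprint has already been rasterized to grid indices; the threshold corresponds to the obstacle_threshold parameter listed below:

```python
import numpy as np

def footprint_collides(grid: np.ndarray, footprint_cells, obstacle_threshold=60):
    """Check ego footprint cells against occupancy grid values (sketch).

    grid: 2D occupancy grid with cell values 0-100
    footprint_cells: iterable of (row, col) indices covered by the footprint
    """
    for r, c in footprint_cells:
        if grid[r, c] >= obstacle_threshold:
            return True  # cell is considered an obstacle
    return False
```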

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-for-occupancy-grid-based-collision-check","title":"Parameters for occupancy grid based collision check","text":"Name Unit Type Description Default value use_occupancy_grid [-] bool flag whether to use occupancy grid for collision check true use_occupancy_grid_for_longitudinal_margin [-] bool flag whether to use occupancy grid for keeping longitudinal margin false occupancy_grid_collision_check_margin [m] double margin to calculate ego-vehicle cells from footprint. 0.0 theta_size [-] int size of theta angle to be considered. angular resolution for collision check will be 2\\pi / theta_size [rad]. 360 obstacle_threshold [-] int threshold of cell values to be considered as obstacles 60"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#onject-recognition-based-collision-check","title":"onject recognition based collision check","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-for-object-recognition-based-collision-check","title":"Parameters for object recognition based collision check","text":"Name Unit Type Description Default value use_object_recognition [-] bool flag whether to use object recognition for collision check true object_recognition_collision_check_margin [m] double margin to calculate ego-vehicle cells from footprint. 1.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#goal-search","title":"Goal Search","text":"

If it is not possible to park safely at the given goal, /planning/scenario_planning/modified_goal is searched for within a certain range of the shoulder lane.

goal search video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-for-goal-search","title":"Parameters for goal search","text":"Name Unit Type Description Default value search_priority [-] string In case efficient_path use a goal that can generate an efficient path( priority is shift_parking -> arc_forward_parking -> arc_backward_parking). In case close_goal use the closest goal to the original one. efficient_path enable_goal_research [-] double flag whether to search goal true forward_goal_search_length [m] double length of forward range to be explored from the original goal 20.0 backward_goal_search_length [m] double length of backward range to be explored from the original goal 20.0 goal_search_interval [m] double distance interval for goal search 2.0 longitudinal_margin [m] double margin between ego-vehicle at the goal position and obstacles 3.0 max_lateral_offset [m] double maximum offset of goal search in the lateral direction 3.0 lateral_offset_interval [m] double distance interval of goal search in the lateral direction 3.0 ignore_distance_from_lane_start [m] double distance from start of pull over lanes for ignoring goal candidates 15.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#path-generation","title":"Path Generation","text":"

There are three path generation methods. The path is generated with a certain margin (default: 0.5 m) from the left boundary of the shoulder lane.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#shift-parking","title":"shift parking","text":"

The pull over distance is calculated from the speed, the lateral deviation, and the lateral jerk. The lateral jerk is searched for between the predetermined minimum and maximum values, and the one that satisfies the ready conditions described above is output.

  1. Apply a uniform offset to the centerline of the shoulder lane to ensure the margin
  2. In the section between the merge start and end, the path is shifted by the method used to generate the avoidance path (four segmental constant-jerk polynomials)
  3. Combine this path with the centerline of the road lane

shift_parking video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-for-shift-parking","title":"Parameters for shift parking","text":"Name Unit Type Description Default value enable_shift_parking [-] bool flag whether to enable shift parking true pull_over_sampling_num [-] int Number of samplings in the minimum to maximum range of lateral_jerk 4 maximum_lateral_jerk [m/s3] double maximum lateral jerk 2.0 minimum_lateral_jerk [m/s3] double minimum lateral jerk 0.5 deceleration_interval [m] double distance of deceleration section 15.0 after_pull_over_straight_distance [m] double straight line distance after pull over end point 5.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#geometric-parallel-parking","title":"geometric parallel parking","text":"

Generate two arc paths with discontinuous curvature. The vehicle stops twice in the middle of the path in order to change the steering angle on the spot. There are two path generation methods: forward and backward. See also [1] for details of the algorithm. There is also a simple Python implementation.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-geometric-parallel-parking","title":"Parameters geometric parallel parking","text":"Name Unit Type Description Default value arc_path_interval [m] double interval between arc path points 1.0 pull_over_max_steer_rad [rad] double maximum steer angle for path generation. it may not be possible to control steer up to max_steer_angle in vehicle_info when stopped 0.35"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#arc-forward-parking","title":"arc forward parking","text":"

Generate two forward arc paths.

arc_forward_parking video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-arc-forward-parking","title":"Parameters arc forward parking","text":"Name Unit Type Description Default value enable_arc_forward_parking [-] bool flag whether to enable arc forward parking true after_forward_parking_straight_distance [m] double straight line distance after pull over end point 2.0 forward_parking_velocity [m/s] double velocity when forward parking 1.38 forward_parking_lane_departure_margin [m/s] double lane departure margin for front left corner of ego-vehicle when forward parking 0.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#arc-backward-parking","title":"arc backward parking","text":"

Generate two backward arc paths.

arc_backward_parking video

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-arc-backward-parking","title":"Parameters arc backward parking","text":"Name Unit Type Description Default value enable_arc_backward_parking [-] bool flag whether to enable arc backward parking true after_backward_parking_straight_distance [m] double straight line distance after pull over end point 2.0 backward_parking_velocity [m/s] double velocity when backward parking -1.38 backward_parking_lane_departure_margin [m/s] double lane departure margin for front right corner of ego-vehicle when backward 0.0"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#freespace-parking","title":"freespace parking","text":"

If the vehicle gets stuck with lane_parking, run freespace_parking. To run this feature, you need to set parking_lot to the map, set activate_by_scenario of costmap_generator to false, and set enable_freespace_parking to true.

*Series execution with avoidance_module in the flowchart is under development.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#unimplemented-parts-limitations-for-freespace-parking","title":"Unimplemented parts / limitations for freespace parking","text":""},{"location":"planning/behavior_path_planner/behavior_path_planner_pull_over_design/#parameters-freespace-parking","title":"Parameters freespace parking","text":"Name Unit Type Description Default value enable_freespace_parking [-] bool This flag enables freespace parking, which runs when the vehicle is stuck due to e.g. obstacles in the parking area. true

See freespace_planner for other parameters.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_side_shift_design/","title":"Side Shift design","text":"

(For remote control) Shift the path to left or right according to an external instruction.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_side_shift_design/#flowchart","title":"Flowchart","text":"

```plantuml @startuml skinparam monochrome true skinparam defaultTextAlignment center skinparam noteTextAlignment left

title path generation

start partition plan { if (lateral_offset_change_request_ == true \\n && \\n (shifting_status_ == BEFORE_SHIFT \\n || \\n shifting_status_ == AFTER_SHIFT)) then ( true) partition replace-shift-line { if ( shift line is inserted in the path ) then ( yes) :erase left shift line; else ( no) endif :calcShiftLines; :add new shift lines; :inserted_lateral_offset_ = requested_lateral_offset_ \\n inserted_shift_lines_ = new_shift_line; } else( false) endif stop @enduml ```

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/","title":"Turn Signal design","text":"

Turn Signal decider determines necessary blinkers.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#purpose-role","title":"Purpose / Role","text":"

This module is responsible for activating the necessary blinker while driving. It uses a rule-based algorithm to determine the blinkers, and the details of this algorithm are described in the following sections. Note that this algorithm is strictly based on the Japanese Road Traffic Law.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#assumptions","title":"Assumptions","text":"

Autoware has the following order of priorities for turn signals.

  1. Activate turn signal to safely navigate ego vehicle and protect other road participants
  2. Follow traffic laws
  3. Follow human driving practices
"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#limitations","title":"Limitations","text":"

Currently, this algorithm can sometimes give unnatural (though not wrong) blinkers in complicated situations. This is because it tries to follow the road traffic law and cannot clearly resolve blinker conflicts in such environments.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#parameters-for-turn-signal-decider","title":"Parameters for turn signal decider","text":"Name Unit Type Description Default value turn_signal_intersection_search_distance [m] double constant search distance to decide activation of blinkers at intersections 30 turn_signal_intersection_angle_threshold_degree deg double angle threshold to determined the end point of intersection required section 15 turn_signal_minimum_search_distance [m] double minimum search distance for avoidance and lane change 10 turn_signal_search_time [s] double search time for to decide activation of blinkers 3.0 turn_signal_shift_length_threshold [m] double shift length threshold to decide activation of blinkers 0.3 turn_signal_on_swerving [-] bool flag to activate blinkers when swerving true

Note that the default values for turn_signal_intersection_search_distance and turn_signal_search_time strictly follow the Japanese Road Traffic Law. If these default values are not suitable for your country, you should change them in the configuration files.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This algorithm assumes that each blinker has two sections: a desired section and a required section. These two sections are depicted in the following diagram.

These two sections have the following meanings.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#-desired-section","title":"- Desired Section","text":"
- This section is defined by road traffic laws. It cannot be longer or shorter than the designated length defined by the law.\n- In this section, you do not have to activate the designated blinkers if it is dangerous to do so.\n
"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#-required-section","title":"- Required Section","text":"
- In this section, ego vehicle must activate designated blinkers. However, if there are blinker conflicts, it must solve them based on the algorithm we mention later in this document.\n- Required section cannot be longer than desired section.\n

For left turns, right turns, avoidance, lane changes, pull over, and pull out, we define these two sections, as elaborated in the following parts.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#1-left-and-right-turn","title":"1. Left and Right turn","text":"

The turn signal decider checks whether each lanelet on the map has turn_direction information. If a lanelet has this information, it activates the necessary blinker based on it.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#2-avoidance","title":"2. Avoidance","text":"

Avoidance can be separated into two sections: the first section is from the start point of the path shift to the end of the path shift, and the second section is from the end of the shift to the end of the avoidance maneuver. Note that the avoidance module will not activate the turn signal when its shift length is below turn_signal_shift_length_threshold.

First section

Second section

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#3-lane-change","title":"3. Lane Change","text":" "},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#4-pull-out","title":"4. Pull out","text":" "},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#5-pull-over","title":"5. Pull over","text":"

When several blinkers overlap, priority is given to the one that comes first. However, this rule sometimes activates unnatural blinkers, so the turn signal decider uses the following five rules to decide the necessary turn signal.

Based on these five rules, the turn signal decider can resolve blinker conflicts. The following pictures show some examples of such conflicts.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#-several-right-and-left-turns-on-short-sections","title":"- Several right and left turns on short sections","text":"

In this scenario, the ego vehicle has to pass several turns that are close to each other. Since this pattern can be solved by the pattern1 rule, the overall result is depicted in the following picture.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#-avoidance-with-left-turn-1","title":"- Avoidance with left turn (1)","text":"

In this scene, the ego vehicle has to deal with an obstacle on its original path and also make a left turn. The overall result can vary with the position of the obstacle, but an image of the result is shown in the following picture.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#-avoidance-with-left-turn-2","title":"- Avoidance with left turn (2)","text":"

As in the previous scenario, the ego vehicle has to avoid the obstacle and also make a left turn. However, in this scene, the obstacle is parked after the intersection. Similar to the previous case, the overall result can vary with the position of the obstacle, but an image of the result is shown in the following picture.

"},{"location":"planning/behavior_path_planner/behavior_path_planner_turn_signal_design/#-lane-change-and-left-turn","title":"- Lane change and left turn","text":"

In this scenario, the ego vehicle has to perform a lane change before making a left turn. In the following example, the ego vehicle does not activate the left turn signal until it reaches the end point of the lane change path.

"},{"location":"planning/behavior_velocity_planner/","title":"Behavior Velocity Planner","text":""},{"location":"planning/behavior_velocity_planner/#overview","title":"Overview","text":"

behavior_velocity_planner is a planner that adjusts velocity based on traffic rules. It consists of several modules. Please refer to the links listed below for details on each module.

Each module plans velocity based on the base_link (center of the rear-wheel axle) pose. For example, in order to stop at a stop line with the vehicle's front on the stop line, the planner computes the base_link stop position from the distance between base_link and the vehicle front, and modifies the path velocity from that base_link position, as in the sketch below.
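
For example, the offset computation amounts to the following (a sketch with made-up numbers):

```python
base_link_to_front = 3.8  # [m] base_link (rear-axle center) to vehicle front (example value)
stop_line_s = 52.0        # [m] stop line arc length along the path (example value)

# stop so that the vehicle front, not base_link, is on the stop line
stop_s = stop_line_s - base_link_to_front
# path point velocities at arc length >= stop_s are then set to zero
```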

"},{"location":"planning/behavior_velocity_planner/#input-topics","title":"Input topics","text":"Name Type Description ~input/path_with_lane_id autoware_auto_planning_msgs::msg::PathWithLaneId path with lane_id ~input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin vector map ~input/vehicle_odometry nav_msgs::msg::Odometry vehicle velocity ~input/dynamic_objects autoware_auto_perception_msgs::msg::PredictedObjects dynamic objects ~input/no_ground_pointcloud sensor_msgs::msg::PointCloud2 obstacle pointcloud ~/input/compare_map_filtered_pointcloud sensor_msgs::msg::PointCloud2 obstacle pointcloud filtered by compare map. Note that this is used only when the detection method of run out module is Points. ~input/traffic_signals autoware_auto_perception_msgs::msg::TrafficSignalArray traffic light states"},{"location":"planning/behavior_velocity_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/path autoware_auto_planning_msgs::msg::Path path to be followed ~output/stop_reasons tier4_planning_msgs::msg::StopReasonArray reasons that cause the vehicle to stop"},{"location":"planning/behavior_velocity_planner/#node-parameters","title":"Node parameters","text":"Parameter Type Description launch_blind_spot bool whether to launch blind_spot module launch_crosswalk bool whether to launch crosswalk module launch_detection_area bool whether to launch detection_area module launch_intersection bool whether to launch intersection module launch_traffic_light bool whether to launch traffic light module launch_stop_line bool whether to launch stop_line module launch_occlusion_spot bool whether to launch occlusion_spot module launch_run_out bool whether to launch run_out module launch_speed_bump bool whether to launch speed_bump module forward_path_length double forward path length backward_path_length double backward path length max_accel double (to be a global parameter) max acceleration of the vehicle system_delay double (to be a global parameter) delay time until output control command delay_response_time double (to be a global parameter) delay time of the vehicle's response to control commands"},{"location":"planning/behavior_velocity_planner/blind-spot-design/","title":"Blind spot design","text":""},{"location":"planning/behavior_velocity_planner/blind-spot-design/#blind-spot","title":"Blind Spot","text":""},{"location":"planning/behavior_velocity_planner/blind-spot-design/#role","title":"Role","text":"

Checks the blind spot using dynamic object information while the ego vehicle is turning right/left, and plans the velocity for starting/stopping.

"},{"location":"planning/behavior_velocity_planner/blind-spot-design/#activation-timing","title":"Activation Timing","text":"

This function is activated when the lane id of the target path has an intersection label (i.e. the turn_direction attribute is left or right).

"},{"location":"planning/behavior_velocity_planner/blind-spot-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Sets a stop line, a pass judge line, a detection area, and a conflict area based on the map information and the ego position.

Stop/Go state: when both conditions are met for any object, the module transitions to the \"stop\" state and inserts zero velocity to stop the vehicle.

In order to avoid a rapid stop, the \u201cstop\u201d judgement is not executed after the judgment line is passed.

Once a \"stop\" is judged, it will not transit to the \"go\" state until the \"go\" judgment continues for a certain period in order to prevent chattering of the state (e.g. 2 seconds).

"},{"location":"planning/behavior_velocity_planner/blind-spot-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_line_margin double [m] a margin that the vehicle tries to stop before stop_line backward_length double [m] distance from closest path point to the edge of beginning point. ignore_width_from_center_line double [m] ignore threshold that vehicle behind is collide with ego vehicle or not max_future_movement_time double [s] maximum time for considering future movement of object"},{"location":"planning/behavior_velocity_planner/blind-spot-design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/crosswalk-design/","title":"Crosswalk design","text":""},{"location":"planning/behavior_velocity_planner/crosswalk-design/#crosswalk","title":"Crosswalk","text":""},{"location":"planning/behavior_velocity_planner/crosswalk-design/#role","title":"Role","text":"

This module judges whether the ego vehicle should stop in front of a crosswalk in order to provide safe passage for pedestrians and bicycles, based on the objects' behavior and the surrounding traffic.

crosswalk module"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#activation-timing","title":"Activation Timing","text":"

The manager launches crosswalk scene modules when the reference path conflicts with crosswalk lanelets.

"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#module-parameters","title":"Module Parameters","text":""},{"location":"planning/behavior_velocity_planner/crosswalk-design/#common-parameters","title":"Common parameters","text":"Parameter Type Description show_processing_time bool whether to show processing time"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#parameters-for-stop-position","title":"Parameters for stop position","text":"

The crosswalk module determines a stop position at least stop_margin away from the object.

stop margin

The stop line is the reference point for the stopping position of the vehicle, but if there is no stop line in front of the crosswalk, the position stop_line_distance meters before the crosswalk becomes the virtual stop line for the vehicle. Then, if the stop position determined from stop_margin lies in front of the stop line determined from the HDMap or stop_line_distance, the actual stop position is determined according to stop_margin in principle, and vice versa.

explicit stop line

virtual stop point

On the other hand, if a pedestrian (or bicycle) is crossing a wide crosswalk such as those seen at scramble intersections, and the pedestrian position is more than stop_line_margin meters away from the stop line, the actual stop position is determined from stop_margin and the pedestrian position, not from the stop line.

stop at wide crosswalk

See the workflow in algorithms section.

Parameter Type Description stop_margin double [m] the vehicle decelerates to be able to stop in front of object with margin stop_line_distance double [m] make stop line away from crosswalk when no explicit stop line exists stop_line_margin double [m] if objects cross X meters behind the stop line, the stop position is determined according to the object position (stop_margin meters before the object) stop_position_threshold double [m] threshold for check whether the vehicle stop in front of crosswalk"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#parameters-for-ego-velocity","title":"Parameters for ego velocity","text":"Parameter Type Description slow_velocity double [m/s] target vehicle velocity when module receive slow down command from FOA max_slow_down_jerk double [m/sss] minimum jerk deceleration for safe brake max_slow_down_accel double [m/ss] minimum accel deceleration for safe brake no_relax_velocity double [m/s] if the current velocity is less than X m/s, ego always stops at the stop position(not relax deceleration constraints)"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#parameters-for-stuck-vehicle","title":"Parameters for stuck vehicle","text":"

If there is a low-speed or stopped vehicle ahead of the crosswalk and there is not enough space between the crosswalk and the vehicle (see the following figure), closing the distance to that vehicle could cause the ego vehicle to be stuck on the crosswalk. In this situation, the module plans to stop before the crosswalk and wait until the vehicle moves away, even if there are no pedestrians or bicycles.

stuck vehicle attention range Parameter Type Description stuck_vehicle_velocity double [m/s] maximum velocity threshold whether the vehicle is stuck max_lateral_offset double [m] maximum lateral offset for stuck vehicle position should be looked stuck_vehicle_attention_range double [m] the detection area is defined as X meters behind the crosswalk"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#parameters-for-pass-judge-logic","title":"Parameters for pass judge logic","text":"

Also see algorithm section.

Parameter Type Description ego_pass_first_margin double [s] time margin for ego pass first situation ego_pass_later_margin double [s] time margin for object pass first situation stop_object_velocity_threshold double [m/s] velocity threshold for the module to judge whether the objects is stopped min_object_velocity double [m/s] minimum object velocity (compare the estimated velocity by perception module with this parameter and adopt the larger one to calculate TTV.) max_yield_timeout double [s] if the pedestrian does not move for X seconds after stopping before the crosswalk, the module judge that ego is able to pass first. ego_yield_query_stop_duration double [s] the amount of time which ego should be stopping to query whether it yields or not."},{"location":"planning/behavior_velocity_planner/crosswalk-design/#parameters-for-input-data","title":"Parameters for input data","text":"Parameter Type Description tl_state_timeout double [s] timeout threshold for traffic light signal"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#parameters-for-target-area-object","title":"Parameters for target area & object","text":"

As a countermeasure against pedestrians attempting to cross outside the crosswalk area, this module watches not only the crosswalk zebra area but also the space in front of and behind the crosswalk. If there are pedestrians or bicycles attempting to pass through this watch area, the module judges whether the ego vehicle should pass or stop.

crosswalk attention range

This module mainly treats the following objects as target objects. There are also optional flags that enable the pass/stop decision for MOTORCYCLE and UNKNOWN objects.

Parameter Type Description crosswalk_attention_range double [m] the detection area is defined as -X meters before the crosswalk to +X meters behind the crosswalk target/unknown bool whether to look and stop by UNKNOWN objects target/bicycle bool whether to look and stop by BICYCLE objects target/motorcycle bool whether to look and stop MOTORCYCLE objects target/pedestrian bool whether to look and stop PEDESTRIAN objects"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/crosswalk-design/#stop-position","title":"Stop position","text":"

The stop position is determined by the existence of the stop line defined by the HDMap, the positional relationship between the stop line and the pedestrians and bicycles, and each parameter.

"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#pass-judge-logic","title":"Pass judge logic","text":"

At first, this module determines whether pedestrians or bicycles are likely to cross the crosswalk, based on the color of the pedestrian traffic light signal related to the crosswalk. Only when the pedestrian traffic signal is RED does this module judge that the objects will not cross the crosswalk and skip the pass judge logic.

Secondly, this module decides whether the ego vehicle should stop in front of the crosswalk or pass through, based on the relative relationship between TTC (Time-To-Collision) and TTV (Time-To-Vehicle). The TTC is the time it takes the ego vehicle to reach the virtual collision point, and the TTV is the time it takes the object to reach the virtual collision point.

virtual collision point

Depending on the relative relationship between TTC and TTV, the ego's behavior at crosswalks can be classified into three categories.

  1. TTC >> TTV: The objects have enough time to cross first before ego reaches the crosswalk. (Type-A)
  2. TTC \u2252 TTV: There is a risk of a near miss and collision between ego and objects at the virtual collision point. (Type-B)
  3. TTC << TTV: Ego has enough time to pass through the crosswalk before the objects reach the virtual collision point. (Type-C)

This module judges that ego is able to pass through the crosswalk without collision risk when the relative relationship between TTC and TTV is Type-A or Type-C. On the other hand, this module judges that ego needs to stop in front of the crosswalk to prevent a collision with objects in the Type-B condition. The time margins can be set by the parameters ego_pass_first_margin and ego_pass_later_margin. This logic is designed based on [1]; a schematic version is sketched below.
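
A schematic version of this classification. The margin handling here is a simplified reading of the description above, not the exact module logic:

```python
def crosswalk_pass_judge(ttc: float, ttv: float,
                         ego_pass_first_margin: float,
                         ego_pass_later_margin: float) -> str:
    # TTV is assumed to be computed with max(estimated velocity, min_object_velocity)
    if ttc + ego_pass_first_margin < ttv:
        return "GO"   # Type-C: ego passes well before the object arrives
    if ttv + ego_pass_later_margin < ttc:
        return "GO"   # Type-A: the object finishes crossing well before ego arrives
    return "STOP"     # Type-B: near-miss / collision risk at the virtual collision point
```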

time-to-collision vs time-to-vehicle

This module uses the larger value of estimated object velocity and min_object_velocity in calculating TTV in order to avoid division by zero.

"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#dead-lock-prevention","title":"Dead lock prevention","text":"

If there are objects stopped within a radius of min_object_velocity * ego_pass_later_margin meters from the virtual collision point, this module always judges that the ego vehicle should stop, based on the pass judge logic described above. In such a situation, even if the pedestrian has no intention of crossing, the ego vehicle keeps the stop decision on the spot. Therefore, this module has another logic for deadlock prevention: if the object continues to stand still for more than max_yield_timeout seconds after the ego vehicle stops in front of the crosswalk, the module judges that the object has no intention of crossing and switches from the STOP state to the PASS state. The parameter stop_object_velocity_threshold is used to judge whether the objects are stopped or not. In addition, if the object starts to move after the module judged that it had no intention of crossing, the module judges once again whether the ego vehicle should stop.

dead lock situation"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#safety-slow-down-behavior","title":"Safety Slow Down Behavior","text":"

In the current Autoware implementation, if there are no target objects around a crosswalk, the ego vehicle will not slow down for the crosswalk. However, if slowing the ego vehicle down to a certain speed in such cases is desired, it is possible by adding some tags to the related crosswalk definition, as instructed in the lanelet2_format_extension.md document.

"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#limitations","title":"Limitations","text":"

When multiple crosswalks are nearby (such as at an intersection), this module may make a stop decision even at crosswalks where the object has no intention of crossing.

design limits"},{"location":"planning/behavior_velocity_planner/crosswalk-design/#known-issues","title":"Known Issues","text":""},{"location":"planning/behavior_velocity_planner/crosswalk-design/#referencesexternal-links","title":"References/External links","text":"

[1] M. Sato, S. Hayasaka, M. Shimizu, and T. Murano: Modeling of Drivers' Risk-Avoidance Behavior for Crossing Pedestrians, Transactions of Society of Automotive Engineers of Japan, Vol. 44, No. 3, pp. 931-936, 2013 (in Japanese).

"},{"location":"planning/behavior_velocity_planner/detection-area-design/","title":"Detection area design","text":""},{"location":"planning/behavior_velocity_planner/detection-area-design/#detection-area","title":"Detection Area","text":""},{"location":"planning/behavior_velocity_planner/detection-area-design/#role","title":"Role","text":"

If a pointcloud is detected in a detection area defined on the map, stop planning is executed at the predetermined point.

"},{"location":"planning/behavior_velocity_planner/detection-area-design/#activation-timing","title":"Activation Timing","text":"

This module is activated when there is a detection area on the target lane.

"},{"location":"planning/behavior_velocity_planner/detection-area-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description use_dead_line bool [-] weather to use dead line or not use_pass_judge_line bool [-] weather to use pass judge line or not state_clear_time double [s] when the vehicle is stopping for certain time without incoming obstacle, move to STOPPED state stop_margin double [m] a margin that the vehicle tries to stop before stop_line dead_line_margin double [m] ignore threshold that vehicle behind is collide with ego vehicle or not hold_stop_margin_distance double [m] parameter for restart prevention (See Algorithm section)"},{"location":"planning/behavior_velocity_planner/detection-area-design/#inner-workings-algorithm","title":"Inner-workings / Algorithm","text":"
  1. Gets a detection area and stop line from the map information and confirms whether there is a pointcloud in the detection area
  2. Inserts a stop point l[m] in front of the stop line
  3. Inserts a pass judge point at a point where the vehicle can stop with the maximum deceleration
  4. Sets velocity to zero behind the stop line when the ego vehicle is in front of the pass judge point
  5. If the ego vehicle has already passed the pass judge point, it does not stop and passes through.
"},{"location":"planning/behavior_velocity_planner/detection-area-design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/detection-area-design/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g. 0.5 m) to stop once it starts moving, due to poor vehicle control performance, it will overrun a stopping position that should be strictly observed when it starts moving in order to approach a nearby stop point (e.g. 0.3 m away).

This module has the parameter hold_stop_margin_distance in order to prevent such redundant restarts. If the vehicle has stopped within hold_stop_margin_distance meters from the module's stop point (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped at its stop point and plans to keep the current stopping position, even if the vehicle stopped due to other factors. A minimal sketch of this judgment is shown below.
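
The variable names in this sketch mirror the description above:

```python
def should_hold_stop(front_to_stop_line: float, is_vehicle_stopped: bool,
                     hold_stop_margin_distance: float) -> bool:
    # keep the current stopped position if the vehicle already stands
    # within the hold margin of this module's stop point
    return is_vehicle_stopped and front_to_stop_line < hold_stop_margin_distance
```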

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_planner/intersection-design/","title":"Intersection design","text":""},{"location":"planning/behavior_velocity_planner/intersection-design/#intersection","title":"Intersection","text":""},{"location":"planning/behavior_velocity_planner/intersection-design/#role","title":"Role","text":"

Judges whether the vehicle can enter an intersection based on dynamic object information, and plans the velocity for slowing down or stopping. This module is designed for rule-based intersection velocity decisions whose behavior is easy for developers to design. It generates a proper velocity for intersection scenes.

In addition, external users / modules (e.g. remote operation) can intervene in the STOP/GO decision for the vehicle behavior. The override interface is expected to be used, for example, for remote intervention in emergency situations or for gathering information on operator decisions during development.

"},{"location":"planning/behavior_velocity_planner/intersection-design/#activation-timing","title":"Activation Timing","text":"

This function is activated when the attention lane conflicts with the ego vehicle's lane.

"},{"location":"planning/behavior_velocity_planner/intersection-design/#limitations","title":"Limitations","text":"

This module allows developers to design the vehicle velocity in the intersection module using specific rules. This module is affected by the accuracy of object detection and prediction, e.g. when judging whether an object is a stuck vehicle in the intersection.

"},{"location":"planning/behavior_velocity_planner/intersection-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/intersection-design/#how-to-select-attention-target-objects","title":"How To Select Attention Target Objects","text":"

Objects that satisfy all of the following conditions are considered as target objects (possible collision objects):

"},{"location":"planning/behavior_velocity_planner/intersection-design/#how-to-define-attention-lanes","title":"How to Define Attention Lanes","text":"

Target objects are limited by lanelets to prevent unexpected behavior. A lane that satisfies the following conditions is defined as an \"Attention Lane\" and used to define the target object.

See the following figures to know how to create an attention area and its rationale.

Note: the case traffic light, turn right only is not currently implemented.

"},{"location":"planning/behavior_velocity_planner/intersection-design/#collision-check-and-crossing-judgement","title":"Collision Check and Crossing Judgement","text":"

The following process is performed for the attention targets to determine whether the ego vehicle can cross the intersection safely. If it is judged that the ego vehicle cannot pass through the intersection with enough margin, the module inserts a stop velocity at the stop line of the intersection.

  1. calculate the passing time and the time that the ego vehicle is in the intersection. This time is set as t_s ~ t_e
  2. extract the predicted path of the target object whose confidence is greater than min_predicted_path_confidence.
  3. detect collision between the extracted predicted path and ego's predicted path in the following process.
    1. obtain the passing area of the ego vehicle A_{ego} in t_s ~ t_e.
    2. calculate the passing area of the target object A_{target} at t_s - collision_start_margin_time ~ t_e + collision_end_margin_time for each predicted path (*1).
    3. check if A_{ego} and A_{target} regions are overlapped (has collision).
  4. when a collision is detected, the module inserts a stop velocity in front of the intersection. Note that there is a time margin for the stop release (*2).
  5. If ego is over the pass_judge_line, collision checking is not processed to avoid sudden braking. However if ego velocity is lower than the threshold keep_detection_vel_thr then this module continues collision checking.

(*1) The parameters collision_start_margin_time and collision_end_margin_time can be interpreted as follows:

(*2) If a collision is detected, the state transitions to \"stop\" immediately. On the other hand, the collision judgment must remain clear for a certain period (default: 2.0 [s]) to transition from \"stop\" to \"go\", in order to prevent chattering of decisions.
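
Reduced to the time axis only, the margin-expanded comparison of step 3 can be sketched as follows. This is a simplification: the real check compares the passing areas of ego and the target, not just the time windows.

```python
def margin_expanded_windows_overlap(t_s: float, t_e: float,
                                    obj_enter: float, obj_exit: float,
                                    collision_start_margin_time: float,
                                    collision_end_margin_time: float) -> bool:
    lo = t_s - collision_start_margin_time
    hi = t_e + collision_end_margin_time
    # intervals [lo, hi] and [obj_enter, obj_exit] overlap
    return obj_enter <= hi and obj_exit >= lo
```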

"},{"location":"planning/behavior_velocity_planner/intersection-design/#stop-line-automatic-generation","title":"Stop Line Automatic Generation","text":"

The driving lane is interpolated at certain intervals (default: 20 [cm]), and the line at a margin distance (default: 100 cm) in front of the attention lane is defined as the stop line. (The length of the vehicle is also considered, and the stop point is set at the base_link position in front of the stop line.)

"},{"location":"planning/behavior_velocity_planner/intersection-design/#pass-judge-line","title":"Pass Judge Line","text":"

To avoid rapid braking, the ego vehicle does not stop if a deceleration greater than a threshold (default: 0.5 [G]) would be needed. To judge this condition, the pass judge line is set at a certain distance (default: 0.5 * v_current^2 / a_max) in front of the stop line, as sketched below. To prevent chattering, once the ego vehicle passes this line, the \u201cstop\u201d decision in the intersection is no longer made. To prevent going over the pass judge line before a traffic light stop line, the distance between the stop line and the pass judge line becomes 0 m when there is a stop line between the ego vehicle and the intersection stop line.
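
The distance itself is the braking distance at the maximum allowed deceleration (the default value here expresses 0.5 G in m/s^2):

```python
def pass_judge_line_distance(v_current: float, a_max: float = 0.5 * 9.81) -> float:
    # braking distance at the deceleration threshold: 0.5 * v^2 / a_max
    return 0.5 * v_current**2 / a_max
```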

"},{"location":"planning/behavior_velocity_planner/intersection-design/#stuck-vehicle-detection","title":"Stuck Vehicle Detection","text":"

If there is any object within a certain distance (default: 5 [m]) from the end point of the intersection lane on the driving lane and the object velocity is less than a threshold (default: 3.0 [km/h]), the object is regarded as a stuck vehicle. If a stuck vehicle exists, the ego vehicle cannot enter the intersection.

As a related case, if the object in front of the ego vehicle is turning in the same direction, this module predicts the stopping point that the object will reach at a certain deceleration (default: -1.0 [m/s^2]), as sketched below. If the predicted position is in the stuck vehicle detection area AND the position one vehicle length [m] behind the predicted position is also in the detection area, the ego vehicle will also stop.
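
The predicted stopping distance follows the usual constant-deceleration relation; this is a sketch, with the parameter corresponding to assumed_front_car_decel in the table below:

```python
def predicted_stop_distance(v_object: float, assumed_front_car_decel: float = 1.0) -> float:
    # distance needed for the front object to stop: v^2 / (2 * a)
    return v_object**2 / (2.0 * assumed_front_car_decel)
```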

"},{"location":"planning/behavior_velocity_planner/intersection-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description intersection/state_transit_margin_time double [m] time margin to change state intersection/path_expand_width bool [m] path area to see with expansion intersection/stop_line_margin double [m] margin before stop line intersection/stuck_vehicle_detect_dist double [m] this should be the length between cars when they are stopped. intersection/stuck_vehicle_ignore_dist double [m] obstacle stop max distance(5.0[m]) + stuck vehicle size / 2.0[m]) intersection/stuck_vehicle_vel_thr double [m/s] velocity below 3[km/h] is ignored by default intersection/intersection_velocity double [m/s] velocity to pass intersection. 10[km/h] is by default intersection/intersection_max_accel double [m/s^2] acceleration in intersection intersection/detection_area_margin double [m] range for expanding detection area intersection/detection_area_length double [m] range for lidar detection 200[m] is by default intersection/detection_area_angle_threshold double [rad] threshold of angle difference between the detection object and lane intersection/min_predicted_path_confidence double [-] minimum confidence value of predicted path to use for collision detection merge_from_private_road/stop_duration_sec double [s] duration to stop assumed_front_car_decel: 1.0 double [m/s^2] deceleration of front car used to check if it could stop in the stuck area at the exit keep_detection_vel_threshold double [m/s] the threshold for ego vehicle for keeping detection after passing pass_judge_line"},{"location":"planning/behavior_velocity_planner/intersection-design/#how-to-tune-parameters","title":"How To Tune Parameters","text":""},{"location":"planning/behavior_velocity_planner/intersection-design/#flowchart","title":"Flowchart","text":"

NOTE: the current state is treated as STOP if is_entry_prohibited = true, otherwise GO.

"},{"location":"planning/behavior_velocity_planner/intersection-design/#known-limits","title":"Known Limits","text":""},{"location":"planning/behavior_velocity_planner/intersection-design/#how-to-set-lanelet-map-fot-intersection","title":"How to Set Lanelet Map fot Intersection","text":""},{"location":"planning/behavior_velocity_planner/intersection-design/#set-a-turn_direction-tag-fig-1","title":"Set a turn_direction tag (Fig. 1)","text":"

The IntersectionModule is activated by this tag. If this tag is not set, the ego vehicle does not recognize the lane as an intersection. Even for a straight lane, this tag is mandatory if the lane is located within an intersection.

Set a value in turn_direction tag to light up turn signals.

Values of turn_direction must be one of \u201cstraight\u201d (no turn signal), \u201cright\u201d, or \u201cleft\u201d. Autoware will light up the respective turn signals 30 [m] before entering the specified lane. You may also set the optional tag \u201cturn_signal_distance\u201d to modify the distance at which the turn signals start lighting up.

Lanes within intersections must be defined as a single Lanelet. For example, the blue lane in Fig. 3 cannot be split into two Lanelets.

"},{"location":"planning/behavior_velocity_planner/intersection-design/#explicitly-describe-a-stop-position-roadmarking-optional-fig-2","title":"Explicitly describe a stop position [RoadMarking] (Optional) (Fig. 2)","text":"

By default, the IntersectionModule estimates a stop position from the crossing point of the driving lane and the attention lane. However, there are some cases, like Fig. 2, in which we would like to set a stop position explicitly. When a stop_line is defined as a RoadMarking item in the intersection lane, it overwrites the stop position. (Not only creating the stop_line, but also defining it as a RoadMarking item, is needed.)

"},{"location":"planning/behavior_velocity_planner/intersection-design/#exclusion-setting-of-attention-lanes-rightofway-fig3","title":"Exclusion setting of attention lanes [RightOfWay] (Fig.3)","text":"

By default, the IntersectionModule treats all lanes crossing the registered lane as attention targets (yellow and green lanelets). However, in some cases (e.g. when the driving lane is a priority lane, or the traffic light is green for the driving lane), we want to ignore some of the yield lanes. By setting RightOfWay in the RegulatoryElement item, we can define lanes to be ignored. Register ignored lanes as \u201cyield\u201d lanelets, and register the attention lanes and the driving lane as \u201cright_of_way\u201d lanelets in the RightOfWay RegulatoryElement. (For an intersection with traffic lights, items need to be created for each lane in the intersection. Please note that this requires a lot of man-hours.)

"},{"location":"planning/behavior_velocity_planner/merge-from-private-design/","title":"Merge from private design","text":""},{"location":"planning/behavior_velocity_planner/merge-from-private-design/#merge-from-private","title":"Merge From Private","text":""},{"location":"planning/behavior_velocity_planner/merge-from-private-design/#role","title":"Role","text":"

When the ego vehicle enters a public road from a private road (e.g. a parking lot), it needs to stop once before entering the public road to make sure it is safe.

This module is activated when there is an intersection at the private area from which the vehicle enters the public road. The stop line is generated both when the goal is within the intersection lane and when the path extends beyond the intersection lane. The basic behavior is the same as the intersection module, but the ego vehicle must stop once at the stop line.

"},{"location":"planning/behavior_velocity_planner/merge-from-private-design/#activation-timing","title":"Activation Timing","text":"

This module is activated when the following conditions are met:

"},{"location":"planning/behavior_velocity_planner/merge-from-private-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description merge_from_private_road/stop_duration_sec double [m] time margin to change state"},{"location":"planning/behavior_velocity_planner/merge-from-private-design/#known-issue","title":"Known Issue","text":"

If the ego vehicle goes over the stop line by a certain distance, it will not transition out of the STOP state.

"},{"location":"planning/behavior_velocity_planner/no-stopping-area-design/","title":"No stopping area design","text":""},{"location":"planning/behavior_velocity_planner/no-stopping-area-design/#no-stopping-area","title":"No Stopping Area","text":""},{"location":"planning/behavior_velocity_planner/no-stopping-area-design/#role","title":"Role","text":"

This module plans the velocity so that the vehicle avoids stopping inside a no_stopping_area.

"},{"location":"planning/behavior_velocity_planner/no-stopping-area-design/#limitation","title":"Limitation","text":"

This module allows developers to design the vehicle velocity in the no_stopping_area module using specific rules. Once the ego vehicle passes the pass-through point, it no longer inserts a stop velocity and does not change its decision from GO. Also, this module considers only dynamic objects in order to avoid unnecessary stops.

"},{"location":"planning/behavior_velocity_planner/no-stopping-area-design/#modelparameter","title":"ModelParameter","text":"Parameter Type Description state_clear_time double [s] time to clear stop state stuck_vehicle_vel_thr double [m/s] vehicles below this velocity are considered as stuck vehicle. stop_margin double [m] margin to stop line at no stopping area dead_line_margin double [m] if ego pass this position GO stop_line_margin double [m] margin to auto-gen stop line at no stopping area detection_area_length double [m] length of searching polygon stuck_vehicle_front_margin double [m] obstacle stop max distance"},{"location":"planning/behavior_velocity_planner/no-stopping-area-design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/","title":"Occlusion spot design","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#occlusion-spot","title":"Occlusion Spot","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#role","title":"Role","text":"

This module plans a safe velocity to slow down before reaching a collision point where a hidden object could dart out from an occlusion spot, i.e. a place the driver cannot see clearly because of obstacles.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#activation-timing","title":"Activation Timing","text":"

This module is activated if launch_occlusion_spot becomes true. Adding a map tag to define pedestrian-first zones is one of the TODOs.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#limitation-and-todos","title":"Limitation and TODOs","text":"

This module is a prototype implementation that handles occlusion spots. To reduce excessive deceleration caused by perception false positives, the detection method is selectable. This point has not been discussed in detail and needs to be improved.

TODOs are written in each Inner-workings / Algorithms (see the description below).

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#logics-working","title":"Logics Working","text":"

There are several types of occlusions, such as \"occlusions generated by parked vehicles\" and \"occlusions caused by obstructions\". In situations such as driving on a road with obstacles, where people frequently jump out, all possible occlusion spots must be taken into account. This module considers all occlusion spots calculated from the occupancy grid, but it is not reasonable to take every occlusion spot into account, for example, people jumping out from behind a guardrail or from behind a cruising vehicle. Therefore, the detection area is currently limited by using predicted object information.

Note that this decision logic is still under development and needs to be improved.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#detectionarea-polygon","title":"DetectionArea Polygon","text":"

This module considers the TTV (the time for a hidden pedestrian to reach the ego path), computed from the pedestrian velocity and the lateral distance to the occlusion spot. The TTC is calculated from the ego velocity and acceleration and the longitudinal distance to the collision point using the motion velocity smoother. To reduce computation, this module only considers occlusion spots whose TTV is less than the TTC, and only the area within \"max lateral distance\".
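As a rough illustration of this filtering, the sketch below compares the pedestrian's TTV with the ego TTC for one occlusion spot. The function name and the constant-acceleration TTC approximation are assumptions for illustration, not the module's actual API.

#include <algorithm>
#include <cmath>

// Hypothetical sketch: decide whether one occlusion spot must be considered.
bool isSpotConsidered(
  double lateral_dist_to_spot,  // [m] pedestrian's lateral distance to the ego path
  double pedestrian_vel,        // [m/s] assumed pedestrian velocity
  double longitudinal_dist,     // [m] ego distance to the collision point
  double ego_vel, double ego_acc, double max_lateral_distance)
{
  if (lateral_dist_to_spot > max_lateral_distance) return false;

  // Time for the hidden pedestrian to reach the ego path (TTV).
  const double ttv = lateral_dist_to_spot / pedestrian_vel;

  // Ego time to collision (TTC); here a constant-acceleration approximation,
  // whereas the module uses the motion velocity smoother.
  const double disc = ego_vel * ego_vel + 2.0 * ego_acc * longitudinal_dist;
  const double ttc = (std::abs(ego_acc) < 1e-3 || disc < 0.0)
    ? longitudinal_dist / std::max(ego_vel, 0.1)
    : (std::sqrt(disc) - ego_vel) / ego_acc;

  // Only spots that the pedestrian can reach before ego arrives matter.
  return ttv < ttc;
}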

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#occlusion-spot-occupancy-grid-base","title":"Occlusion Spot Occupancy Grid Base","text":"

This module considers any occlusion spot around the ego path computed from the occupancy grid. Due to the computational cost, the occupancy grid is not high resolution, which makes it noisy, so this module adds object information to the occupancy grid map.

TODO: consider the height of the obstacle point cloud when generating the occupancy grid.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#collision-free-judgement","title":"Collision Free Judgement","text":"

An obstacle that can run out from an occlusion spot must have free space from the occlusion up to the intersection point with the ego vehicle's path.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#partition-lanelet","title":"Partition Lanelet","text":"

By using the lanelet information tagged as \"guard_rail\", \"fence\", or \"wall\", it is possible to remove unwanted occlusion spots.

By using static object information, it is possible to make the occupancy grid more accurate.

Making an occupancy grid dedicated to planning is one of the TODOs.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#possible-collision","title":"Possible Collision","text":"

An obstacle that could run out from an occlusion spot may be interrupted by a moving vehicle.

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#about-safe-motion","title":"About safe motion","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#the-concept-of-safe-velocity-and-margin","title":"The Concept of Safe Velocity and Margin","text":"

The safe slowdown velocity is calculated from the parameters of the ego emergency braking system and the time to collision, shown below. The calculation below is included, but dynamically changing the velocity is not recommended for the planner.
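For intuition, here is a minimal sketch of one such calculation: it solves for the largest velocity from which an emergency braking system with deceleration a_ebs and response delay t_delay can stop within a distance d, i.e. v * t_delay + v^2 / (2 * a_ebs) = d. The function name and this exact formulation are assumptions for illustration.

#include <cmath>

// Hypothetical sketch: safe slowdown velocity for a given stopping distance d.
double safeSlowDownVelocity(double d, double a_ebs, double t_delay)
{
  // Solve (1 / (2 * a_ebs)) * v^2 + t_delay * v - d = 0 for v >= 0.
  const double a = 1.0 / (2.0 * a_ebs);
  const double b = t_delay;
  return (-b + std::sqrt(b * b + 4.0 * a * d)) / (2.0 * a);
}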

This module defines a safe margin that geometrically accounts for the ego stopping distance and the collision path point. While cruising from the safe margin to the collision path point, the ego vehicle keeps the same velocity as the occlusion spot safe velocity.

Note: this logic assumes high-precision vehicle speed tracking, and the margin for the deceleration point might not be the best solution; a manual driver override is assumed if a pedestrian actually runs out from the occlusion spot.

TODO: consider one of the best choices

  1. stop in front of the occlusion spot
  2. insert a 1 km/h velocity in front of the occlusion spot
  3. slow down in the way currently described
  4. etc.
"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#maximum-slowdown-velocity","title":"Maximum Slowdown Velocity","text":"

The maximum slowdown velocity is calculated from the ego vehicle's current velocity and acceleration, together with the maximum slowdown jerk and maximum slowdown acceleration below, in order not to slow down too much.
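A hedged sketch of this bound: ramp the acceleration from the current a0 toward max_slow_down_accel with max_slow_down_jerk, and integrate the velocity up to the time t at which the slowdown point is reached. The function name and formulation are assumptions, not the module's actual implementation.

#include <algorithm>

// Hypothetical sketch: lowest velocity reachable at time t when deceleration
// is ramped from a0 down to a_min (< 0) with jerk j (< 0); used as a lower
// bound so that the planner does not slow down too much.
double maxSlowDownVelocity(double v0, double a0, double a_min, double j, double t)
{
  const double t_ramp = (a_min - a0) / j;  // time until a_min is reached
  if (t <= t_ramp) {
    return std::max(0.0, v0 + a0 * t + 0.5 * j * t * t);
  }
  const double v_ramp = v0 + a0 * t_ramp + 0.5 * j * t_ramp * t_ramp;
  return std::max(0.0, v_ramp + a_min * (t - t_ramp));
}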

"},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description pedestrian_vel double [m/s] maximum velocity assumed pedestrian coming out from occlusion point. pedestrian_radius double [m] assumed pedestrian radius which fits in occlusion spot. Parameter Type Description use_object_info bool [-] whether to reflect object info to occupancy grid map or not. use_partition_lanelet bool [-] whether to use partition lanelet map data. Parameter /debug Type Description is_show_occlusion bool [-] whether to show occlusion point markers.\u3000 is_show_cv_window bool [-] whether to show open_cv debug window. is_show_processing_time bool [-] whether to show processing time. Parameter /threshold Type Description detection_area_length double [m] the length of path to consider occlusion spot stuck_vehicle_vel double [m/s] velocity below this value is assumed to stop lateral_distance double [m] maximum lateral distance to consider hidden collision Parameter /motion Type Description safety_ratio double [-] safety ratio for jerk and acceleration max_slow_down_jerk double [m/s^3] jerk for safe brake max_slow_down_accel double [m/s^2] deceleration for safe brake non_effective_jerk double [m/s^3] weak jerk for velocity planning. non_effective_acceleration double [m/s^2] weak deceleration for velocity planning. min_allowed_velocity double [m/s] minimum velocity allowed safe_margin double [m] maximum error to stop with emergency braking system. Parameter /detection_area Type Description min_occlusion_spot_size double [m] the length of path to consider occlusion spot slice_length double [m] the distance of divided detection area max_lateral_distance double [m] buffer around the ego path used to build the detection_area area. Parameter /grid Type Description free_space_max double [-] maximum value of a free space cell in the occupancy grid occupied_min double [-] buffer around the ego path used to build the detection_area area."},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#rough-overview-of-the-whole-process","title":"Rough overview of the whole process","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#detail-process-for-predicted-objectnot-updated","title":"Detail process for predicted object(not updated)","text":""},{"location":"planning/behavior_velocity_planner/occlusion-spot-design/#detail-process-for-occupancy-grid-base","title":"Detail process for Occupancy grid base","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/","title":"Run out design","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#run-out","title":"Run Out","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#role","title":"Role","text":"

run_out is the module that decelerates and stops for dynamic obstacles such as pedestrians and bicycles.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#activation-timing","title":"Activation Timing","text":"

This module is activated if launch_run_out becomes true.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#flow-chart","title":"Flow chart","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#preprocess-path","title":"Preprocess path","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#calculate-the-expected-target-velocity-for-ego-vehicle","title":"Calculate the expected target velocity for ego vehicle","text":"

Calculate the expected target velocity for the ego vehicle path in order to calculate the time to collision with obstacles more precisely. The expected target velocity is calculated with the motion velocity smoother module, using the current velocity, the current acceleration, and the velocity limits directed by the map and the external API.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#extend-the-path","title":"Extend the path","text":"

The path is extended by the length of base link to front to consider obstacles after the goal.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#trim-path-from-ego-position","title":"Trim path from ego position","text":"

The path is trimmed from the ego position to a certain distance to reduce calculation time. The trimmed distance is specified by the detection_distance parameter.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#preprocess-obstacles","title":"Preprocess obstacles","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#create-data-of-abstracted-dynamic-obstacle","title":"Create data of abstracted dynamic obstacle","text":"

This module can handle multiple types of obstacles by creating an abstracted dynamic obstacle data layer. Currently there are 3 types of detection methods (Object, ObjectWithoutPath, Points) to create abstracted obstacle data.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#abstracted-dynamic-obstacle","title":"Abstracted dynamic obstacle","text":"

Abstracted obstacle data has the following information.

Name Type Description pose geometry_msgs::msg::Pose pose of the obstacle classifications std::vector<autoware_auto_perception_msgs::msg::ObjectClassification> classifications with probability shape autoware_auto_perception_msgs::msg::Shape shape of the obstacle predicted_paths std::vector<DynamicObstacle::PredictedPath> predicted paths with confidence. this data doesn't have time step because we use minimum and maximum velocity instead. min_velocity_mps float minimum velocity of the obstacle. specified by parameter of dynamic_obstacle.min_vel_kmph max_velocity_mps float maximum velocity of the obstacle. specified by parameter of dynamic_obstacle.max_vel_kmph
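Rendered as code, the abstracted data could look like the following sketch. The field names follow the table above; the exact layout of DynamicObstacle::PredictedPath is an assumption.

#include <vector>
#include <autoware_auto_perception_msgs/msg/object_classification.hpp>
#include <autoware_auto_perception_msgs/msg/shape.hpp>
#include <geometry_msgs/msg/pose.hpp>

// Sketch of the abstracted dynamic obstacle, mirroring the table above.
struct DynamicObstacle
{
  struct PredictedPath
  {
    std::vector<geometry_msgs::msg::Pose> path;  // no time step: min/max
    float confidence;                            // velocity are used instead
  };

  geometry_msgs::msg::Pose pose;
  std::vector<autoware_auto_perception_msgs::msg::ObjectClassification> classifications;
  autoware_auto_perception_msgs::msg::Shape shape;
  std::vector<PredictedPath> predicted_paths;
  float min_velocity_mps;  // from dynamic_obstacle.min_vel_kmph
  float max_velocity_mps;  // from dynamic_obstacle.max_vel_kmph
};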

Enter the maximum/minimum velocity of the object as a parameter, adding enough margin to the expected velocity. This parameter is used to create polygons for collision detection.

Future work: Determine the maximum/minimum velocity from the estimated velocity with covariance of the object

"},{"location":"planning/behavior_velocity_planner/run-out-design/#3-types-of-detection-method","title":"3 types of detection method","text":"

We have 3 types of detection methods to meet different safety and availability requirements. Their characteristics are shown in the table below. The Object method has high availability (fewer false positives) because it detects only objects whose predicted path is on the lane. However, it is sometimes unsafe because perception may fail to detect obstacles or may generate incorrect predicted paths. On the other hand, the Points method has high safety (fewer false negatives) because it uses a pointcloud as input. Since points don't have a predicted path, a path moving in the direction normal to the ego vehicle's path is assumed as the predicted path of the abstracted dynamic obstacle data. However, without proper adjustment of the points filter, it may detect many points, resulting in very low availability. The ObjectWithoutPath method has characteristics intermediate between Object and Points.

Method Description Object use an object with the predicted path for collision detection. ObjectWithoutPath use an object but not its predicted path for collision detection. the path is replaced assuming that the object jumps out into the lane at a specified velocity. Points use filtered points for collision detection. the path is created assuming that points jump out into the lane. points are regarded as small circular-shaped obstacles.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#exclude-obstacles-outside-of-partition","title":"Exclude obstacles outside of partition","text":"

This module can exclude obstacles outside of partitions such as guardrails, fences, and walls. A lanelet map containing partition information is required to use this feature. It reduces unnecessary deceleration caused by obstacles that are unlikely to jump out into the lane. You can choose whether to use this feature with the use_partition_lanelet parameter.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#collision-detection","title":"Collision detection","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#detect-collision-with-dynamic-obstacles","title":"Detect collision with dynamic obstacles","text":"

Along the ego vehicle path, the points where collision detection is performed are determined at intervals of detection_span.

The travel time to each of these points is calculated from the expected target velocity.

For each of these points, collision detection is performed using the footprint polygon of the ego vehicle and the polygon of the predicted location of the obstacle. The predicted location of the obstacle is described as a rectangle or polygon whose range is calculated from the minimum velocity, the maximum velocity, and the ego vehicle's travel time to the point. If the input type of the dynamic obstacle is Points, the obstacle shape is defined as a small cylinder.
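The interval along the predicted path that such a polygon must cover at ego travel time t is roughly [min_velocity * t, max_velocity * t]; a hypothetical sketch:

#include <algorithm>
#include <utility>

// Hypothetical sketch: longitudinal interval [m] along the obstacle's
// predicted path that the obstacle may occupy when the ego vehicle reaches
// a given point after travel_time seconds. The collision polygon is built
// to cover this interval plus the obstacle shape.
std::pair<double, double> predictedRange(
  double min_velocity_mps, double max_velocity_mps, double travel_time)
{
  const double near_edge = std::max(0.0, min_velocity_mps * travel_time);
  const double far_edge = std::max(near_edge, max_velocity_mps * travel_time);
  return {near_edge, far_edge};
}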

Multiple points are detected as collision points because collision detection is calculated between two polygons, so we select the point that is on the same side as the obstacle and closest to the ego vehicle as the collision point.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#insert-velocity","title":"Insert velocity","text":""},{"location":"planning/behavior_velocity_planner/run-out-design/#insert-velocity-to-decelerate-for-obstacles","title":"Insert velocity to decelerate for obstacles","text":"

If a collision is detected, a stop point is inserted at the distance of base link to front + stop margin before the selected collision point. The base link to front means the distance between the base_link (center of the rear-wheel axle) and the front of the car. The stop margin is determined by the stop_margin parameter.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#insert-velocity-to-approach-the-obstacles","title":"Insert velocity to approach the obstacles","text":"

If you select the Points or ObjectWithoutPath method, the ego vehicle sometimes keeps stopping in front of the obstacle. To avoid this problem, this feature has an option to approach the obstacle at slow velocity after stopping. If the approaching.enable parameter is set to true, the ego vehicle will approach the obstacle after it has stopped for state.stop_time_thresh seconds. The maximum approaching velocity can be specified by the approaching.limit_vel_kmph parameter. The decision to approach the obstacle is determined by a simple state transition as shown in the following image.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#limit-velocity-with-specified-jerk-and-acc-limit","title":"Limit velocity with specified jerk and acc limit","text":"

The maximum slowdown velocity is calculated in order not to slow down too much. See the Occlusion Spot document for more details. You can choose whether to use this feature with the slow_down_limit.enable parameter.

"},{"location":"planning/behavior_velocity_planner/run-out-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description detection_method string [-] candidate: Object, ObjectWithoutPath, Points use_partition_lanelet bool [-] whether to use partition lanelet map data specify_decel_jerk bool [-] whether to specify jerk when ego decelerates stop_margin double [m] the vehicle decelerates to be able to stop with this margin passing_margin double [m] the vehicle begins to accelerate if the vehicle's front in predicted position is ahead of the obstacle + this margin deceleration_jerk double [m/s^3] ego decelerates with this jerk when stopping for obstacles detection_distance double [m] ahead distance from ego to detect the obstacles detection_span double [m] calculate collision with this span to reduce calculation time min_vel_ego_kmph double [km/h] min velocity to calculate time to collision Parameter /detection_area Type Description margin_ahead double [m] ahead margin for detection area polygon margin_behind double [m] behind margin for detection area polygon Parameter /dynamic_obstacle Type Description min_vel_kmph double [km/h] minimum velocity for dynamic obstacles max_vel_kmph double [km/h] maximum velocity for dynamic obstacles diameter double [m] diameter of obstacles. used for creating dynamic obstacles from points height double [m] height of obstacles. used for creating dynamic obstacles from points max_prediction_time double [sec] create predicted path until this time time_step double [sec] time step for each path step. used for creating dynamic obstacles from points or objects without path points_interval double [m] divide obstacle points into groups with this interval, and detect only lateral nearest point. used only for Points method Parameter /approaching Type Description enable bool [-] whether to enable approaching after stopping margin double [m] distance on how close ego approaches the obstacle limit_vel_kmph double [km/h] limit velocity for approaching after stopping Parameter /state Type Description stop_thresh double [m/s] threshold to decide if ego is stopping stop_time_thresh double [sec] threshold for stopping time to transit to approaching state disable_approach_dist double [m] end the approaching state if distance to the obstacle is longer than this value keep_approach_duration double [sec] keep approach state for this duration to avoid chattering of state transition Parameter /slow_down_limit Type Description enable bool [-] whether to enable to limit velocity with max jerk and acc max_jerk double [m/s^3] minimum jerk deceleration for safe brake. max_acc double [m/s^2] minimum accel deceleration for safe brake."},{"location":"planning/behavior_velocity_planner/run-out-design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/behavior_velocity_planner/speed-bump-design/","title":"Speed bump design","text":""},{"location":"planning/behavior_velocity_planner/speed-bump-design/#speed-bump","title":"Speed Bump","text":""},{"location":"planning/behavior_velocity_planner/speed-bump-design/#role","title":"Role","text":"

This module plans the velocity of the related part of the path in case there is a speed bump regulatory element referring to it.

"},{"location":"planning/behavior_velocity_planner/speed-bump-design/#activation-timing","title":"Activation Timing","text":"

The manager launches the speed bump scene module when there is a speed bump regulatory element referring to the reference path.

"},{"location":"planning/behavior_velocity_planner/speed-bump-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description slow_start_margin double [m] margin for ego vehicle to slow down before speed_bump slow_end_margin double [m] margin for ego vehicle to accelerate after speed_bump print_debug_info bool whether debug info will be printed or not"},{"location":"planning/behavior_velocity_planner/speed-bump-design/#speed-calculation","title":"Speed Calculation","text":" Parameter Type Description min_height double [m] minimum height assumption of the speed bump max_height double [m] maximum height assumption of the speed bump min_speed double [m/s] minimum speed assumption of slow down speed max_speed double [m/s] maximum speed assumption of slow down speed"},{"location":"planning/behavior_velocity_planner/speed-bump-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Note: if the slow_down_speed tag is used in the speed bump annotation, the speed calculated from the speed bump height is ignored. In that case, the specified slow_down_speed value in [kph] is used.
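A minimal sketch of the height-based speed calculation, assuming linear interpolation between (min_height, max_speed) and (max_height, min_speed); the function name and the interpolation choice are assumptions for illustration.

#include <algorithm>

// Hypothetical sketch: map a speed bump height to a slow-down speed by
// linear interpolation; a higher bump yields a lower speed.
double calcSlowDownSpeed(
  double bump_height, double min_height, double max_height,
  double min_speed, double max_speed)
{
  const double h = std::clamp(bump_height, min_height, max_height);
  const double ratio = (h - min_height) / (max_height - min_height);
  return max_speed + ratio * (min_speed - max_speed);
}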

"},{"location":"planning/behavior_velocity_planner/speed-bump-design/#future-work","title":"Future Work","text":""},{"location":"planning/behavior_velocity_planner/stop-line-design/","title":"Stop line design","text":""},{"location":"planning/behavior_velocity_planner/stop-line-design/#stop-line","title":"Stop Line","text":""},{"location":"planning/behavior_velocity_planner/stop-line-design/#role","title":"Role","text":"

This module plans the velocity so that the vehicle can stop right before stop lines and restart driving after it has stopped.

"},{"location":"planning/behavior_velocity_planner/stop-line-design/#activation-timing","title":"Activation Timing","text":"

This module is activated when there is a stop line in a target lane.

"},{"location":"planning/behavior_velocity_planner/stop-line-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_margin double a margin that the vehicle tries to stop before stop_line stop_check_dist double when the vehicle is within stop_check_dist from stop_line and stopped, move to STOPPED state hold_stop_margin_distance double [m] parameter for restart prevention (See Algorithm section)"},{"location":"planning/behavior_velocity_planner/stop-line-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_planner/stop-line-design/#flowchart","title":"Flowchart","text":"

This algorithm is based on segments. A segment consists of two node points. This is useful for removing boundary conditions, because if segment(i) exists, we can assume that node(i) and node(i+1) exist.

First, this algorithm finds a collision between the reference path and the stop line. From this, we get the collision segment and the collision point.

Next, based on the collision point, it finds the offset segment by iterating backward over points up to a specific offset length. The offset length is stop_margin (parameter) + base_link to front (to align the vehicle head with the stop line). From this, we get the offset segment and the offset from the segment start.

After that, we can calculate an offset point from the offset segment and the offset. This will be the stop_pose.
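A sketch of this backward walk over segments, with types simplified to 2D points (all names assumed for illustration):

#include <cmath>
#include <vector>

struct Point2d { double x, y; };

double dist(const Point2d & a, const Point2d & b)
{
  return std::hypot(b.x - a.x, b.y - a.y);
}

// Hypothetical sketch: starting from the collision point on segment
// collision_idx, walk backward along the path until offset_length
// (= stop_margin + base_link to front) is consumed, then interpolate the
// stop point on the segment where the offset runs out.
Point2d findStopPoint(
  const std::vector<Point2d> & path, std::size_t collision_idx,
  const Point2d & collision_point, double offset_length)
{
  double remain = offset_length;
  Point2d end = collision_point;
  for (std::size_t i = collision_idx + 1; i-- > 0;) {
    const double len = dist(path[i], end);
    if (len > 1e-9 && remain <= len) {
      const double ratio = (len - remain) / len;  // from node(i) toward end
      return {path[i].x + ratio * (end.x - path[i].x),
              path[i].y + ratio * (end.y - path[i].y)};
    }
    remain -= len;
    end = path[i];
  }
  return path.front();  // path shorter than the offset: stop at its start
}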

"},{"location":"planning/behavior_velocity_planner/stop-line-design/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g. 0.5 meters) to stop once it starts moving, due to poor vehicle control performance, it will go over the stopping position that should be strictly observed when it restarts in order to approach a near stop point (e.g. 0.3 meters away).

This module has the parameter hold_stop_margin_distance in order to prevent these redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters from the stop point of the module (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped for the module's stop point and plans to keep stopping at the current position, even if the vehicle stopped due to other factors.
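In code terms the decision is a one-liner; the names below follow the parameter table and are assumptions, not the actual implementation:

// Hypothetical sketch of restart prevention: hold the current stop position
// when the vehicle has already stopped close enough to the module's stop point.
bool shouldHoldCurrentPosition(
  bool vehicle_is_stopped,
  double front_to_stop_line,         // [m] distance from the vehicle front
  double hold_stop_margin_distance)  // [m] parameter
{
  return vehicle_is_stopped && front_to_stop_line < hold_stop_margin_distance;
}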

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_planner/traffic-light-design/","title":"Traffic light design","text":""},{"location":"planning/behavior_velocity_planner/traffic-light-design/#traffic-light","title":"Traffic Light","text":""},{"location":"planning/behavior_velocity_planner/traffic-light-design/#role","title":"Role","text":"

This module judges whether the vehicle can enter the intersection based on internal and external traffic light status, and plans a stop velocity if necessary. It is designed for rule-based velocity decision making whose behavior is easy for developers to design. It generates a proper velocity for the traffic light scene.

In addition, the STOP/GO interface of behavior_velocity_planner allows external users / modules (e.g. remote operation) to intervene in the decision of the internal perception. This function is expected to be used, for example, for remote intervention in case of detection failure, or for gathering information on operator decisions during development.

"},{"location":"planning/behavior_velocity_planner/traffic-light-design/#limitations","title":"Limitations","text":"

This module allows developers to design STOP/GO in the traffic light module using specific rules. Due to the nature of rule-based planning, the algorithm greatly depends on the accuracy of object detection and perception for traffic lights. Also, this module only handles STOP/GO at the traffic light scene, so rushing or making quick decisions according to traffic conditions is future work.

"},{"location":"planning/behavior_velocity_planner/traffic-light-design/#activation-timing","title":"Activation Timing","text":"

This module is activated when there is a traffic light in the ego lane.

"},{"location":"planning/behavior_velocity_planner/traffic-light-design/#algorithm","title":"Algorithm","text":"
  1. Obtains a traffic light mapped to the route and a stop line corresponding to the traffic light from the map information.

  2. Uses the traffic light recognition result with the highest reliability, and generates a stop point if its color is red.

  3. When the vehicle's current velocity is

    • higher than 2.0 m/s \u21d2 pass judge (using the braking-distance formula; see the sketch after this list)
    • lower than 2.0 m/s \u21d2 stop
  4. When it is judged that the vehicle can\u2019t stop before the stop line, Autoware chooses one of the following behaviors

    • \"can pass through\" the stop line during the yellow lamp => pass
    • \"can\u2019t pass through\" the stop line during the yellow lamp => emergency stop
"},{"location":"planning/behavior_velocity_planner/traffic-light-design/#dilemma-zone","title":"Dilemma Zone","text":" "},{"location":"planning/behavior_velocity_planner/traffic-light-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description stop_margin double [m] margin before stop point tl_state_timeout double [s] time out for detected traffic light result. external_tl_state_timeout double [s] time out for external traffic input yellow_lamp_period double [s] time for yellow lamp enable_pass_judge bool [-] whether to use pass judge"},{"location":"planning/behavior_velocity_planner/traffic-light-design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/traffic-light-design/#known-limits","title":"Known Limits","text":""},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/","title":"Virtual traffic light design","text":""},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#virtual-traffic-light","title":"Virtual Traffic Light","text":""},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#role","title":"Role","text":"

Autonomous vehicles have to cooperate with infrastructure such as:

The following items are example cases:

  1. Traffic control by traffic lights with V2X support

  2. Intersection coordination of multiple vehicles by FMS.

It is possible to implement each function individually; however, the use cases can be generalized with the following three elements.

  1. start: Start a cooperation procedure after the vehicle enters a certain zone.
  2. stop: Stop at a defined stop line according to the status received from infrastructures.
  3. end: Finalize the cooperation procedure after the vehicle reaches the exit zone. This should be done within the range of stable communication.

This module sends and receives status to and from infrastructures and plans the velocity based on the cooperation result.

"},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#system-configuration-diagram","title":"System Configuration Diagram","text":"

The planner and each infrastructure communicate with each other using common abstracted messages.

FMS: Intersection coordination when multiple vehicles are in operation and the relevant lane is occupied

Support different communication methods for different infrastructures

Have different meta-information for each geographic location

FMS: Fleet Management System

"},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#module-parameters","title":"Module Parameters","text":"Parameter Type Description max_delay_sec double [s] maximum allowed delay for command near_line_distance double [m] threshold distance to stop line to check ego stop. dead_line_margin double [m] threshold distance that this module continue to insert stop line. hold_stop_margin_distance double [m] parameter for restart prevention (See following section) check_timeout_after_stop_line bool [-] check timeout to stop when linkage is disconnected"},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g. 0.5 meters) to stop once it starts moving, due to poor vehicle control performance, it will go over the stopping position that should be strictly observed when it restarts in order to approach a near stop point (e.g. 0.3 meters away).

This module has the parameter hold_stop_margin_distance in order to prevent these redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters from the stop point of the module (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped for the module's stop point and plans to keep stopping at the current position, even if the vehicle stopped due to other factors.

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#map-format","title":"Map Format","text":" \\begin{align} l_{\\mathrm{min}} = -\\frac{v_0^2}{2 a_{\\mathrm{min}}} \\end{align}"},{"location":"planning/behavior_velocity_planner/virtual-traffic-light-design/#known-limits","title":"Known Limits","text":""},{"location":"planning/costmap_generator/","title":"costmap_generator","text":""},{"location":"planning/costmap_generator/#costmap_generator_node","title":"costmap_generator_node","text":"

This node reads PointCloud and/or DynamicObjectArray and creates an OccupancyGrid and GridMap. VectorMap (Lanelet2) is optional.

"},{"location":"planning/costmap_generator/#input-topics","title":"Input topics","text":"Name Type Description ~input/objects autoware_auto_perception_msgs::PredictedObjects predicted objects, for obstacles areas ~input/points_no_ground sensor_msgs::PointCloud2 ground-removed points, for obstacle areas which can't be detected as objects ~input/vector_map autoware_auto_mapping_msgs::HADMapBin vector map, for drivable areas ~input/scenario tier4_planning_msgs::Scenario scenarios to be activated, for node activation"},{"location":"planning/costmap_generator/#output-topics","title":"Output topics","text":"Name Type Description ~output/grid_map grid_map_msgs::GridMap costmap as GridMap, values are from 0.0 to 1.0 ~output/occupancy_grid nav_msgs::OccupancyGrid costmap as OccupancyGrid, values are from 0 to 100"},{"location":"planning/costmap_generator/#output-tfs","title":"Output TFs","text":"

None

"},{"location":"planning/costmap_generator/#how-to-launch","title":"How to launch","text":"
  1. Write your remapping info in costmap_generator.launch or add args when executing roslaunch
  2. Run roslaunch costmap_generator costmap_generator.launch
"},{"location":"planning/costmap_generator/#parameters","title":"Parameters","text":"Name Type Description update_rate double timer's update rate activate_by_scenario bool if true, activate by scenario = parking. Otherwise, activate if vehicle is inside parking lot. use_objects bool whether using ~input/objects or not use_points bool whether using ~input/points_no_ground or not use_wayarea bool whether using wayarea from ~input/vector_map or not use_parkinglot bool whether using parkinglot from ~input/vector_map or not costmap_frame string created costmap's coordinate vehicle_frame string vehicle's coordinate map_frame string map's coordinate grid_min_value double minimum cost for gridmap grid_max_value double maximum cost for gridmap grid_resolution double resolution for gridmap grid_length_x int size of gridmap for x direction grid_length_y int size of gridmap for y direction grid_position_x int offset from coordinate in x direction grid_position_y int offset from coordinate in y direction maximum_lidar_height_thres double maximum height threshold for pointcloud data minimum_lidar_height_thres double minimum height threshold for pointcloud data expand_rectangle_size double expand object's rectangle with this value size_of_expansion_kernel int kernel size for blurring effect on object's costmap"},{"location":"planning/costmap_generator/#flowchart","title":"Flowchart","text":""},{"location":"planning/external_velocity_limit_selector/","title":"External Velocity Limit Selector","text":""},{"location":"planning/external_velocity_limit_selector/#purpose","title":"Purpose","text":"

The external_velocity_limit_selector_node is a node that keeps the consistency of external velocity limits. This module subscribes to:

  1. velocity limit command sent by API,
  2. velocity limit command sent by Autoware internal modules.

VelocityLimit.msg contains not only the max velocity but also information about the acceleration/jerk constraints on deceleration. The external_velocity_limit_selector_node integrates the lowest velocity limit and the highest jerk constraint to calculate the hardest velocity limit, which protects all the deceleration points and max velocities sent by the API and Autoware internal modules.
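Conceptually, the integration picks the smallest max velocity together with the strongest deceleration/jerk constraints among the active limits. The sketch below uses a simplified stand-in for the message fields, which are assumptions for illustration.

#include <algorithm>
#include <vector>

// Simplified stand-in for tier4_planning_msgs::VelocityLimit (assumed fields).
struct Limit
{
  double max_velocity;  // [m/s]
  double min_acc;       // [m/s^2] negative: allowed deceleration
  double min_jerk;      // [m/s^3] negative
};

// Sketch: combine all active limits into the "hardest" one, so that every
// requested deceleration point and max velocity is protected.
Limit selectHardestLimit(const std::vector<Limit> & limits)
{
  Limit hardest = limits.front();  // assumes at least one active limit
  for (const auto & l : limits) {
    hardest.max_velocity = std::min(hardest.max_velocity, l.max_velocity);
    hardest.min_acc = std::min(hardest.min_acc, l.min_acc);
    hardest.min_jerk = std::min(hardest.min_jerk, l.min_jerk);
  }
  return hardest;
}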

"},{"location":"planning/external_velocity_limit_selector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

WIP

"},{"location":"planning/external_velocity_limit_selector/#inputs","title":"Inputs","text":"Name Type Description ~input/velocity_limit_from_api tier4_planning_msgs::VelocityLimit velocity limit from api ~input/velocity_limit_from_internal tier4_planning_msgs::VelocityLimit velocity limit from autoware internal modules ~input/velocity_limit_clear_command_from_internal tier4_planning_msgs::VelocityLimitClearCommand velocity limit clear command"},{"location":"planning/external_velocity_limit_selector/#outputs","title":"Outputs","text":"Name Type Description ~output/max_velocity tier4_planning_msgs::VelocityLimit current information of the hardest velocity limit"},{"location":"planning/external_velocity_limit_selector/#parameters","title":"Parameters","text":"Parameter Type Description max_velocity double default max velocity [m/s] normal.min_acc double minimum acceleration [m/ss] normal.max_acc double maximum acceleration [m/ss] normal.min_jerk double minimum jerk [m/sss] normal.max_jerk double maximum jerk [m/sss] limit.min_acc double minimum acceleration to be observed [m/ss] limit.max_acc double maximum acceleration to be observed [m/ss] limit.min_jerk double minimum jerk to be observed [m/sss] limit.max_jerk double maximum jerk to be observed [m/sss]"},{"location":"planning/external_velocity_limit_selector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/external_velocity_limit_selector/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/external_velocity_limit_selector/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/external_velocity_limit_selector/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"planning/external_velocity_limit_selector/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/freespace_planner/","title":"The freespace_planner","text":""},{"location":"planning/freespace_planner/#freespace_planner_node","title":"freespace_planner_node","text":"

freespace_planner_node is a global path planner node that plans a trajectory in a space containing static/dynamic obstacles. This node is currently based on the Hybrid A* search algorithm in the freespace_planning_algorithms package. Other algorithms such as RRT* will also be added and made selectable in the future.

Note Due to the constraints of trajectory following, the output trajectory is split so that it includes only single-direction paths. In other words, the output trajectory doesn't include both forward and backward trajectories at once.

"},{"location":"planning/freespace_planner/#input-topics","title":"Input topics","text":"Name Type Description ~input/route autoware_auto_planning_msgs::Route route and goal pose ~input/occupancy_grid nav_msgs::OccupancyGrid costmap, for drivable areas ~input/odometry nav_msgs::Odometry vehicle velocity, for checking whether vehicle is stopped ~input/scenario tier4_planning_msgs::Scenario scenarios to be activated, for node activation"},{"location":"planning/freespace_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/trajectory autoware_auto_planning_msgs::Trajectory trajectory to be followed is_completed bool (implemented as rosparam) whether all split trajectory are published"},{"location":"planning/freespace_planner/#output-tfs","title":"Output TFs","text":"

None

"},{"location":"planning/freespace_planner/#how-to-launch","title":"How to launch","text":"
  1. Write your remapping info in freespace_planner.launch or add args when executing roslaunch
  2. roslaunch freespace_planner freespace_planner.launch
"},{"location":"planning/freespace_planner/#parameters","title":"Parameters","text":""},{"location":"planning/freespace_planner/#node-parameters","title":"Node parameters","text":"Parameter Type Description planning_algorithms string algorithms used in the node vehicle_shape_margin_m float collision margin in planning algorithm update_rate double timer's update rate waypoints_velocity double velocity in output trajectory (currently, only constant velocity is supported) th_arrived_distance_m double threshold distance to check if vehicle has arrived at the trajectory's endpoint th_stopped_time_sec double threshold time to check if vehicle is stopped th_stopped_velocity_mps double threshold velocity to check if vehicle is stopped th_course_out_distance_m double threshold distance to check if vehicle is out of course vehicle_shape_margin_m double vehicle margin replan_when_obstacle_found bool whether replanning when obstacle has found on the trajectory replan_when_course_out bool whether replanning when vehicle is out of course"},{"location":"planning/freespace_planner/#planner-common-parameters","title":"Planner common parameters","text":"Parameter Type Description time_limit double time limit of planning minimum_turning_radius double minimum turning radius of robot maximum_turning_radius double maximum turning radius of robot theta_size double the number of angle's discretization lateral_goal_range double goal range of lateral position longitudinal_goal_range double goal range of longitudinal position angle_goal_range double goal range of angle curve_weight double additional cost factor for curve actions reverse_weight double additional cost factor for reverse actions obstacle_threshold double threshold for regarding a certain grid as obstacle"},{"location":"planning/freespace_planner/#a-search-parameters","title":"A* search parameters","text":"Parameter Type Description only_behind_solutions bool whether restricting the solutions to be behind the goal use_back bool whether using backward trajectory distance_heuristic_weight double heuristic weight for estimating node's cost"},{"location":"planning/freespace_planner/#rrt-search-parameters","title":"RRT* search parameters","text":"Parameter Type Description max planning time double maximum planning time [msec] (used only when enable_update is set true) enable_update bool whether update after feasible solution found until max_planning time elapse use_informed_sampling bool Use informed RRT* (of Gammell et al.) neighbor_radius double neighbor radius of RRT* algorithm margin double safety margin ensured in path's collision checking in RRT* algorithm"},{"location":"planning/freespace_planner/#flowchart","title":"Flowchart","text":""},{"location":"planning/freespace_planning_algorithms/","title":"freespace planning algorithms","text":""},{"location":"planning/freespace_planning_algorithms/#role","title":"Role","text":"

This package is for development of path planning algorithms in free space.

"},{"location":"planning/freespace_planning_algorithms/#implemented-algorithms","title":"Implemented algorithms","text":"

Please see rrtstar.md for a note on the implementation of informed-RRT*.

NOTE: As for RRT*, one can choose whether to keep updating after a feasible solution is found. If not, the algorithm is almost the same as vanilla RRT (not exactly, because of the rewiring procedure). If updating is chosen, there is a further option to make the sampling after a feasible solution is found \"informed\". If set to true, the algorithm is equivalent to the informed RRT* of Gammell et al. 2014.

"},{"location":"planning/freespace_planning_algorithms/#algorithm-selection","title":"Algorithm selection","text":"

There is a trade-off between algorithm speed and resulting solution quality. Sorting the algorithms along the spectrum of (high-quality solution / slow) -> (low-quality solution / fast) gives A* -> informed RRT* -> RRT. Note that in almost all cases informed RRT* is better than RRT* in solution quality given the same computational time budget, so RRT* is omitted from the comparison.

Some selection criteria would be:

"},{"location":"planning/freespace_planning_algorithms/#guide-to-implement-a-new-algorithm","title":"Guide to implement a new algorithm","text":""},{"location":"planning/freespace_planning_algorithms/#running-the-standalone-tests-and-visualization","title":"Running the standalone tests and visualization","text":"

Build the package with ros-test and run the tests:

colcon build --packages-select freespace_planning_algorithms\ncolcon test --packages-select freespace_planning_algorithms\n

Inside the test, simulation results are stored in /tmp/fpalgos-{algorithm_type}-case{scenario_number} as rosbag files. By loading these result files with test/debug_plot.py, one can create plots visualizing the path and obstacles, as shown in the figures below. The created figures are then saved in /tmp.

"},{"location":"planning/freespace_planning_algorithms/#a-single-curvature-case","title":"A* (single curvature case)","text":""},{"location":"planning/freespace_planning_algorithms/#informed-rrt-with-200-msec-time-budget","title":"informed RRT* with 200 msec time budget","text":""},{"location":"planning/freespace_planning_algorithms/#rrt-without-update-almost-same-as-rrt","title":"RRT* without update (almost same as RRT)","text":"

The black cells, green box, and red box indicate obstacles, the start configuration, and the goal configuration, respectively. The sequence of blue boxes indicates the solution path.

"},{"location":"planning/freespace_planning_algorithms/#license-notice","title":"License notice","text":"

Files src/reeds_shepp.cpp and include/astar_search/reeds_shepp.h are fetched from pyReedsShepp. Note that the implementation in pyReedsShepp is also heavily based on the code in ompl. Both pyReedsShepp and ompl are distributed under 3-clause BSD license.

"},{"location":"planning/freespace_planning_algorithms/rrtstar/","title":"Rrtstar","text":""},{"location":"planning/freespace_planning_algorithms/rrtstar/#note-on-implementation-of-informed-rrt","title":"Note on implementation of informed RRT*","text":""},{"location":"planning/freespace_planning_algorithms/rrtstar/#preliminary-knowledge-on-informed-rrt","title":"Preliminary knowledge on informed-RRT*","text":"

Let us define f(x) as the minimum cost of the path when the path is constrained to pass through x (so the path will be x_{\\mathrm{start}} \\to x \\to x_{\\mathrm{goal}}). Also, let us define c_{\\mathrm{best}} as the current minimum cost of the feasible paths. Let us define a set $X(f) = \\left\\{ x \\in X \\mid f(x) < c_{\\mathrm{best}} \\right\\}$. If we could sample a new point from X(f) instead of X as in vanilla RRT*, the chance that c_{\\mathrm{best}} is updated would increase, thus improving the convergence rate.

In most cases, f(x) is unknown, so it is natural to approximate the function f by a heuristic function \\hat{f}. A heuristic function is admissible if \\forall x \\in X, \\hat{f}(x) < f(x), which is a sufficient condition for convergence to the optimal path. A good heuristic function \\hat{f} has two properties: 1) it is an admissible tight lower bound of f, and 2) sampling from X(\\hat{f}) is easy.

According to Gammell et al. [1], a good heuristic function when the path is always straight is \\hat{f}(x) = ||x_{\\mathrm{start}} - x|| + ||x - x_{\\mathrm{goal}}||. If we don't assume any obstacle information, this heuristic is the tightest. Also, X(\\hat{f}) is a hyper-ellipsoid, and hence sampling from it can be done analytically.

"},{"location":"planning/freespace_planning_algorithms/rrtstar/#modification-to-fit-reeds-sheep-path-case","title":"Modification to fit reeds-sheep path case","text":"

In the vehicle case, the state is x = (x_{1}, x_{2}, \\theta). Unlike normal informed-RRT*, where we can connect the path by a straight line, here we connect the vehicle path by a Reeds-Shepp path. So, we need to modify the original algorithm a bit. To this end, one might first consider a heuristic function \\hat{f}_{\\mathrm{RS}}(x) = \\mathrm{RS}(x_{\\mathrm{start}}, x) + \\mathrm{RS}(x, x_{\\mathrm{goal}}) < f(x), where \\mathrm{RS} computes the Reeds-Shepp distance. Though it is good in the sense of tightness, sampling from X(\\hat{f}_{\\mathrm{RS}}) is really difficult. Therefore, we use \\hat{f}_{\\mathrm{euc}} = ||\\mathrm{pos}(x_{\\mathrm{start}}) - \\mathrm{pos}(x)|| + ||\\mathrm{pos}(x) - \\mathrm{pos}(x_{\\mathrm{goal}})||, which is admissible because \\forall x \\in X, \\hat{f}_{\\mathrm{euc}}(x) < \\hat{f}_{\\mathrm{RS}}(x) < f(x). Here, the \\mathrm{pos} function returns the position (x_{1}, x_{2}) of the vehicle.

Sampling from X(\\hat{f}_{\\mathrm{euc}}) is easy because X(\\hat{f}_{\\mathrm{euc}}) = \\mathrm{Ellipse} \\times (-\\pi, \\pi]. Here \\mathrm{Ellipse}'s focal points are x_{\\mathrm{start}} and x_{\\mathrm{goal}} and conjugate diameters is $\\sqrt{c^{2}{\\mathrm{best}} - ||\\mathrm{pos}(x}) - \\mathrm{pos}(x_{\\mathrm{goal}}))|| } $ (similar to normal informed-rrtstar's ellipsoid). Please notice that \\theta can be arbitrary because \\hat{f}_{\\mathrm{euc}} is independent of \\theta.

[1] Gammell et al., \"Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic.\" IROS (2014)

"},{"location":"planning/mission_planner/","title":"Mission Planner","text":""},{"location":"planning/mission_planner/#purpose","title":"Purpose","text":"

Mission Planner calculates a route that navigates from the current ego pose to the goal pose following the given check points. The route is made of a sequence of lanes on a static map. Dynamic objects (e.g. pedestrians and other vehicles) and dynamic map information (e.g. road construction which blocks some lanes) are not considered during route planning. Therefore, the output topic is published only when a goal pose or check points are given, and it is latched until a new goal pose or check points are given.

The core implementation does not depend on a map format. In current Autoware.universe, only Lanelet2 map format is supported.

"},{"location":"planning/mission_planner/#interfaces","title":"Interfaces","text":""},{"location":"planning/mission_planner/#parameters","title":"Parameters","text":"Name Type Description map_frame string The frame name for map arrival_check_angle_deg double Angle threshold for goal check arrival_check_distance double Distance threshold for goal check arrival_check_duration double Duration threshold for goal check goal_angle_threshold double Max goal pose angle for goal approve enable_correct_goal_pose bool Enabling correction of goal pose according to the closest lanelet orientation"},{"location":"planning/mission_planner/#services","title":"Services","text":"Name Type Description /planning/routing/clear_route autoware_adapi_v1_msgs::srv::ClearRoute route clear request /planning/routing/set_route_points autoware_adapi_v1_msgs::srv::SetRoutePoints route request with pose waypoints /planning/routing/set_route autoware_planning_msgs::srv::SetRoute route request with HAD map format"},{"location":"planning/mission_planner/#subscriptions","title":"Subscriptions","text":"Name Type Description input/vector_map autoware_auto_mapping_msgs/HADMapBin vector map of Lanelet2 input/modified_goal geometry_msgs/PoseStamped goal pose for arrival check"},{"location":"planning/mission_planner/#publications","title":"Publications","text":"Name Type Description /planning/routing/route_state autoware_adapi_v1_msgs::msg::RouteState route state /planning/routing/route autoware_planning_msgs/LaneletRoute route debug/route_marker visualization_msgs::msg::MarkerArray route marker for debug debug/goal_footprint visualization_msgs::msg::MarkerArray goal footprint for debug"},{"location":"planning/mission_planner/#route-section","title":"Route section","text":"

A route section, whose type is autoware_planning_msgs/LaneletSegment, is a \"slice\" of a road that bundles lane-changeable lanes. Note that the most atomic unit of the route is autoware_auto_mapping_msgs/LaneletPrimitive, which has the unique id of a lane in a vector map and its type. The route message therefore does not contain geometric information about the lane, since we did not want the planning module's message to depend on the map data structure.

The ROS message of a route section contains the following three elements for each route section.

"},{"location":"planning/mission_planner/#goal-validation","title":"Goal Validation","text":"

The mission planner has a control mechanism to validate the given goal pose before creating a route. If the angle between the goal lanelet orientation and the goal pose's yaw is greater than the goal_angle_threshold parameter, the goal is rejected. Another mechanism creates a footprint of the goal pose according to the vehicle dimensions and checks whether this footprint is within the lanelets. If the goal footprint exceeds the lanelets, the goal is rejected.

The image below shows sample goal pose validation cases.

"},{"location":"planning/mission_planner/#implementation","title":"Implementation","text":""},{"location":"planning/mission_planner/#mission-planner_1","title":"Mission Planner","text":"

Two callbacks (goal and check points) are the triggers for route planning. The routing graph, which plans the route in Lanelet2, must be created before those callbacks, and it is created in the vector map callback.

plan route is explained in detail in the following section.

"},{"location":"planning/mission_planner/#route-planner","title":"Route Planner","text":"

plan route is executed with check points including the current ego pose and the goal pose.

plan path between each check points first calculates the closest lanes to the start and goal poses. Then the Lanelet2 routing graph plans the shortest path between the start and goal poses.

initialize route lanelets initializes the route handler and calculates route_lanelets. route_lanelets, all of which will be registered in the route sections, are the lanelets next to the lanelets in the planned path, and they are used when planning lane changes. To calculate route_lanelets (see the sketch after the list),

  1. All the neighbor (right and left) lanes of the planned path that are lane-changeable are memorized as route_lanelets.
  2. All the neighbor (right and left) lanes of the planned path that are not lane-changeable are memorized as candidate_lanelets.
  3. If both the following and previous lanelets of a candidate_lanelet are route_lanelets, the candidate_lanelet is registered as a route_lanelet.
    • This is because, even though the candidate_lanelet (an adjacent lane) is not lane-changeable, the vehicle can pass through it without a lane change as long as its following and previous lanelets are route_lanelets.
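
The three steps above can be sketched as follows, with lanelets reduced to integer ids and the routing graph to plain lookup tables (illustrative only, not the Lanelet2 API):

#include <map>\n#include <set>\n#include <utility>\n#include <vector>\n\nusing Id = int;\n\nstd::set<Id> calc_route_lanelets(\nconst std::vector<Id> & path_lanelets,                // lanelets of the planned path\nconst std::map<Id, std::vector<Id>> & neighbors,      // right/left neighbors per lanelet\nconst std::set<std::pair<Id, Id>> & lane_changeable,  // (from, to) pairs allowing lane change\nconst std::map<Id, Id> & following, const std::map<Id, Id> & previous)\n{\nstd::set<Id> route(path_lanelets.begin(), path_lanelets.end());\nstd::set<Id> candidates;\nfor (const Id lane : path_lanelets) {\nif (!neighbors.count(lane)) continue;\nfor (const Id n : neighbors.at(lane)) {\nif (lane_changeable.count({lane, n})) route.insert(n);  // step 1\nelse candidates.insert(n);                              // step 2\n}\n}\nfor (const Id c : candidates) {  // step 3\nif (following.count(c) && previous.count(c) &&\nroute.count(following.at(c)) && route.count(previous.at(c))) {\nroute.insert(c);  // passable without an explicit lane change\n}\n}\nreturn route;\n}\n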

get preferred lanelets extracts preferred_primitive from route_lanelets with the route handler.

create route sections extracts primitives from route_lanelets for each route section with the route handler, and creates route sections.

"},{"location":"planning/mission_planner/#limitations","title":"Limitations","text":""},{"location":"planning/motion_velocity_smoother/","title":"Motion Velocity Smoother","text":""},{"location":"planning/motion_velocity_smoother/#purpose","title":"Purpose","text":"

motion_velocity_smoother outputs a desired velocity profile on a reference trajectory. This module plans a velocity profile within the limits on velocity, acceleration, and jerk to achieve both high velocity and good ride quality. We call this module motion_velocity_smoother because limiting the acceleration and the jerk corresponds to smoothing the velocity profile.

"},{"location":"planning/motion_velocity_smoother/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/motion_velocity_smoother/#flow-chart","title":"Flow chart","text":""},{"location":"planning/motion_velocity_smoother/#extract-trajectory","title":"Extract trajectory","text":"

For the point on the reference trajectory closest to the center of the rear wheel axle of the vehicle, it extracts the reference path from extract_behind_dist behind that point to extract_ahead_dist ahead of it.

"},{"location":"planning/motion_velocity_smoother/#apply-external-velocity-limit","title":"Apply external velocity limit","text":"

It applies the velocity limit received from outside motion_velocity_smoother. Note that this external velocity limit is different from the velocity limits already set on the map and the reference trajectory. The external velocity limit is applied from a position at which the vehicle can reach that limit under the deceleration and jerk constraints set as parameters.
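
For intuition, with a constant deceleration a (< 0) and the jerk constraint ignored, the limit must be inserted at least (v_0^2 - v_lim^2) / (2 |a|) ahead of the current position; a minimal sketch under that simplification:

#include <cmath>\n\n// Distance needed to slow from v0 to v_limit under constant deceleration decel (< 0).\n// The actual module additionally respects the jerk limit, which lengthens this distance.\ndouble dist_to_reach_limit(double v0, double v_limit, double decel)\n{\nif (v0 <= v_limit) return 0.0;  // already below the limit\nreturn (v_limit * v_limit - v0 * v0) / (2.0 * decel);  // decel < 0, so the result is positive\n}\n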

"},{"location":"planning/motion_velocity_smoother/#apply-stop-approaching-velocity","title":"Apply stop approaching velocity","text":"

It applies a velocity limit near the stopping point. This function is used when approaching close to an obstacle or to improve the accuracy of stopping.

"},{"location":"planning/motion_velocity_smoother/#apply-lateral-acceleration-limit","title":"Apply lateral acceleration limit","text":"

It applies a velocity limit to decelerate at curves. It calculates the velocity limit from the curvature of the reference trajectory and the maximum lateral acceleration max_lateral_accel. The velocity limit is set so as not to fall below min_curve_velocity.
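
Since the lateral acceleration at velocity v on a curvature kappa is v^2 * kappa, the limit velocity is sqrt(max_lateral_accel / kappa), floored at min_curve_velocity; a minimal sketch:

#include <algorithm>\n#include <cmath>\n\n// Velocity limit so that v^2 * |curvature| <= max_lateral_accel, floored at min_curve_velocity.\ndouble curve_velocity_limit(double curvature, double max_lateral_accel, double min_curve_velocity)\n{\nconstexpr double eps = 1e-6;  // avoid division by (near) zero on straight segments\nconst double v = std::sqrt(max_lateral_accel / std::max(std::abs(curvature), eps));\nreturn std::max(v, min_curve_velocity);\n}\n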

Note: a velocity limit that would require a deceleration sharper than nominal.jerk is not applied. In other words, even if a sharp curve is planned just in front of the ego, no deceleration is performed.

"},{"location":"planning/motion_velocity_smoother/#apply-steering-rate-limit","title":"Apply steering rate limit","text":"

It calculates the desired steering angles of the trajectory points and applies the steering rate limit. If steering_angle_rate > max_steering_angle_rate, it decreases the velocity of the trajectory point to an acceptable velocity.
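
A minimal sketch of this check, assuming a bicycle model (steer angle atan(wheelbase * curvature)) and consistent angular units; names are illustrative, not the actual implementation:

#include <algorithm>\n#include <cmath>\n\n// Reduce v so that the implied steering rate between two consecutive points stays\n// below max_steering_angle_rate. Traversing the arc length ds at velocity v takes\n// ds / v, so the implied rate is |delta_next - delta_curr| * v / ds.\ndouble limit_velocity_by_steer_rate(\ndouble curvature, double next_curvature, double ds, double v, double wheelbase,\ndouble max_steering_angle_rate)\n{\nconst double delta_curr = std::atan(wheelbase * curvature);\nconst double delta_next = std::atan(wheelbase * next_curvature);\nconst double d_delta = std::abs(delta_next - delta_curr);\nif (d_delta < 1e-9) return v;  // no steering change needed\nreturn std::min(v, max_steering_angle_rate * ds / d_delta);\n}\n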

"},{"location":"planning/motion_velocity_smoother/#resample-trajectory","title":"Resample trajectory","text":"

It resamples the points on the reference trajectory at the designated time interval. Note that the total length of the trajectory is kept between min_trajectory_length and max_trajectory_length, and the distance between two points is kept longer than min_trajectory_interval_distance. It samples densely up to the distance traveled within resample_time at the current velocity (e.g., 50 m at 5 m/s with resample_time = 10 s), then samples sparsely after that. By sampling according to the velocity, finely at low velocity and coarsely at high velocity, both a light calculation load and high accuracy are achieved.

"},{"location":"planning/motion_velocity_smoother/#calculate-initial-state","title":"Calculate initial state","text":"

Calculate initial values for velocity planning. The initial values are calculated according to the situation as shown in the following table.
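
A hedged sketch of this case selection, summarizing the table below (the exact trigger conditions in the implementation are assumptions here):

#include <cmath>\n\nenum class InitState { FirstCalculation, Engaging, Deviated, Normal };\n\nInitState select_initial_state(\nbool first_calculation, double current_vel, double planned_vel,\ndouble engage_velocity, double replan_vel_deviation)\n{\nif (first_calculation) return InitState::FirstCalculation;  // v0 = current velocity, a0 = 0.0\nif (current_vel < engage_velocity && planned_vel >= engage_velocity)\nreturn InitState::Engaging;  // v0 = engage_velocity, a0 = engage_acceleration (assumed trigger)\nif (std::abs(planned_vel - current_vel) > replan_vel_deviation)\nreturn InitState::Deviated;  // v0 = current velocity, a0 = previous planned value\nreturn InitState::Normal;  // previous planned values\n}\n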

Situation Initial velocity Initial acceleration First calculation Current velocity 0.0 Engaging engage_velocity engage_acceleration Deviation between the planned velocity and the current velocity Current velocity Previous planned value Normal Previous planned value Previous planned value"},{"location":"planning/motion_velocity_smoother/#smooth-velocity","title":"Smooth velocity","text":"

It plans the velocity. The velocity planning algorithm is chosen from JerkFiltered, L2, and Linf, and it is set in the launch file. All of these algorithms use OSQP [1] as the optimization solver.

"},{"location":"planning/motion_velocity_smoother/#jerkfiltered","title":"JerkFiltered","text":"

It minimizes the sum of the negative square of the velocity (i.e., it maximizes velocity), the square of the jerk, and the squares of the violations of the velocity limit, the acceleration limit, and the jerk limit.
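
Structurally, with slack variables \\sigma for each limit violation, the objective is roughly of the following form, where w_{\\mathrm{jerk}}, w_{v}, w_{a}, and w_{j} correspond to jerk_weight, over_v_weight, over_a_weight, and over_j_weight (a sketch of the structure, not the exact implementation):

\\begin{align} J = \\sum_{k} \\left( - v_k^2 + w_{\\mathrm{jerk}} j_k^2 + w_{v} \\sigma_{v,k}^2 + w_{a} \\sigma_{a,k}^2 + w_{j} \\sigma_{j,k}^2 \\right) \\end{align}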

"},{"location":"planning/motion_velocity_smoother/#l2","title":"L2","text":"

It minimizes the sum of the negative square of the velocity, the square of the pseudo-jerk [2], and the squares of the violations of the velocity limit and the acceleration limit.

"},{"location":"planning/motion_velocity_smoother/#linf","title":"Linf","text":"

It minimizes the sum of the negative square of the velocity, the maximum absolute value of the pseudo-jerk [2], and the squares of the violations of the velocity limit and the acceleration limit.

"},{"location":"planning/motion_velocity_smoother/#post-process","title":"Post process","text":"

It performs post-processing of the planned velocity.

After the optimization, a resampling called post resampling is performed before passing the optimized trajectory to the next node. Since the required path interval from optimization may be different from the one for the next module, post resampling helps to fill this gap. Therefore, in post resampling, it is necessary to check the path specification of the following module to determine the parameters. Note that if the computational load of the optimization algorithm is high and the path interval is sparser than the path specification of the following module in the first resampling, post resampling would resample the trajectory densely. On the other hand, if the computational load of the optimization algorithm is small and the path interval is denser than the path specification of the following module in the first resampling, the path is sparsely resampled according to the specification of the following module.

"},{"location":"planning/motion_velocity_smoother/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/motion_velocity_smoother/#input","title":"Input","text":"Name Type Description ~/input/trajectory autoware_auto_planning_msgs/Trajectory Reference trajectory /planning/scenario_planning/max_velocity std_msgs/Float32 External velocity limit [m/s] /localization/kinematic_state nav_msgs/Odometry Current odometry /tf tf2_msgs/TFMessage TF /tf_static tf2_msgs/TFMessage TF static"},{"location":"planning/motion_velocity_smoother/#output","title":"Output","text":"Name Type Description ~/output/trajectory autoware_auto_planning_msgs/Trajectory Modified trajectory /planning/scenario_planning/current_max_velocity std_msgs/Float32 Current external velocity limit [m/s] ~/closest_velocity std_msgs/Float32 Planned velocity closest to ego base_link (for debug) ~/closest_acceleration std_msgs/Float32 Planned acceleration closest to ego base_link (for debug) ~/closest_jerk std_msgs/Float32 Planned jerk closest to ego base_link (for debug) ~/debug/trajectory_raw autoware_auto_planning_msgs/Trajectory Extracted trajectory (for debug) ~/debug/trajectory_external_velocity_limited autoware_auto_planning_msgs/Trajectory External velocity limited trajectory (for debug) ~/debug/trajectory_lateral_acc_filtered autoware_auto_planning_msgs/Trajectory Lateral acceleration limit filtered trajectory (for debug) ~/debug/trajectory_steering_rate_limited autoware_auto_planning_msgs/Trajectory Steering angle rate limit filtered trajectory (for debug) ~/debug/trajectory_time_resampled autoware_auto_planning_msgs/Trajectory Time resampled trajectory (for debug) ~/distance_to_stopline std_msgs/Float32 Distance to stop line from current ego pose (max 50 m) (for debug) ~/stop_speed_exceeded std_msgs/Bool It publishes true if planned velocity on the point which the maximum velocity is zero is over threshold"},{"location":"planning/motion_velocity_smoother/#parameters","title":"Parameters","text":""},{"location":"planning/motion_velocity_smoother/#constraint-parameters","title":"Constraint parameters","text":"Name Type Description Default value max_velocity double Max velocity limit [m/s] 20.0 max_accel double Max acceleration limit [m/ss] 1.0 min_decel double Min deceleration limit [m/ss] -0.5 stop_decel double Stop deceleration value at a stop point [m/ss] 0.0 max_jerk double Max jerk limit [m/sss] 1.0 min_jerk double Min jerk limit [m/sss] -0.5"},{"location":"planning/motion_velocity_smoother/#external-velocity-limit-parameter","title":"External velocity limit parameter","text":"Name Type Description Default value margin_to_insert_external_velocity_limit double margin distance to insert external velocity limit [m] 0.3"},{"location":"planning/motion_velocity_smoother/#curve-parameters","title":"Curve parameters","text":"Name Type Description Default value max_lateral_accel double Max lateral acceleration limit [m/ss] 0.5 min_curve_velocity double Min velocity at lateral acceleration limit [m/ss] 2.74 decel_distance_before_curve double Distance to slowdown before a curve for lateral acceleration limit [m] 3.5 decel_distance_after_curve double Distance to slowdown after a curve for lateral acceleration limit [m] 2.0 min_decel_for_lateral_acc_lim_filter double Deceleration limit to avoid sudden braking by the lateral acceleration filter [m/ss]. Strong limitation degrades the deceleration response to the appearance of sharp curves due to obstacle avoidance, etc. 
-2.5"},{"location":"planning/motion_velocity_smoother/#engage-replan-parameters","title":"Engage & replan parameters","text":"Name Type Description Default value replan_vel_deviation double Velocity deviation to replan initial velocity [m/s] 5.53 engage_velocity double Engage velocity threshold [m/s] (if the trajectory velocity is higher than this value, use this velocity for engage vehicle speed) 0.25 engage_acceleration double Engage acceleration [m/ss] (use this acceleration when engagement) 0.1 engage_exit_ratio double Exit engage sequence to normal velocity planning when the velocity exceeds engage_exit_ratio x engage_velocity. 0.5 stop_dist_to_prohibit_engage double If the stop point is in this distance, the speed is set to 0 not to move the vehicle [m] 0.5"},{"location":"planning/motion_velocity_smoother/#stopping-velocity-parameters","title":"Stopping velocity parameters","text":"Name Type Description Default value stopping_velocity double change target velocity to this value before v=0 point [m/s] 2.778 stopping_distance double distance for the stopping_velocity [m]. 0 means the stopping velocity is not applied. 0.0"},{"location":"planning/motion_velocity_smoother/#extraction-parameters","title":"Extraction parameters","text":"Name Type Description Default value extract_ahead_dist double Forward trajectory distance used for planning [m] 200.0 extract_behind_dist double backward trajectory distance used for planning [m] 5.0 delta_yaw_threshold double Allowed delta yaw between ego pose and trajectory pose [radian] 1.0472"},{"location":"planning/motion_velocity_smoother/#resampling-parameters","title":"Resampling parameters","text":"Name Type Description Default value max_trajectory_length double Max trajectory length for resampling [m] 200.0 min_trajectory_length double Min trajectory length for resampling [m] 30.0 resample_time double Resample total time [s] 10.0 dense_dt double resample time interval for dense sampling [s] 0.1 dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 sparse_dt double resample time interval for sparse sampling [s] 0.5 sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 4.0"},{"location":"planning/motion_velocity_smoother/#resampling-parameters-for-post-process","title":"Resampling parameters for post process","text":"Name Type Description Default value post_max_trajectory_length double max trajectory length for resampling [m] 300.0 post_min_trajectory_length double min trajectory length for resampling [m] 30.0 post_resample_time double resample total time for dense sampling [s] 10.0 post_dense_dt double resample time interval for dense sampling [s] 0.1 post_dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 post_sparse_dt double resample time interval for sparse sampling [s] 0.1 post_sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 1.0"},{"location":"planning/motion_velocity_smoother/#limit-steering-angle-rate-parameters","title":"Limit steering angle rate parameters","text":"Name Type Description Default value max_steering_angle_rate double Maximum steering angle rate [degree/s] 40.0 resample_ds double Distance between trajectory points [m] 0.1 curvature_threshold double If curvature > curvature_threshold, steeringRateLimit is triggered [1/m] 0.02 curvature_calculation_distance double Distance of points while curvature is calculating [m] 
1.0"},{"location":"planning/motion_velocity_smoother/#weights-for-optimization","title":"Weights for optimization","text":""},{"location":"planning/motion_velocity_smoother/#jerkfiltered_1","title":"JerkFiltered","text":"Name Type Description Default value jerk_weight double Weight for \"smoothness\" cost for jerk 10.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 5000.0 over_j_weight double Weight for \"over jerk limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/#l2_1","title":"L2","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/#linf_1","title":"Linf","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/#others","title":"Others","text":"Name Type Description Default value over_stop_velocity_warn_thr double Threshold to judge that the optimized velocity exceeds the input velocity on the stop point [m/s] 1.389"},{"location":"planning/motion_velocity_smoother/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/motion_velocity_smoother/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/motion_velocity_smoother/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/motion_velocity_smoother/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] B. Stellato, et al., \"OSQP: an operator splitting solver for quadratic programs\", Mathematical Programming Computation, 2020, 10.1007/s12532-020-00179-2.

[2] Y. Zhang, et al., \"Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization\", Sensors, vol. 18, no. 7, p. 2185, 2018, 10.3390/s18072185

"},{"location":"planning/motion_velocity_smoother/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/motion_velocity_smoother/README.ja/","title":"Motion Velocity Smoother","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#purpose","title":"Purpose","text":"

motion_velocity_smoother is a module that plans and outputs the desired velocity at each point on the target trajectory. To achieve both velocity maximization and ride comfort, this module plans the velocity within the pre-specified limits on velocity, acceleration, and jerk. Since limiting the acceleration and the jerk corresponds to smoothing the velocity change, this module is called motion_velocity_smoother.

"},{"location":"planning/motion_velocity_smoother/README.ja/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#flow-chart","title":"Flow chart","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#extract-trajectory","title":"Extract trajectory","text":"

For the point on the reference path closest to the center of the vehicle's rear wheel axle, the reference path is extracted from the point extract_behind_dist behind it to the point extract_ahead_dist ahead of it.

"},{"location":"planning/motion_velocity_smoother/README.ja/#apply-external-velocity-limit","title":"Apply external velocity limit","text":"

It applies the velocity limit specified from outside the module. The external velocity limit handled here is the one passed on the /planning/scenario_planning/max_velocity topic, and it is separate from the velocity limits already set on the reference path, such as the speed limits set on the map. The externally specified velocity limit is applied from a position at which the vehicle can decelerate to it within the deceleration and jerk limits specified by the parameters.

"},{"location":"planning/motion_velocity_smoother/README.ja/#apply-stop-approaching-velocity","title":"Apply stop approaching velocity","text":"

It sets the velocity for approaching a stop point. This is used, for example, to approach close to an obstacle or to improve stopping accuracy.

"},{"location":"planning/motion_velocity_smoother/README.ja/#apply-lateral-acceleration-limit","title":"Apply lateral acceleration limit","text":"

According to the curvature of the path, it sets as the velocity limit a velocity that does not exceed the specified maximum lateral acceleration max_lateral_accel. However, the velocity limit is set so as not to fall below min_curve_velocity.

"},{"location":"planning/motion_velocity_smoother/README.ja/#resample-trajectory","title":"Resample trajectory","text":"

It resamples the points on the path at the specified time interval. The resampling is done so that the total length of the path stays between min_trajectory_length and max_trajectory_length, and so that the interval between points does not become smaller than min_trajectory_interval_distance. It samples densely up to the distance traveled within resample_time at the current velocity, and sparsely after that. By sampling this way, the points are dense at low speed and sparse at high speed, achieving both stopping accuracy and a light computational load.

"},{"location":"planning/motion_velocity_smoother/README.ja/#calculate-initial-state","title":"Calculate initial state","text":"

It calculates the initial values for velocity planning. The initial values are calculated according to the situation as shown in the table below.

Situation Initial velocity Initial acceleration First calculation Current velocity 0.0 Engaging engage_velocity engage_acceleration Deviation between the current velocity and the planned velocity Current velocity Previous planned value Normal Previous planned value Previous planned value"},{"location":"planning/motion_velocity_smoother/README.ja/#smooth-velocity","title":"Smooth velocity","text":"

It plans the velocity. The velocity planning algorithm is selected in the configuration from the three types JerkFiltered, L2, and Linf. OSQP [1] is used as the optimization solver.

"},{"location":"planning/motion_velocity_smoother/README.ja/#jerkfiltered","title":"JerkFiltered","text":"

It minimizes the sum of the square of the velocity (expressed as a negative value for minimization), the square of the velocity limit violation, the square of the acceleration limit violation, the square of the jerk limit violation, and the square of the jerk.

"},{"location":"planning/motion_velocity_smoother/README.ja/#l2","title":"L2","text":"

It minimizes the sum of the square of the velocity (expressed as a negative value for minimization), the square of the velocity limit violation, the square of the acceleration limit violation, and the square of the pseudo-jerk [2].

"},{"location":"planning/motion_velocity_smoother/README.ja/#linf","title":"Linf","text":"

It minimizes the sum of the square of the velocity (expressed as a negative value for minimization), the square of the velocity limit violation, the square of the acceleration limit violation, and the maximum absolute value of the pseudo-jerk [2].

"},{"location":"planning/motion_velocity_smoother/README.ja/#post-process","title":"Post process","text":"

It performs post-processing of the planned trajectory.

After the optimization finishes, a resampling called post resampling is performed before the path is passed to the next node. The reason for resampling again here is that the path interval required before the optimization does not necessarily match the path interval to be passed to the following module, and this resampling fills that gap. Therefore, in post resampling, the parameters must be determined by checking the path specification of the following module. Note that if the computational load of the optimization algorithm is high and the path interval after the first resampling is sparser than the path specification of the following module, post resampling resamples the path densely. Conversely, if the computational load of the optimization algorithm is small and the path interval after the first resampling is denser than the path specification of the following module, post resampling resamples the path sparsely to match that specification.

"},{"location":"planning/motion_velocity_smoother/README.ja/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#input","title":"Input","text":"Name Type Description ~/input/trajectory autoware_auto_planning_msgs/Trajectory Reference trajectory /planning/scenario_planning/max_velocity std_msgs/Float32 External velocity limit [m/s] /localization/kinematic_state nav_msgs/Odometry Current odometry /tf tf2_msgs/TFMessage TF /tf_static tf2_msgs/TFMessage TF static"},{"location":"planning/motion_velocity_smoother/README.ja/#output","title":"Output","text":"Name Type Description ~/output/trajectory autoware_auto_planning_msgs/Trajectory Modified trajectory /planning/scenario_planning/current_max_velocity std_msgs/Float32 Current external velocity limit [m/s] ~/closest_velocity std_msgs/Float32 Planned velocity closest to ego base_link (for debug) ~/closest_acceleration std_msgs/Float32 Planned acceleration closest to ego base_link (for debug) ~/closest_jerk std_msgs/Float32 Planned jerk closest to ego base_link (for debug) ~/debug/trajectory_raw autoware_auto_planning_msgs/Trajectory Extracted trajectory (for debug) ~/debug/trajectory_external_velocity_limited autoware_auto_planning_msgs/Trajectory External velocity limited trajectory (for debug) ~/debug/trajectory_lateral_acc_filtered autoware_auto_planning_msgs/Trajectory Lateral acceleration limit filtered trajectory (for debug) ~/debug/trajectory_time_resampled autoware_auto_planning_msgs/Trajectory Time resampled trajectory (for debug) ~/distance_to_stopline std_msgs/Float32 Distance to stop line from current ego pose (max 50 m) (for debug) ~/stop_speed_exceeded std_msgs/Bool It publishes true if planned velocity on the point which the maximum velocity is zero is over threshold"},{"location":"planning/motion_velocity_smoother/README.ja/#parameters","title":"Parameters","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#constraint-parameters","title":"Constraint parameters","text":"Name Type Description Default value max_velocity double Max velocity limit [m/s] 20.0 max_accel double Max acceleration limit [m/ss] 1.0 min_decel double Min deceleration limit [m/ss] -0.5 stop_decel double Stop deceleration value at a stop point [m/ss] 0.0 max_jerk double Max jerk limit [m/sss] 1.0 min_jerk double Min jerk limit [m/sss] -0.5"},{"location":"planning/motion_velocity_smoother/README.ja/#external-velocity-limit-parameter","title":"External velocity limit parameter","text":"Name Type Description Default value margin_to_insert_external_velocity_limit double margin distance to insert external velocity limit [m] 0.3"},{"location":"planning/motion_velocity_smoother/README.ja/#curve-parameters","title":"Curve parameters","text":"Name Type Description Default value max_lateral_accel double Max lateral acceleration limit [m/ss] 0.5 min_curve_velocity double Min velocity at lateral acceleration limit [m/ss] 2.74 decel_distance_before_curve double Distance to slowdown before a curve for lateral acceleration limit [m] 3.5 decel_distance_after_curve double Distance to slowdown after a curve for lateral acceleration limit [m] 2.0"},{"location":"planning/motion_velocity_smoother/README.ja/#engage-replan-parameters","title":"Engage & replan parameters","text":"Name Type Description Default value replan_vel_deviation double Velocity deviation to replan initial velocity [m/s] 5.53 engage_velocity double Engage velocity threshold [m/s] (if the trajectory velocity is higher than this value, 
use this velocity for engage vehicle speed) 0.25 engage_acceleration double Engage acceleration [m/ss] (use this acceleration when engagement) 0.1 engage_exit_ratio double Exit engage sequence to normal velocity planning when the velocity exceeds engage_exit_ratio x engage_velocity. 0.5 stop_dist_to_prohibit_engage double If the stop point is in this distance, the speed is set to 0 not to move the vehicle [m] 0.5"},{"location":"planning/motion_velocity_smoother/README.ja/#stopping-velocity-parameters","title":"Stopping velocity parameters","text":"Name Type Description Default value stopping_velocity double change target velocity to this value before v=0 point [m/s] 2.778 stopping_distance double distance for the stopping_velocity [m]. 0 means the stopping velocity is not applied. 0.0"},{"location":"planning/motion_velocity_smoother/README.ja/#extraction-parameters","title":"Extraction parameters","text":"Name Type Description Default value extract_ahead_dist double Forward trajectory distance used for planning [m] 200.0 extract_behind_dist double backward trajectory distance used for planning [m] 5.0 delta_yaw_threshold double Allowed delta yaw between ego pose and trajectory pose [radian] 1.0472"},{"location":"planning/motion_velocity_smoother/README.ja/#resampling-parameters","title":"Resampling parameters","text":"Name Type Description Default value max_trajectory_length double Max trajectory length for resampling [m] 200.0 min_trajectory_length double Min trajectory length for resampling [m] 30.0 resample_time double Resample total time [s] 10.0 dense_resample_dt double resample time interval for dense sampling [s] 0.1 dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 sparse_resample_dt double resample time interval for sparse sampling [s] 0.5 sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 4.0"},{"location":"planning/motion_velocity_smoother/README.ja/#resampling-parameters-for-post-process","title":"Resampling parameters for post process","text":"Name Type Description Default value post_max_trajectory_length double max trajectory length for resampling [m] 300.0 post_min_trajectory_length double min trajectory length for resampling [m] 30.0 post_resample_time double resample total time for dense sampling [s] 10.0 post_dense_resample_dt double resample time interval for dense sampling [s] 0.1 post_dense_min_interval_distance double minimum points-interval length for dense sampling [m] 0.1 post_sparse_resample_dt double resample time interval for sparse sampling [s] 0.1 post_sparse_min_interval_distance double minimum points-interval length for sparse sampling [m] 1.0"},{"location":"planning/motion_velocity_smoother/README.ja/#weights-for-optimization","title":"Weights for optimization","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#jerkfiltered_1","title":"JerkFiltered","text":"Name Type Description Default value jerk_weight double Weight for \"smoothness\" cost for jerk 10.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 5000.0 over_j_weight double Weight for \"over jerk limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/README.ja/#l2_1","title":"L2","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 
1000.0"},{"location":"planning/motion_velocity_smoother/README.ja/#linf_1","title":"Linf","text":"Name Type Description Default value pseudo_jerk_weight double Weight for \"smoothness\" cost 100.0 over_v_weight double Weight for \"over speed limit\" cost 100000.0 over_a_weight double Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/README.ja/#others","title":"Others","text":"Name Type Description Default value over_stop_velocity_warn_thr double Threshold to judge that the optimized velocity exceeds the input velocity on the stop point [m/s] 1.389"},{"location":"planning/motion_velocity_smoother/README.ja/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] B. Stellato, et al., \"OSQP: an operator splitting solver for quadratic programs\", Mathematical Programming Computation, 2020, 10.1007/s12532-020-00179-2.

[2] Y. Zhang, et al., \"Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization\", Sensors, vol. 18, no. 7, p. 2185, 2018, 10.3390/s18072185

"},{"location":"planning/motion_velocity_smoother/README.ja/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/obstacle_avoidance_planner/","title":"Index","text":""},{"location":"planning/obstacle_avoidance_planner/#purpose","title":"Purpose","text":"

This package generates a trajectory that is kinematically feasible to drive and collision-free, based on the input path and drivable area. Only the position and orientation of the trajectory are updated in this module; the velocity is just taken over from the input path.

"},{"location":"planning/obstacle_avoidance_planner/#feature","title":"Feature","text":"

This package is able to

Note that the velocity is just taken over from the input path.

"},{"location":"planning/obstacle_avoidance_planner/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/obstacle_avoidance_planner/#input","title":"input","text":"Name Type Description ~/input/path autoware_auto_planning_msgs/msg/Path Reference path and the corresponding drivable area ~/input/odometry nav_msgs/msg/Odometry Current Velocity of ego vehicle"},{"location":"planning/obstacle_avoidance_planner/#output","title":"output","text":"Name Type Description ~/output/trajectory autoware_auto_planning_msgs/msg/Trajectory Optimized trajectory that is feasible to drive and collision-free"},{"location":"planning/obstacle_avoidance_planner/#flowchart","title":"Flowchart","text":"

The flowchart of functions is explained here.

"},{"location":"planning/obstacle_avoidance_planner/#createplannerdata","title":"createPlannerData","text":"

The following data for planning is created.

struct PlannerData\n{\n// input\nHeader header;\nstd::vector<TrajectoryPoint> traj_points; // converted from the input path\nstd::vector<geometry_msgs::msg::Point> left_bound;\nstd::vector<geometry_msgs::msg::Point> right_bound;\n\n// ego\ngeometry_msgs::msg::Pose ego_pose;\ndouble ego_vel;\n};\n
"},{"location":"planning/obstacle_avoidance_planner/#check-replan","title":"check replan","text":"

When one of the following conditions is met, trajectory optimization is executed. Otherwise, the previously optimized trajectory is reused, with its velocity updated from the latest input path.

max_path_shape_around_ego_lat_dist

"},{"location":"planning/obstacle_avoidance_planner/#getebtrajectory","title":"getEBTrajectory","text":"

The latter optimization (model predictive trajectory) assumes that the reference path is smooth enough. This function smooths the input path with an elastic band.

More details about elastic band can be seen here.

"},{"location":"planning/obstacle_avoidance_planner/#getmodelpredictivetrajectory","title":"getModelPredictiveTrajectory","text":"

This module makes the trajectory kinematically feasible and collision-free. We define the vehicle pose in the Frenet coordinate system and minimize tracking errors by optimization. This optimization considers the vehicle kinematics and collision checking against road boundaries and obstacles. To decrease the computation cost, the optimization is applied to a trajectory shorter than the whole trajectory (default: 50 [m]), and the remaining trajectory is concatenated with the optimized one at the end.

The trajectory just in front of the ego must not change significantly, so that the steering remains stable. Therefore, we reuse the previously generated trajectory in front of the ego.

The optimization center on the vehicle, which the optimizer tries to place exactly on the trajectory, can be tuned along the vehicle's longitudinal axis. This parameter, mpt.kinematics.optimization_center_offset, is defined as the signed length from the back-wheel center to the optimization center. Some examples are shown in the following figure; the swept shape of the vehicle differs according to the optimization center even if the reference trajectory (the green one) is the same.

More details can be seen here.

"},{"location":"planning/obstacle_avoidance_planner/#insertzerovelocityoutsidedrivablearea","title":"insertZeroVelocityOutsideDrivableArea","text":"

The optimized trajectory is too short for velocity planning; therefore, the trajectory is extended by concatenating the optimized trajectory and the behavior path, considering drivability. The generated trajectory is checked to see whether it is inside the drivable area, and if it is outside, a trajectory inside the drivable area is output using the behavior path or the previously generated trajectory.

As described above, the behavior path is separated into two paths: one is for optimization and the other is the remainder. The first path becomes the optimized trajectory, and the second path is simply transformed into a trajectory. Then a trajectory inside the drivable area is calculated as follows.

Optimization failure is handled in the same way as when the optimized trajectory is outside the drivable area. The output trajectory is memorized as the previously generated trajectory for the next cycle.

Rationale: In the current design, since there are some modelling errors, the constraints are treated as soft constraints. Therefore, we have to check whether the optimized trajectory is inside the drivable area after optimization.

"},{"location":"planning/obstacle_avoidance_planner/#alignvelocity","title":"alignVelocity","text":"

Velocity is assigned to the resulting trajectory from the velocity in the behavior path. Since the shapes of the trajectory and the path are different, for each trajectory point the nearest path point is searched and the velocity is interpolated linearly.
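
A minimal sketch of this alignment with a brute-force nearest-point search (the actual module also interpolates linearly between neighboring path points; types and names are illustrative):

#include <cmath>\n#include <vector>\n\nstruct Point2D { double x; double y; double velocity; };\n\n// Assign to each trajectory point the velocity of the nearest path point.\nvoid align_velocity(std::vector<Point2D> & trajectory, const std::vector<Point2D> & path)\n{\nfor (auto & tp : trajectory) {\ndouble best_dist = 1e18;\nfor (const auto & pp : path) {\nconst double d = std::hypot(tp.x - pp.x, tp.y - pp.y);\nif (d < best_dist) { best_dist = d; tp.velocity = pp.velocity; }\n}\n}\n}\n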

"},{"location":"planning/obstacle_avoidance_planner/#limitation","title":"Limitation","text":""},{"location":"planning/obstacle_avoidance_planner/#comparison-to-other-methods","title":"Comparison to other methods","text":"

The trajectory planning problem that requires kinematic feasibility and collision freeness has two main characteristics that make it hard to solve: it is non-convex, and it is high-dimensional. Based on these characteristics, we investigate the pros and cons of the typical planning methods: optimization-based, sampling-based, and learning-based methods.

"},{"location":"planning/obstacle_avoidance_planner/#optimization-based-method","title":"Optimization-based method","text":""},{"location":"planning/obstacle_avoidance_planner/#sampling-based-method","title":"Sampling-based method","text":""},{"location":"planning/obstacle_avoidance_planner/#learning-based-method","title":"Learning-based method","text":"

Based on these pros and cons, we chose the optimization-based planner first. Although it has the drawback of converging to local minima, it can obtain a good solution through preprocessing that approximates the problem with a convex one that is almost equal to the original non-convex problem.

"},{"location":"planning/obstacle_avoidance_planner/#how-to-tune-parameters","title":"How to Tune Parameters","text":""},{"location":"planning/obstacle_avoidance_planner/#drivability-in-narrow-roads","title":"Drivability in narrow roads","text":""},{"location":"planning/obstacle_avoidance_planner/#computation-time","title":"Computation time","text":" "},{"location":"planning/obstacle_avoidance_planner/#robustness","title":"Robustness","text":""},{"location":"planning/obstacle_avoidance_planner/#other-options","title":"Other options","text":""},{"location":"planning/obstacle_avoidance_planner/#how-to-debug","title":"How To Debug","text":"

How to debug can be seen here.

"},{"location":"planning/obstacle_avoidance_planner/docs/debug/","title":"Debug","text":""},{"location":"planning/obstacle_avoidance_planner/docs/debug/#debug-visualization","title":"Debug visualization","text":"

Topics for debugging will be explained in this section.

"},{"location":"planning/obstacle_avoidance_planner/docs/debug/#calculation-cost","title":"Calculation cost","text":"

The obstacle avoidance planner consists of many functions, such as clearance map generation, boundary search, reference path smoothing, and trajectory optimization. We can see the calculation time of each function as follows.

"},{"location":"planning/obstacle_avoidance_planner/docs/debug/#raw-data","title":"Raw data","text":"
$ ros2 topic echo /planning/scenario_planning/lane_driving/motion_planning/obstacle_avoidance_planner/debug/calculation_time --field data\n\n      getDrivableAreaInCV:= 0.21 [ms]\ngetClearanceMap:= 1.327 [ms]\ndrawObstaclesOnImage:= 0.436 [ms]\ngetAreaWithObjects:= 0.029 [ms]\ngetClearanceMap:= 2.186 [ms]\ngetMaps:= 4.213 [ms]\ncalculateTrajectory:= 2.417 [ms]\ngetOptimizedTrajectory:= 5.203 [ms]\ngetEBTrajectory:= 5.231 [ms]\ncalcBounds:= 0.821 [ms]\ncalcVehicleBounds:= 0.27 [ms]\ngetReferencePoints:= 2.866 [ms]\ngenerateMPTMatrix:= 0.195 [ms]\ngenerateValueMatrix:= 0.019 [ms]\ngetObjectiveMatrix:= 0.559 [ms]\ngetConstraintMatrix:= 1.776 [ms]\ninitOsqp:= 9.074 [ms]\nsolveOsqp:= 3.809 [ms]\nexecuteOptimization:= 15.46 [ms]\ngetMPTPoints:= 0.049 [ms]\ngetModelPredictiveTrajectory:= 18.928 [ms]\noptimizeTrajectory:= 24.234 [ms]\ninsertZeroVelocityOutsideDrivableArea:= 0.023 [ms]\ngetDebugVisualizationMarker:= 0.446 [ms]\npublishDebugVisualizationMarker:= 2.146 [ms]\ngetDebugVisualizationWallMarker:= 0.001 [ms]\npublishDebugVisualizationWallMarker:= 0.013 [ms]\npublishDebugDataInOptimization:= 2.696 [ms]\ngetExtendedTrajectory:= 0.016 [ms]\ngenerateFineTrajectoryPoints:= 0.118 [ms]\nalignVelocity:= 1.227 [ms]\ngeneratePostProcessedTrajectory:= 1.375 [ms]\nmakePrevTrajectories:= 1.411 [ms]\ngenerateOptimizedTrajectory:= 33.284 [ms]\ngetExtendedTrajectory:= 0.018 [ms]\ngenerateFineTrajectoryPoints:= 0.084 [ms]\nalignVelocity:= 1.109 [ms]\ngeneratePostProcessedTrajectory:= 1.217 [ms]\ngetDebugCostMap * 3:= 0 [ms]\npublishDebugDataInMain:= 0.023 [ms]\npathCallback:= 34.614 [ms]\n
"},{"location":"planning/obstacle_avoidance_planner/docs/debug/#plot","title":"Plot","text":"

With a script, we can plot some of the above times as follows.

python3 scripts/calclation_time_analyzer.py -f \"solveOsqp,initOsqp,pathCallback\"\n

"},{"location":"planning/obstacle_avoidance_planner/docs/eb/","title":"Elastic band","text":""},{"location":"planning/obstacle_avoidance_planner/docs/eb/#abstract","title":"Abstract","text":"

The elastic band smooths the input path. Since the latter optimization (model predictive trajectory) is calculated in the Frenet frame, path smoothing is applied here so that the latter optimization will be stable.

Note that this smoothing process does not consider collision checking. Therefore, the output path may collide with road boundaries or obstacles.

"},{"location":"planning/obstacle_avoidance_planner/docs/eb/#flowchart","title":"Flowchart","text":""},{"location":"planning/obstacle_avoidance_planner/docs/eb/#formulation","title":"Formulation","text":"

We formulate a quadratic problem that minimizes, at each point, the length of the diagonal of the rhombus (shown as the red vector's length) formed by the current point and its previous and next points.

Assuming that the k-th point is \boldsymbol{p}_k = (x_k, y_k), the objective function is as follows.

\\begin{align} \\ J & = \\min \\sum_{k=1}^{n-2} ||(\\boldsymbol{p}_{k+1} - \\boldsymbol{p}_{k}) - (\\boldsymbol{p}_{k} - \\boldsymbol{p}_{k-1})||^2 \\\\ \\ & = \\min \\sum_{k=1}^{n-2} ||\\boldsymbol{p}_{k+1} - 2 \\boldsymbol{p}_{k} + \\boldsymbol{p}_{k-1}||^2 \\\\ \\ & = \\min \\sum_{k=1}^{n-2} \\{(x_{k+1} - x_k + x_{k-1})^2 + (y_{k+1} - y_k + y_{k-1})^2\\} \\\\ \\ & = \\min \\begin{pmatrix} \\ x_0 \\\\ \\ x_1 \\\\ \\ x_2 \\\\ \\vdots \\\\ \\ x_{n-3}\\\\ \\ x_{n-2} \\\\ \\ x_{n-1} \\\\ \\ y_0 \\\\ \\ y_1 \\\\ \\ y_2 \\\\ \\vdots \\\\ \\ y_{n-3}\\\\ \\ y_{n-2} \\\\ \\ y_{n-1} \\\\ \\end{pmatrix}^T \\begin{pmatrix} 1 & -2 & 1 & 0 & \\dots& \\\\ -2 & 5 & -4 & 1 & 0 &\\dots \\\\ 1 & -4 & 6 & -4 & 1 & \\\\ 0 & 1 & -4 & 6 & -4 & \\\\ \\vdots & 0 & \\ddots&\\ddots& \\ddots \\\\ & \\vdots & & & \\\\ & & & 1 & -4 & 6 & -4 & 1 \\\\ & & & & 1 & -4 & 5 & -2 \\\\ & & & & & 1 & -2 & 1& \\\\ & & & & & & & &1 & -2 & 1 & 0 & \\dots& \\\\ & & & & & & & &-2 & 5 & -4 & 1 & 0 &\\dots \\\\ & & & & & & & &1 & -4 & 6 & -4 & 1 & \\\\ & & & & & & & &0 & 1 & -4 & 6 & -4 & \\\\ & & & & & & & &\\vdots & 0 & \\ddots&\\ddots& \\ddots \\\\ & & & & & & & & & \\vdots & & & \\\\ & & & & & & & & & & & 1 & -4 & 6 & -4 & 1 \\\\ & & & & & & & & & & & & 1 & -4 & 5 & -2 \\\\ & & & & & & & & & & & & & 1 & -2 & 1& \\\\ \\end{pmatrix} \\begin{pmatrix} \\ x_0 \\\\ \\ x_1 \\\\ \\ x_2 \\\\ \\vdots \\\\ \\ x_{n-3}\\\\ \\ x_{n-2} \\\\ \\ x_{n-1} \\\\ \\ y_0 \\\\ \\ y_1 \\\\ \\ y_2 \\\\ \\vdots \\\\ \\ y_{n-3}\\\\ \\ y_{n-2} \\\\ \\ y_{n-1} \\\\ \\end{pmatrix} \\end{align}

Regarding the constraints, the distance that each point can move is limited so that the path will not change a lot but will become smoother. In detail, the longitudinal distance that each point can move is zero, and the lateral distance is parameterized as eb.clearance.clearance_for_fix, eb.clearance.clearance_for_joint and eb.clearance.clearance_for_smooth.

The following figure describes how to constrain the lateral distance to move. The red line is where the point can move. The points for the upper and lower bound are described as (x_k^u, y_k^u) and (x_k^l, y_k^l), respectively.

Based on the line equation whose slope angle is \\theta_k and that passes through (x_k, y_k), (x_k^u, y_k^u) and (x_k^l, y_k^l), the lateral constraint is formulated as follows.

C_k^l \\leq C_k \\leq C_k^u

In addition, the beginning point is fixed, and the end point is fixed as well if it is considered to be the goal. These constraints can be applied with the above equation by changing the distance that each point can move.

"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/","title":"Model predictive trajectory","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#abstract","title":"Abstract","text":"

Model Predictive Trajectory (MPT) calculates the trajectory that meets the following conditions.

The collision-free conditions are treated not as hard constraints but as soft constraints. When the optimization fails or the optimized trajectory is not collision-free, the output trajectory is the previously generated trajectory.

The trajectory near the ego must be stable. Therefore, the condition that the trajectory points near the ego are the same as the previously generated trajectory is imposed, and this is the only hard constraint in MPT.

"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#flowchart","title":"Flowchart","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#vehicle-kinematics","title":"Vehicle kinematics","text":"

As shown in the following figure, we consider the bicycle kinematics model in the Frenet frame to track the reference path. At time step k, we define the lateral distance to the reference path, the heading angle against the reference path, and the steer angle as y_k, \theta_k, and \delta_k, respectively.

Assuming that the commanded steer angle is \delta_{des, k}, the kinematics model in the Frenet frame is formulated as follows. We also assume that the steer angle \delta_k follows a first-order lag with respect to the commanded one.

\\begin{align} y_{k+1} & = y_{k} + v \\sin \\theta_k dt \\\\ \\theta_{k+1} & = \\theta_k + \\frac{v \\tan \\delta_k}{L}dt - \\kappa_k v \\cos \\theta_k dt \\\\ \\delta_{k+1} & = \\delta_k - \\frac{\\delta_k - \\delta_{des,k}}{\\tau}dt \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#linearization","title":"Linearization","text":"

Then we linearize these equations. y_k and \\theta_k are tracking errors, so we assume that those are small enough. Therefore \\sin \\theta_k \\approx \\theta_k.

Since \\delta_k is a steer angle, it is not always small. By using a reference steer angle \\delta_{\\mathrm{ref}, k} calculated by the reference path curvature \\kappa_k, we express \\delta_k with a small value \\Delta \\delta_k.

Note that the steer angle \\delta_k is within the steer angle limitation \\delta_{\\max}. When the reference steer angle \\delta_{\\mathrm{ref}, k} is larger than the steer angle limitation \\delta_{\\max}, and \\delta_{\\mathrm{ref}, k} is used to linearize the steer angle, \\Delta \\delta_k is \\Delta \\delta_k = \\delta - \\delta_{\\mathrm{ref}, k} = \\delta_{\\max} - \\delta_{\\mathrm{ref}, k}, and the absolute \\Delta \\delta_k gets larger. Therefore, we have to apply the steer angle limitation to \\delta_{\\mathrm{ref}, k} as well.

\\begin{align} \\delta_{\\mathrm{ref}, k} & = \\mathrm{clamp}(\\arctan(L \\kappa_k), -\\delta_{\\max}, \\delta_{\\max}) \\\\ \\delta_k & = \\delta_{\\mathrm{ref}, k} + \\Delta \\delta_k, \\ \\Delta \\delta_k \\ll 1 \\\\ \\end{align}

\\mathrm{clamp}(v, v_{\\min}, v_{\\max}) is a function to convert v to be larger than v_{\\min} and smaller than v_{\\max}.

Using this \\delta_{\\mathrm{ref}, k}, \\tan \\delta_k is linearized as follows.

\\begin{align} \\tan \\delta_k & \\approx \\tan \\delta_{\\mathrm{ref}, k} + \\left.\\frac{d \\tan \\delta}{d \\delta}\\right|_{\\delta = \\delta_{\\mathrm{ref}, k}} \\Delta \\delta_k \\\\ & = \\tan \\delta_{\\mathrm{ref}, k} + \\left.\\frac{d \\tan \\delta}{d \\delta}\\right|_{\\delta = \\delta_{\\mathrm{ref}, k}} (\\delta_{\\mathrm{ref}, k} - \\delta_k) \\\\ & = \\tan \\delta_{\\mathrm{ref}, k} - \\frac{\\delta_{\\mathrm{ref}, k}}{\\cos^2 \\delta_{\\mathrm{ref}, k}} + \\frac{1}{\\cos^2 \\delta_{\\mathrm{ref}, k}} \\delta_k \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#one-step-state-equation","title":"One-step state equation","text":"

Based on the linearization, the error kinematics is formulated with the following linear equations,

\\begin{align} \\begin{pmatrix} y_{k+1} \\\\ \\theta_{k+1} \\end{pmatrix} = \\begin{pmatrix} 1 & v dt \\\\ 0 & 1 \\\\ \\end{pmatrix} \\begin{pmatrix} y_k \\\\ \\theta_k \\\\ \\end{pmatrix} + \\begin{pmatrix} 0 \\\\ \\frac{v dt}{L \\cos^{2} \\delta_{\\mathrm{ref}, k}} \\\\ \\end{pmatrix} \\delta_{k} + \\begin{pmatrix} 0 \\\\ \\frac{v \\tan(\\delta_{\\mathrm{ref}, k}) dt}{L} - \\frac{v \\delta_{\\mathrm{ref}, k} dt}{L \\cos^{2} \\delta_{\\mathrm{ref}, k}} - \\kappa_k v dt\\\\ \\end{pmatrix} \\end{align}

which can be formulated as follows with the state \boldsymbol{x}_k, the control input u_k, and some matrices, where \boldsymbol{x}_k = (y_k, \theta_k)^T

\\begin{align} \\boldsymbol{x}_{k+1} = A_k \\boldsymbol{x}_k + \\boldsymbol{b}_k u_k + \\boldsymbol{w}_k \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#time-series-state-equation","title":"Time-series state equation","text":"

Then, we formulate the time-series state equation by concatenating the states, control inputs, and matrices, respectively, as

\\begin{align} \\boldsymbol{x} = A \\boldsymbol{x}_0 + B \\boldsymbol{u} + \\boldsymbol{w} \\end{align}

where

\\begin{align} \\boldsymbol{x} = (\\boldsymbol{x}^T_1, \\boldsymbol{x}^T_2, \\boldsymbol{x}^T_3, \\dots, \\boldsymbol{x}^T_{n-1})^T \\\\ \\boldsymbol{u} = (u_0, u_1, u_2, \\dots, u_{n-2})^T \\\\ \\boldsymbol{w} = (\\boldsymbol{w}^T_0, \\boldsymbol{w}^T_1, \\boldsymbol{w}^T_2, \\dots, \\boldsymbol{w}^T_{n-1})^T. \\\\ \\end{align}

In detail, each matrix is constructed as follows.

\\begin{align} \\begin{pmatrix} \\boldsymbol{x}_1 \\\\ \\boldsymbol{x}_2 \\\\ \\boldsymbol{x}_3 \\\\ \\vdots \\\\ \\boldsymbol{x}_{n-1} \\end{pmatrix} = \\begin{pmatrix} A_0 \\\\ A_1 A_0 \\\\ A_2 A_1 A_0\\\\ \\vdots \\\\ \\prod\\limits_{k=0}^{n-1} A_{k} \\end{pmatrix} \\boldsymbol{x}_0 + \\begin{pmatrix} B_0 & 0 & & \\dots & 0 \\\\ A_0 B_0 & B_1 & 0 & \\dots & 0 \\\\ A_1 A_0 B_0 & A_0 B_1 & B_2 & \\dots & 0 \\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-3} A_k B_0 & \\prod\\limits_{k=0}^{n-4} A_k B_1 & \\dots & A_0 B_{n-3} & B_{n-2} \\end{pmatrix} \\begin{pmatrix} u_0 \\\\ u_1 \\\\ u_2 \\\\ \\vdots \\\\ u_{n-2} \\end{pmatrix} + \\begin{pmatrix} I & 0 & & \\dots & 0 \\\\ A_0 & I & 0 & \\dots & 0 \\\\ A_1 A_0 & A_0 & I & \\dots & 0 \\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-3} A_k & \\prod\\limits_{k=0}^{n-4} A_k & \\dots & A_0 & I \\end{pmatrix} \\begin{pmatrix} \\boldsymbol{w}_0 \\\\ \\boldsymbol{w}_1 \\\\ \\boldsymbol{w}_2 \\\\ \\vdots \\\\ \\boldsymbol{w}_{n-2} \\end{pmatrix} \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#free-boundary-conditioned-time-series-state-equation","title":"Free-boundary-conditioned time-series state equation","text":"

For path planning that does not start from the current ego pose, \boldsymbol{x}_0 should be a design variable of the optimization. Therefore, we make \boldsymbol{u}' by concatenating \boldsymbol{x}_0 and \boldsymbol{u}, and redefine \boldsymbol{x} as follows.

\\begin{align} \\boldsymbol{u}' & = (\\boldsymbol{x}^T_0, \\boldsymbol{u}^T)^T \\\\ \\boldsymbol{x} & = (\\boldsymbol{x}^T_0, \\boldsymbol{x}^T_1, \\boldsymbol{x}^T_2, \\dots, \\boldsymbol{x}^T_{n-1})^T \\end{align}

Then we get the following state equation

\\begin{align} \\boldsymbol{x}' = B \\boldsymbol{u}' + \\boldsymbol{w}, \\end{align}

which is in detail

\\begin{align} \\begin{pmatrix} \\boldsymbol{x}_0 \\\\ \\boldsymbol{x}_1 \\\\ \\boldsymbol{x}_2 \\\\ \\boldsymbol{x}_3 \\\\ \\vdots \\\\ \\boldsymbol{x}_{n-1} \\end{pmatrix} = \\begin{pmatrix} I & 0 & \\dots & & & 0 \\\\ A_0 & B_0 & 0 & & \\dots & 0 \\\\ A_1 A_0 & A_0 B_0 & B_1 & 0 & \\dots & 0 \\\\ A_2 A_1 A_0 & A_1 A_0 B_0 & A_0 B_1 & B_2 & \\dots & 0 \\\\ \\vdots & \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-1} A_k & \\prod\\limits_{k=0}^{n-3} A_k B_0 & \\prod\\limits_{k=0}^{n-4} A_k B_1 & \\dots & A_0 B_{n-3} & B_{n-2} \\end{pmatrix} \\begin{pmatrix} \\boldsymbol{x}_0 \\\\ u_0 \\\\ u_1 \\\\ u_2 \\\\ \\vdots \\\\ u_{n-2} \\end{pmatrix} + \\begin{pmatrix} 0 & \\dots & & & 0 \\\\ I & 0 & & \\dots & 0 \\\\ A_0 & I & 0 & \\dots & 0 \\\\ A_1 A_0 & A_0 & I & \\dots & 0 \\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-3} A_k & \\prod\\limits_{k=0}^{n-4} A_k & \\dots & A_0 & I \\end{pmatrix} \\begin{pmatrix} \\boldsymbol{w}_0 \\\\ \\boldsymbol{w}_1 \\\\ \\boldsymbol{w}_2 \\\\ \\vdots \\\\ \\boldsymbol{w}_{n-2} \\end{pmatrix}. \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#objective-function","title":"Objective function","text":"

The objective function for smoothing and tracking is shown below; it can be written in quadratic form with the weight matrices Q and R.

\\begin{align} J_1 (\\boldsymbol{x}', \\boldsymbol{u}') & = w_y \\sum_{k} y_k^2 + w_{\\theta} \\sum_{k} \\theta_k^2 + w_{\\delta} \\sum_k \\delta_k^2 + w_{\\dot{\\delta}} \\sum_k \\dot{\\delta}_k^2 + w_{\\ddot{\\delta}} \\sum_k \\ddot{\\delta}_k^2 \\\\ & = \\boldsymbol{x}'^T Q \\boldsymbol{x}' + \\boldsymbol{u}'^T R \\boldsymbol{u}' \\\\ & = \\boldsymbol{u}'^T H \\boldsymbol{u}' + \\boldsymbol{u}'^T \\boldsymbol{f} \\end{align}
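
For reference, substituting \boldsymbol{x}' = B \boldsymbol{u}' + \boldsymbol{w} into the quadratic form above yields the following H and \boldsymbol{f}. This is a standard expansion added here for clarity (assuming Q is symmetric and dropping the constant term); it is not spelled out in the original formulation.

\\begin{align} H = B^T Q B + R, \\qquad \\boldsymbol{f} = 2 B^T Q \\boldsymbol{w} \\end{align}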

As mentioned before, the constraints for being collision-free with obstacles and road boundaries are formulated as soft constraints. Assuming that the lateral distances to the road boundaries or obstacles from the rear wheel center, the front wheel center, and the point between them are y_{\mathrm{base}, k}, y_{\mathrm{top}, k}, y_{\mathrm{mid}, k} respectively, and that the slack variables for each point are \lambda_{\mathrm{base}}, \lambda_{\mathrm{top}}, \lambda_{\mathrm{mid}}, the soft constraints can be formulated as follows.

y_{\\mathrm{base}, k, \\min} - \\lambda_{\\mathrm{base}, k} \\leq y_{\\mathrm{base}, k} (y_k) \\leq y_{\\mathrm{base}, k, \\max} + \\lambda_{\\mathrm{base}, k}\\\\ y_{\\mathrm{top}, k, \\min} - \\lambda_{\\mathrm{top}, k} \\leq y_{\\mathrm{top}, k} (y_k) \\leq y_{\\mathrm{top}, k, \\max} + \\lambda_{\\mathrm{top}, k}\\\\ y_{\\mathrm{mid}, k, \\min} - \\lambda_{\\mathrm{mid}, k} \\leq y_{\\mathrm{mid}, k} (y_k) \\leq y_{\\mathrm{mid}, k, \\max} + \\lambda_{\\mathrm{mid}, k} \\\\ 0 \\leq \\lambda_{\\mathrm{base}, k} \\\\ 0 \\leq \\lambda_{\\mathrm{top}, k} \\\\ 0 \\leq \\lambda_{\\mathrm{mid}, k}

Since y_{\mathrm{base}, k}, y_{\mathrm{top}, k}, y_{\mathrm{mid}, k} are formulated as linear functions of y_k, the objective function for the soft constraints is formulated as follows.

\\begin{align} J_2 & (\\boldsymbol{\\lambda}_\\mathrm{base}, \\boldsymbol{\\lambda}_\\mathrm{top}, \\boldsymbol {\\lambda}_\\mathrm{mid})\\\\ & = w_{\\mathrm{base}} \\sum_{k} \\lambda_{\\mathrm{base}, k} + w_{\\mathrm{mid}} \\sum_k \\lambda_{\\mathrm{mid}, k} + w_{\\mathrm{top}} \\sum_k \\lambda_{\\mathrm{top}, k} \\end{align}

Slack variables are also design variables of the optimization. We define a vector \boldsymbol{v} that concatenates all the design variables.

\\begin{align} \\boldsymbol{v} = \\begin{pmatrix} \\boldsymbol{u}'^T & \\boldsymbol{\\lambda}_\\mathrm{base}^T & \\boldsymbol{\\lambda}_\\mathrm{top}^T & \\boldsymbol{\\lambda}_\\mathrm{mid}^T \\end{pmatrix}^T \\end{align}

The summation of these two objective functions is the objective function for the optimization problem.

\\begin{align} \\min_{\\boldsymbol{v}} J (\\boldsymbol{v}) = \\min_{\\boldsymbol{v}} J_1 (\\boldsymbol{u}') + J_2 (\\boldsymbol{\\lambda}_\\mathrm{base}, \\boldsymbol{\\lambda}_\\mathrm{top}, \\boldsymbol{\\lambda}_\\mathrm{mid}) \\end{align}

As mentioned before, we use hard constraints so that some trajectory points in front of the ego remain the same as the previously generated trajectory points. These hard constraints are formulated as follows.

\\begin{align} \\delta_k = \\delta_{k}^{\\mathrm{prev}} \\quad (0 \\leq k \\leq N_{\\mathrm{fix}}) \\end{align}

Finally, we transform the objective function and constraints into the following QP problem and solve it.

\\begin{align} \\min_{\\boldsymbol{v}} \\ & \\frac{1}{2} \\boldsymbol{v}^T \\boldsymbol{H} \\boldsymbol{v} + \\boldsymbol{f}^T \\boldsymbol{v} \\\\ \\mathrm{s.t.} \\ & \\boldsymbol{b}_{lower} \\leq \\boldsymbol{A} \\boldsymbol{v} \\leq \\boldsymbol{b}_{upper} \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#constraints","title":"Constraints","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#steer-angle-limitation","title":"Steer angle limitation","text":"

The steer angle is limited to the range between \delta_{min} and \delta_{max}. Therefore, we add the following linear inequality constraints.

\\begin{align} \\delta_{min} \\leq \\delta_k \\leq \\delta_{max} \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#collision-free","title":"Collision free","text":"

To realize collision-free trajectory planning, we have to express the constraints that the vehicle stays inside the road and does not collide with obstacles as linear equations. For linearity, we approximate the vehicle shape with a set of circles, which is reliable and easy to implement.

Now we formulate the linear constraints so that a set of circles on each trajectory point is collision-free. Using the drivable area, we calculate upper and lower boundaries along the reference points, which can be interpolated at any position on the trajectory. Note that the upper and lower boundaries correspond to the left and right sides, respectively.

Assuming that the lower and upper boundaries are b_l and b_u respectively, and that r is the radius of a circle, the lateral deviation of the circle center y' has to satisfy

b_l + r \\leq y' \\leq b_u - r.

Based on the following figure, y' can be formulated as follows.

\\begin{align} y' & = L \\sin(\\theta + \\beta) + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\\\ & = L \\sin \\theta \\cos \\beta + L \\cos \\theta \\sin \\beta + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\\\ & \\approx L \\theta \\cos \\beta + L \\sin \\beta + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\end{align}

With the slack variable \lambda, the soft collision-free condition becomes

b_l + r - \\lambda \\leq y' \\leq b_u - r + \\lambda.

Since y' is linear in the state, it can be expressed with constant matrices C_1 and C_2 as

\\begin{align} y' & = C_1 \\boldsymbol{x} + C_2 \\\\ & = C_1 (B \\boldsymbol{v} + \\boldsymbol{w}) + C_2 \\\\ & = C_1 B \\boldsymbol{v} + C_1 \\boldsymbol{w} + C_2 \\end{align}

Note that the longitudinal positions of the circle center and of the trajectory point used to calculate the boundaries are different. However, since each boundary is perpendicular to the trajectory and the road boundaries do not change much longitudinally, the distortion caused by this longitudinal difference is small. By contrast, if the boundaries were not perpendicular to the trajectory and there were a certain longitudinal offset between the circle center and the trajectory point, much larger distortion would appear when comparing the lateral deviation with the boundaries.

Stacking these conditions over the reference points yields the following block matrices for the QP constraints.

\\begin{align} A_{blk} & = \\begin{pmatrix} C_1 B & O & \\dots & O & I_{N_{ref} \\times N_{ref}} & O \\dots & O\\\\ -C_1 B & O & \\dots & O & I & O \\dots & O\\\\ O & O & \\dots & O & I & O \\dots & O \\end{pmatrix} \\in \\boldsymbol{R}^{3 N_{ref} \\times (D_v + N_{circle} N_{ref})} \\\\ \\boldsymbol{b}_{lower, blk} & = \\begin{pmatrix} \\boldsymbol{b}_{lower} - C_1 \\boldsymbol{w} - C_2 \\\\ -\\boldsymbol{b}_{upper} + C_1 \\boldsymbol{w} + C_2 \\\\ O \\end{pmatrix} \\in \\boldsymbol{R}^{3 N_{ref}} \\\\ \\boldsymbol{b}_{upper, blk} & = \\boldsymbol{\\infty} \\in \\boldsymbol{R}^{3 N_{ref}} \\end{align}

We now explain the options for the optimization.

"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#l-infinity-optimization","title":"L-infinity optimization","text":"

The above formulation uses the L2 norm for the slack variables. Instead, the L-infinity norm, where a single slack variable is shared by all the circles, can be used by enabling l_inf_norm. The constraint matrix then becomes:

\\begin{align} A_{blk} = \\begin{pmatrix} C_1 B & I_{N_{ref} \\times N_{ref}} \\\\ -C_1 B & I \\\\ O & I \\end{pmatrix} \\in \\boldsymbol{R}^{3N_{ref} \\times (D_v + N_{ref})} \\end{align}"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#two-step-soft-constraints","title":"Two-step soft constraints","text":"In the two-step soft constraint formulation, two sets of slack variables \boldsymbol{\lambda}^{soft_1}, \boldsymbol{\lambda}^{soft_2} are introduced, and the design variable is extended as follows.

\\begin{align} \\boldsymbol{v}' = \\begin{pmatrix} \\boldsymbol{v} \\\\ \\boldsymbol{\\lambda}^{soft_1} \\\\ \\boldsymbol{\\lambda}^{soft_2} \\\\ \\end{pmatrix} \\in \\boldsymbol{R}^{D_v + 2N_{slack}} \\end{align}

In the constraint matrix below, * denotes the slack-variable blocks; their form depends on whether the L2 norm or the L-infinity norm is used.

\\begin{align} A_{blk} & = \\begin{pmatrix} A^{soft_1}_{blk} \\\\ A^{soft_2}_{blk} \\\\ \\end{pmatrix}\\\\ & = \\begin{pmatrix} C_1^{soft_1} B & & \\\\ -C_1^{soft_1} B & \\Huge{*} & \\Huge{O} \\\\ O & & \\\\ C_1^{soft_2} B & & \\\\ -C_1^{soft_2} B & \\Huge{O} & \\Huge{*} \\\\ O & & \\end{pmatrix} \\in \\boldsymbol{R}^{6 N_{ref} \\times (D_v + 2 N_{slack})} \\end{align}

N_{slack} is N_{circle} for L2 optimization, or 1 for L-infinity optimization, where N_{circle} is the number of circles used to check collision.
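
To make the resulting problem concrete, the following is a minimal, hypothetical Eigen sketch that assembles the QP data (H, \boldsymbol{f}, A, and the bounds) before handing them to a QP solver such as OSQP. It is illustrative only: the names are assumptions, the slack-variable blocks are omitted for brevity, and it does not reproduce the package's actual implementation.

#include <Eigen/Dense>\n\n// Hypothetical container for min 1/2 v^T H v + f^T v, s.t. lower <= A v <= upper.\nstruct QPData\n{\n  Eigen::MatrixXd H;\n  Eigen::VectorXd f;\n  Eigen::MatrixXd A;\n  Eigen::VectorXd lower;\n  Eigen::VectorXd upper;\n};\n\n// Assemble the QP from the lifted dynamics (B, w), the weights (Q, R), and the\n// linearized boundary condition y' = C1 B v + C1 w + C2 with b_l <= y' <= b_u\n// (circle radii assumed already folded into b_l and b_u; slack terms omitted).\nQPData assembleQP(\n  const Eigen::MatrixXd & B, const Eigen::VectorXd & w, const Eigen::MatrixXd & Q,\n  const Eigen::MatrixXd & R, const Eigen::MatrixXd & C1, const Eigen::VectorXd & C2,\n  const Eigen::VectorXd & b_l, const Eigen::VectorXd & b_u)\n{\n  QPData qp;\n  qp.H = B.transpose() * Q * B + R;    // H = B^T Q B + R\n  qp.f = 2.0 * B.transpose() * Q * w;  // f = 2 B^T Q w\n  qp.A = C1 * B;                       // maps the design variable to y'\n  qp.lower = b_l - C1 * w - C2;\n  qp.upper = b_u - C1 * w - C2;\n  return qp;\n}\n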

"},{"location":"planning/obstacle_cruise_planner/","title":"Obstacle Cruise Planner","text":""},{"location":"planning/obstacle_cruise_planner/#overview","title":"Overview","text":"

The obstacle_cruise_planner package has the following modules.

"},{"location":"planning/obstacle_cruise_planner/#interfaces","title":"Interfaces","text":""},{"location":"planning/obstacle_cruise_planner/#input-topics","title":"Input topics","text":"Name Type Description ~/input/trajectory autoware_auto_planning_msgs::Trajectory input trajectory ~/input/objects autoware_auto_perception_msgs::PredictedObjects dynamic objects ~/input/odometry nav_msgs::msg::Odometry ego odometry"},{"location":"planning/obstacle_cruise_planner/#output-topics","title":"Output topics","text":"Name Type Description ~/output/trajectory autoware_auto_planning_msgs::Trajectory output trajectory ~/output/velocity_limit tier4_planning_msgs::VelocityLimit velocity limit for cruising ~/output/clear_velocity_limit tier4_planning_msgs::VelocityLimitClearCommand clear command for velocity limit ~/output/stop_reasons tier4_planning_msgs::StopReasonArray reasons that make the vehicle to stop"},{"location":"planning/obstacle_cruise_planner/#design","title":"Design","text":"

Design for the following functions is defined here.

A data structure for cruise and stop planning is as follows. This planner data is created first, and then sent to the planning algorithm.

struct ObstacleCruisePlannerData\n{\nrclcpp::Time current_time;\nautoware_auto_planning_msgs::msg::Trajectory traj;\ngeometry_msgs::msg::Pose current_pose;\ndouble current_vel;\ndouble current_acc;\nstd::vector<TargetObstacle> target_obstacles;\n};\n
struct TargetObstacle\n{\nrclcpp::Time time_stamp;\ngeometry_msgs::msg::Pose pose;\nbool orientation_reliable;\ndouble velocity;\nbool velocity_reliable;\nObjectClassification classification;\nstd::string uuid;\nstd::vector<PredictedPath> predicted_paths;\nstd::vector<geometry_msgs::msg::PointStamped> collision_points;\nbool has_stopped;\n};\n
"},{"location":"planning/obstacle_cruise_planner/#obstacle-candidates-selection","title":"Obstacle candidates selection","text":"

In this function, target obstacles for stopping or cruising are selected based on their pose and velocity.

By default, objects that satisfy one of the following conditions are considered to be target obstacle candidates. Some terms will be defined in the following subsections.

Note that the obstacle candidate selection algorithm is currently designed for autonomous driving. However, we also provide the following parameters for stop and cruise respectively, so that the obstacle candidate selection can be extended to non-vehicle robots. By default, unknown objects and vehicles are obstacles for both cruise and stop, and non-vehicles are obstacles only for stop.

| Parameter | Type | Description |
| --- | --- | --- |
| cruise_obstacle_type.unknown | bool | flag to consider unknown objects as being cruised |
| cruise_obstacle_type.car | bool | flag to consider car objects as being cruised |
| cruise_obstacle_type.truck | bool | flag to consider truck objects as being cruised |
| ... | bool | ... |
| stop_obstacle_type.unknown | bool | flag to consider unknown objects as being stopped |
| ... | bool | ... |

"},{"location":"planning/obstacle_cruise_planner/#inside-the-detection-area","title":"Inside the detection area","text":"

To calculate obstacles inside the detection area, first, obstacles whose distance to the trajectory is less than rough_detection_area_expand_width are selected. Then, the detection area, which is the trajectory with some lateral margin, is calculated as shown in the figure. The detection area width is the vehicle's width + detection_area_expand_width, and it is represented as a polygon resampled with decimate_trajectory_step_length longitudinally. Among the roughly selected obstacles, those inside this polygon are considered to be inside the detection area.

This two-step detection is used for computational efficiency, since collision checking between polygons is expensive. Boost.Geometry is used as the library for checking collisions among polygons.
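
As a rough illustration of this two-step check, here is a hedged C++ sketch using Boost.Geometry; the function name, the rough-margin logic, and the use of the obstacle center are illustrative assumptions, not the package's actual code.

#include <boost/geometry.hpp>\n#include <boost/geometry/geometries/point_xy.hpp>\n#include <boost/geometry/geometries/polygon.hpp>\n\nnamespace bg = boost::geometry;\nusing Point2d = bg::model::d2::point_xy<double>;\nusing Polygon2d = bg::model::polygon<Point2d>;\n\n// Step 1: cheap distance-based rough filtering.\n// Step 2: exact (and expensive) polygon-vs-polygon collision check.\nbool isInsideDetectionArea(\n  const Polygon2d & detection_area, const Polygon2d & obstacle_polygon,\n  const Point2d & obstacle_center, double rough_margin)\n{\n  if (bg::distance(detection_area, obstacle_center) > rough_margin) {\n    return false;  // rejected by the rough check, exact check skipped\n  }\n  return bg::intersects(detection_area, obstacle_polygon);\n}\n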

In the obstacle_filtering namespace,

| Parameter | Type | Description |
| --- | --- | --- |
| rough_detection_area_expand_width | double | rough lateral margin for rough detection area expansion [m] |
| detection_area_expand_width | double | lateral margin for precise detection area expansion [m] |
| decimate_trajectory_step_length | double | longitudinal step length to calculate trajectory polygon for collision checking [m] |

"},{"location":"planning/obstacle_cruise_planner/#far-crossing-vehicles","title":"Far crossing vehicles","text":"

Near crossing vehicles (i.e., not far crossing vehicles) are defined as vehicle objects satisfying either of the following conditions.

Assuming t_1 is the time for the ego to reach the current crossing obstacle position at constant velocity, and t_2 is the time for the crossing obstacle to move outside the detection area, the crossing vehicle is ignored if the following condition holds.

t_1 - t_2 > \\mathrm{margin\\_for\\_collision\\_time}
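
A minimal sketch of this condition (the variable names are illustrative, and the small velocity floors are assumptions added to avoid division by zero):

#include <algorithm>\n\nbool shouldIgnoreCrossingObstacle(\n  double dist_ego_to_obstacle, double ego_velocity,\n  double dist_obstacle_to_area_edge, double obstacle_velocity,\n  double margin_for_collision_time)\n{\n  // t1: time for the ego to reach the obstacle position at constant velocity.\n  const double t1 = dist_ego_to_obstacle / std::max(ego_velocity, 1e-3);\n  // t2: time for the crossing obstacle to leave the detection area.\n  const double t2 = dist_obstacle_to_area_edge / std::max(obstacle_velocity, 1e-3);\n  return t1 - t2 > margin_for_collision_time;\n}\n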

In the obstacle_filtering namespace,

| Parameter | Type | Description |
| --- | --- | --- |
| crossing_obstacle_velocity_threshold | double | velocity threshold to decide crossing obstacle [m/s] |
| crossing_obstacle_traj_angle_threshold | double | yaw threshold of crossing obstacle against the nearest trajectory point [rad] |
| collision_time_margin | double | time threshold of collision between obstacle and ego [s] |

"},{"location":"planning/obstacle_cruise_planner/#near-cut-in-vehicles","title":"Near Cut-in vehicles","text":"

Near cut-in vehicles are defined as vehicle objects satisfying the following conditions.

In the obstacle_filtering namespace,

| Parameter | Type | Description |
| --- | --- | --- |
| ego_obstacle_overlap_time_threshold | double | time threshold to decide cut-in obstacle for cruise or stop [s] |
| max_prediction_time_for_collision_check | double | prediction time to check collision between obstacle and ego [s] |
| outside_obstacle_min_velocity_threshold | double | minimum velocity threshold of target obstacle for cut-in detection [m/s] |

"},{"location":"planning/obstacle_cruise_planner/#stop-planning","title":"Stop planning","text":"

| Parameter | Type | Description |
| --- | --- | --- |
| common.min_strong_accel | double | ego's minimum acceleration to stop [m/ss] |
| common.safe_distance_margin | double | distance with obstacles for stop [m] |
| common.terminal_safe_distance_margin | double | terminal distance with obstacles for stop, which cannot exceed the safe distance margin [m] |

The role of stop planning is to keep a safe distance from static vehicle objects and from dynamic or static non-vehicle objects.

Stop planning simply inserts a stop point into the trajectory to keep a distance from obstacles inside the detection area. The safe distance is parameterized as common.safe_distance_margin. When the ego stops at the end of the trajectory with an obstacle at the same point, the safe distance becomes terminal_safe_distance_margin.

When inserting the stop point, the acceleration required for the ego to stop in front of it is calculated. If this acceleration is less than common.min_strong_accel, the stop planning is canceled, since this package does not assume strong sudden braking for emergencies.
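
A minimal sketch of this cancel condition, assuming a constant-deceleration stop (the names are illustrative; min_strong_accel is negative, e.g. -5.0 m/s^2):

bool canInsertStopPoint(double ego_velocity, double dist_to_stop, double min_strong_accel)\n{\n  if (dist_to_stop <= 0.0) {\n    return false;  // the stop point is already behind the ego\n  }\n  // Constant deceleration needed to stop within dist_to_stop: a = -v^2 / (2 d).\n  const double required_accel = -(ego_velocity * ego_velocity) / (2.0 * dist_to_stop);\n  // Cancel (return false) when the required deceleration is stronger than allowed.\n  return required_accel > min_strong_accel;\n}\n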

"},{"location":"planning/obstacle_cruise_planner/#adaptive-cruise-planning","title":"Adaptive cruise planning","text":"Parameter Type Description common.safe_distance_margin double minimum distance with obstacles for cruise [m]

The role of adaptive cruise planning is to keep a safe distance from dynamic vehicle objects with a smooth velocity transition. This includes not only cruising behind a front vehicle, but also reacting to cut-in and cut-out vehicles.

The safe distance is calculated dynamically based on Responsibility-Sensitive Safety (RSS) with the following equation,

d_{rss} = v_{ego} t_{idling} + \\frac{1}{2} a_{ego} t_{idling}^2 + \\frac{v_{ego}^2}{2 a_{ego}} - \\frac{v_{obstacle}^2}{2 a_{obstacle}},

where d_{rss} is the calculated safe distance, t_{idling} is the idling time for the ego to detect the front vehicle's deceleration, v_{ego} is the ego's current velocity, v_{obstacle} is the front obstacle's current velocity, a_{ego} is the ego's acceleration, and a_{obstacle} is the obstacle's acceleration. These values are parameterized as follows. Other common values, such as the ego's minimum acceleration, are defined in common.param.yaml.

| Parameter | Type | Description |
| --- | --- | --- |
| common.idling_time | double | idling time for the ego to detect the front vehicle starting deceleration [s] |
| common.min_ego_accel_for_rss | double | ego's acceleration for RSS [m/ss] |
| common.min_object_accel_for_rss | double | front obstacle's acceleration for RSS [m/ss] |
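
A direct transcription of the RSS equation above into C++ (a sketch; a_{ego} and a_{obstacle} are treated here as deceleration magnitudes, and the argument names are illustrative):

double calcRssDistance(\n  double ego_vel, double obstacle_vel, double idling_time,\n  double ego_accel, double obstacle_accel)\n{\n  // d_rss = v_e t + 1/2 a_e t^2 + v_e^2 / (2 a_e) - v_o^2 / (2 a_o)\n  return ego_vel * idling_time + 0.5 * ego_accel * idling_time * idling_time +\n         (ego_vel * ego_vel) / (2.0 * ego_accel) -\n         (obstacle_vel * obstacle_vel) / (2.0 * obstacle_accel);\n}\n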

The detailed formulation is as follows.

d_{error} = d - d_{rss} \\\\ d_{normalized} = lpf(d_{error} / d_{obstacle}) \\\\ d_{quad, normalized} = sign(d_{normalized}) \\cdot d_{normalized}^2 \\\\ v_{pid} = pid(d_{quad, normalized}) \\\\ v_{add} = v_{pid} > 0 ? v_{pid} \\cdot w_{acc} : v_{pid} \\\\ v_{target} = max(v_{ego} + v_{add}, v_{min, cruise})

| Variable | Description |
| --- | --- |
| d | actual distance to obstacle |
| d_{rss} | ideal distance to obstacle based on RSS |
| v_{min, cruise} | min_cruise_target_vel |
| w_{acc} | output_ratio_during_accel |
| lpf(val) | apply low-pass filter to val |
| pid(val) | apply pid to val |

"},{"location":"planning/obstacle_cruise_planner/#implementation","title":"Implementation","text":""},{"location":"planning/obstacle_cruise_planner/#flowchart","title":"Flowchart","text":"

obstacle_cruise_planner consists of the following successive functions.

Various algorithms for stop and cruise planning will be implemented, and one of them is designated depending on the use case. The core algorithm implementation generateTrajectory depends on the designated algorithm.

"},{"location":"planning/obstacle_cruise_planner/#algorithm-selection","title":"Algorithm selection","text":"

Currently, only a PID-based planner is supported. Each planner will be explained in the following.

| Parameter | Type | Description |
| --- | --- | --- |
| common.planning_method | string | cruise and stop planning algorithm, selected from \"pid_base\" |

"},{"location":"planning/obstacle_cruise_planner/#pid-based-planner","title":"PID-based planner","text":""},{"location":"planning/obstacle_cruise_planner/#stop-planning_1","title":"Stop planning","text":"

In the pid_based_planner namespace,

| Parameter | Type | Description |
| --- | --- | --- |
| obstacle_velocity_threshold_from_cruise_to_stop | double | obstacle velocity threshold to transition from cruise to stop [m/s] |

Only one obstacle is targeted for stop planning: the obstacle among the candidates whose velocity is less than obstacle_velocity_threshold_from_cruise_to_stop and which is the nearest to the ego along the trajectory. A stop point is inserted keeping a common.safe_distance_margin distance between the ego and the obstacle.

Note that, as explained in the stop planning design, a stop planning which requires a strong acceleration (less than common.min_strong_accel) will be canceled.

"},{"location":"planning/obstacle_cruise_planner/#adaptive-cruise-planning_1","title":"Adaptive cruise planning","text":"

In the pid_based_planner namespace,

| Parameter | Type | Description |
| --- | --- | --- |
| kp | double | p gain for pid control [-] |
| ki | double | i gain for pid control [-] |
| kd | double | d gain for pid control [-] |
| output_ratio_during_accel | double | the output velocity is multiplied by this ratio during acceleration to follow the front vehicle [-] |
| vel_to_acc_weight | double | target acceleration is target velocity * vel_to_acc_weight [-] |
| min_cruise_target_vel | double | minimum target velocity during cruise [m/s] |

In order to keep the safe distance, the target velocity and acceleration are calculated and sent as an external velocity limit to the velocity smoothing package (motion_velocity_smoother by default). The target velocity and acceleration are calculated with the PID controller according to the error between the reference safe distance and the actual distance.
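
For illustration, the following hedged sketch strings the formulation above together (distance error, normalization, low-pass filter, signed quadratic, PID, and the acceleration-side ratio). The class and member names are assumptions, not the package's actual implementation.

#include <algorithm>\n\nclass CruiseVelocityPlanner\n{\npublic:\n  CruiseVelocityPlanner(\n    double kp, double ki, double kd, double lpf_gain,\n    double output_ratio_during_accel, double min_cruise_target_vel)\n  : kp_(kp), ki_(ki), kd_(kd), lpf_gain_(lpf_gain),\n    w_acc_(output_ratio_during_accel), v_min_cruise_(min_cruise_target_vel)\n  {\n  }\n\n  double calcTargetVelocity(\n    double dist, double dist_rss, double dist_to_obstacle, double ego_vel, double dt)\n  {\n    // d_error = d - d_rss, normalized by the distance to the obstacle.\n    const double error = (dist - dist_rss) / std::max(dist_to_obstacle, 1e-3);\n    // d_normalized = lpf(...): first-order low-pass filter.\n    lpf_state_ = lpf_gain_ * lpf_state_ + (1.0 - lpf_gain_) * error;\n    // d_quad_normalized = sign(d) * d^2.\n    const double quad = (lpf_state_ >= 0.0 ? 1.0 : -1.0) * lpf_state_ * lpf_state_;\n    // v_pid = pid(d_quad_normalized).\n    integral_ += quad * dt;\n    const double derivative = (quad - prev_quad_) / std::max(dt, 1e-3);\n    prev_quad_ = quad;\n    const double v_pid = kp_ * quad + ki_ * integral_ + kd_ * derivative;\n    // v_add is scaled by output_ratio_during_accel only on the acceleration side.\n    const double v_add = v_pid > 0.0 ? v_pid * w_acc_ : v_pid;\n    // v_target = max(v_ego + v_add, min_cruise_target_vel).\n    return std::max(ego_vel + v_add, v_min_cruise_);\n  }\n\nprivate:\n  double kp_, ki_, kd_, lpf_gain_, w_acc_, v_min_cruise_;\n  double lpf_state_{0.0}, integral_{0.0}, prev_quad_{0.0};\n};\n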

"},{"location":"planning/obstacle_cruise_planner/#optimization-based-planner","title":"Optimization-based planner","text":"

under construction

"},{"location":"planning/obstacle_cruise_planner/#minor-functions","title":"Minor functions","text":""},{"location":"planning/obstacle_cruise_planner/#prioritization-of-behavior-modules-stop-point","title":"Prioritization of behavior module's stop point","text":"

When stopping for a pedestrian walking on a crosswalk, the behavior module inserts zero velocity into the trajectory in front of the crosswalk. obstacle_cruise_planner's stop planning also works, and the ego may not reach the behavior module's stop point, since the safe distance defined in obstacle_cruise_planner may be longer than the behavior module's safe distance. To resolve this misalignment of stop points between the behavior module and obstacle_cruise_planner, common.min_behavior_stop_margin is defined. In the crosswalk case described above, obstacle_cruise_planner inserts the stop point with a distance of at least common.min_behavior_stop_margin between the ego and the obstacle.

| Parameter | Type | Description |
| --- | --- | --- |
| common.min_behavior_stop_margin | double | minimum stop margin when stopping with the behavior module enabled [m] |

"},{"location":"planning/obstacle_cruise_planner/#a-function-to-keep-the-closest-stop-obstacle-in-target-obstacles","title":"A function to keep the closest stop obstacle in target obstacles","text":"

In order to keep the closest stop obstacle in the target obstacles, we check in the checkConsistency function whether it has disappeared from the target obstacles. If the previous closest stop obstacle has been removed from the list, we keep it in the list for stop_obstacle_hold_time_threshold seconds. Note that if a new stop obstacle appears while the previous closest obstacle is removed from the list, the previous obstacle is not added back to the target obstacles.

| Parameter | Type | Description |
| --- | --- | --- |
| obstacle_filtering.stop_obstacle_hold_time_threshold | double | maximum time for holding the closest stop obstacle [s] |

"},{"location":"planning/obstacle_cruise_planner/#visualization-for-debugging","title":"Visualization for debugging","text":""},{"location":"planning/obstacle_cruise_planner/#detection-area","title":"Detection area","text":"

The green polygons showing the detection area are visualized by detection_polygons in the ~/debug/marker topic.

"},{"location":"planning/obstacle_cruise_planner/#collision-point","title":"Collision point","text":"

The red points showing the collision points with obstacles are visualized by collision_points in the ~/debug/marker topic.

"},{"location":"planning/obstacle_cruise_planner/#obstacle-for-cruise","title":"Obstacle for cruise","text":"

The yellow spheres showing the obstacles to cruise are visualized by obstacles_to_cruise in the ~/debug/marker topic.

"},{"location":"planning/obstacle_cruise_planner/#obstacle-for-stop","title":"Obstacle for stop","text":"

The red spheres showing the obstacles to stop are visualized by obstacles_to_stop in the ~/debug/marker topic.

"},{"location":"planning/obstacle_cruise_planner/#obstacle-cruise-wall","title":"Obstacle cruise wall","text":"

The yellow wall, which represents the safe cruising distance when the ego's front reaches it, is visualized in the ~/debug/cruise_wall_marker topic.

"},{"location":"planning/obstacle_cruise_planner/#obstacle-stop-wall","title":"Obstacle stop wall","text":"

The red wall, which represents the safe stopping distance when the ego's front reaches it, is visualized in the ~/debug/stop_wall_marker topic.

"},{"location":"planning/obstacle_cruise_planner/#known-limits","title":"Known Limits","text":""},{"location":"planning/obstacle_stop_planner/","title":"Obstacle Stop Planner","text":""},{"location":"planning/obstacle_stop_planner/#overview","title":"Overview","text":"

obstacle_stop_planner has the following modules.

"},{"location":"planning/obstacle_stop_planner/#input-topics","title":"Input topics","text":"Name Type Description ~/input/pointcloud sensor_msgs::PointCloud2 obstacle pointcloud ~/input/trajectory autoware_auto_planning_msgs::Trajectory trajectory ~/input/vector_map autoware_auto_mapping_msgs::HADMapBin vector map ~/input/odometry nav_msgs::Odometry vehicle velocity ~/input/dynamic_objects autoware_auto_perception_msgs::PredictedObjects dynamic objects ~/input/expand_stop_range tier4_planning_msgs::msg::ExpandStopRange expand stop range"},{"location":"planning/obstacle_stop_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/trajectory autoware_auto_planning_msgs::Trajectory trajectory to be followed ~output/stop_reasons tier4_planning_msgs::StopReasonArray reasons that cause the vehicle to stop"},{"location":"planning/obstacle_stop_planner/#common-parameter","title":"Common Parameter","text":"Parameter Type Description enable_slow_down bool enable slow down planner [-] max_velocity double max velocity [m/s] chattering_threshold double even if the obstacle disappears, the stop judgment continues for chattering_threshold [s] enable_z_axis_obstacle_filtering bool filter obstacles in z axis (height) [-] z_axis_filtering_buffer double additional buffer for z axis filtering [m]]"},{"location":"planning/obstacle_stop_planner/#obstacle-stop-planner_1","title":"Obstacle Stop Planner","text":""},{"location":"planning/obstacle_stop_planner/#role","title":"Role","text":"

This module inserts a stop point before the obstacle, with a margin. In the nominal case, the margin is the sum of baselink_to_front and max_longitudinal_margin. baselink_to_front is the distance between the baselink (the center of the rear-wheel axle) and the front of the car. The detection area is generated along the processed trajectory as in the following figure. (This module cuts off the input trajectory behind the ego position and decimates the trajectory points to reduce computational cost.)

parameters for obstacle stop planner

target for obstacle stop planner

If another stop point has already been inserted by other modules within max_longitudinal_margin, the margin is the sum of baselink_to_front and min_longitudinal_margin. This feature exists to avoid stopping at an unnatural position. (For example, without this feature the ego would stop unnaturally far from the stop line of a crosswalk that pedestrians are crossing.)

minimum longitudinal margin

The module searches for the obstacle pointcloud within the detection area. When a pointcloud is found, the Adaptive Cruise Controller module starts to work. Only when the Adaptive Cruise Controller module does not insert a target velocity is the stop point inserted into the trajectory. The stop point is the point with zero velocity.

"},{"location":"planning/obstacle_stop_planner/#restart-prevention","title":"Restart prevention","text":"

If the vehicle needs X meters (e.g., 0.5 meters) to stop once it starts moving, due to poor vehicle control performance, it will overrun the stopping position, which must be strictly observed, when it starts moving to approach a nearby stop point (e.g., 0.3 meters away).

This module has the parameter hold_stop_margin_distance to prevent such redundant restarts. If the vehicle is stopped within hold_stop_margin_distance meters of the module's stop point, the module judges that the vehicle has already stopped for its stop point and plans to keep stopping at the current position, even if the vehicle stopped due to other factors.
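
A minimal sketch of this restart-prevention check (the function name, the stopped-velocity threshold, and the use of a signed arc length are assumptions for illustration):

#include <cmath>\n\nbool shouldHoldCurrentStopPosition(\n  double ego_velocity, double signed_arc_to_stop_point, double hold_stop_margin_distance)\n{\n  // The ego counts as stopped below a small velocity threshold (assumption).\n  const bool is_stopped = std::abs(ego_velocity) < 1e-3;\n  // Hold the current position when already stopped within the margin of the\n  // module's stop point, instead of creeping forward to the exact point.\n  const bool is_within_margin =\n    0.0 <= signed_arc_to_stop_point && signed_arc_to_stop_point < hold_stop_margin_distance;\n  return is_stopped && is_within_margin;\n}\n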

parameters

outside the hold_stop_margin_distance

inside the hold_stop_margin_distance"},{"location":"planning/obstacle_stop_planner/#parameters","title":"Parameters","text":""},{"location":"planning/obstacle_stop_planner/#stop-position","title":"Stop position","text":"

| Parameter | Type | Description |
| --- | --- | --- |
| max_longitudinal_margin | double | margin between obstacle and the ego's front [m] |
| max_longitudinal_margin_behind_goal | double | margin between obstacle and the ego's front when the stop point is behind the goal [m] |
| min_longitudinal_margin | double | if any obstacle exists within max_longitudinal_margin, this module sets the stop margin to min_longitudinal_margin [m] |
| hold_stop_margin_distance | double | parameter for restart prevention (see the section above) [m] |

"},{"location":"planning/obstacle_stop_planner/#obstacle-detection-area","title":"Obstacle detection area","text":"

| Parameter | Type | Description |
| --- | --- | --- |
| lateral_margin | double | lateral margin from the vehicle footprint for the collision obstacle detection area [m] |
| step_length | double | step length for pointcloud search range [m] |
| enable_stop_behind_goal_for_obstacle | bool | enable extending the trajectory past the goal lane for obstacle detection |

"},{"location":"planning/obstacle_stop_planner/#flowchart","title":"Flowchart","text":""},{"location":"planning/obstacle_stop_planner/#slow-down-planner","title":"Slow Down Planner","text":""},{"location":"planning/obstacle_stop_planner/#role_1","title":"Role","text":"

This module inserts a slow down section before the obstacle, with a forward margin and a backward margin. The forward margin is the sum of baselink_to_front and longitudinal_forward_margin, and the backward margin is the sum of baselink_to_front and longitudinal_backward_margin. The ego keeps the slow down velocity within the slow down section. The velocity is calculated by the following equation.

v_{target} = v_{min} + \\frac{l_{ld} - l_{vw}/2}{l_{margin}} (v_{max} - v_{min} )

The above equation means that the smaller the lateral deviation of the pointcloud, the lower the velocity of the slow down section.
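
A minimal transcription of the equation (the argument names map to the symbols: lateral_deviation = l_{ld}, vehicle_width = l_{vw}, lateral_margin = l_{margin}; the clamp is an assumption added to keep the result within the configured range):

#include <algorithm>\n\ndouble calcSlowDownVelocity(\n  double lateral_deviation, double vehicle_width, double lateral_margin,\n  double min_slow_down_velocity, double max_slow_down_velocity)\n{\n  // v_target = v_min + (l_ld - l_vw / 2) / l_margin * (v_max - v_min)\n  const double ratio = (lateral_deviation - vehicle_width / 2.0) / lateral_margin;\n  const double v = min_slow_down_velocity +\n    ratio * (max_slow_down_velocity - min_slow_down_velocity);\n  return std::clamp(v, min_slow_down_velocity, max_slow_down_velocity);\n}\n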

parameters for slow down planner

target for slow down planner"},{"location":"planning/obstacle_stop_planner/#parameters_1","title":"Parameters","text":""},{"location":"planning/obstacle_stop_planner/#slow-down-section","title":"Slow down section","text":"

| Parameter | Type | Description |
| --- | --- | --- |
| longitudinal_forward_margin | double | margin between obstacle and the ego's front [m] |
| longitudinal_backward_margin | double | margin between obstacle and the ego's rear [m] |

"},{"location":"planning/obstacle_stop_planner/#obstacle-detection-area_1","title":"Obstacle detection area","text":"

| Parameter | Type | Description |
| --- | --- | --- |
| lateral_margin | double | lateral margin from the vehicle footprint for the slow down obstacle detection area [m] |

"},{"location":"planning/obstacle_stop_planner/#slow-down-target-velocity","title":"Slow down target velocity","text":"

| Parameter | Type | Description |
| --- | --- | --- |
| max_slow_down_velocity | double | max slow down velocity [m/s] |
| min_slow_down_velocity | double | min slow down velocity [m/s] |

"},{"location":"planning/obstacle_stop_planner/#flowchart_1","title":"Flowchart","text":""},{"location":"planning/obstacle_stop_planner/#adaptive-cruise-controller","title":"Adaptive Cruise Controller","text":""},{"location":"planning/obstacle_stop_planner/#role_2","title":"Role","text":"

The Adaptive Cruise Controller module embeds a maximum velocity in the trajectory when there is a dynamic pointcloud on the trajectory. The value of the maximum velocity depends on the ego velocity, the velocity of the pointcloud (i.e., the velocity of the front car), and the distance to the pointcloud (i.e., the distance to the front car).

| Parameter | Type | Description |
| --- | --- | --- |
| adaptive_cruise_control.use_object_to_estimate_vel | bool | use dynamic objects for estimating object velocity or not |
| adaptive_cruise_control.use_pcl_to_estimate_vel | bool | use raw pointclouds for estimating object velocity or not |
| adaptive_cruise_control.consider_obj_velocity | bool | consider forward vehicle velocity to calculate target velocity in adaptive cruise or not |
| adaptive_cruise_control.obstacle_velocity_thresh_to_start_acc | double | start adaptive cruise control when the velocity of the forward obstacle exceeds this value [m/s] |
| adaptive_cruise_control.obstacle_velocity_thresh_to_stop_acc | double | stop acc when the velocity of the forward obstacle falls below this value [m/s] |
| adaptive_cruise_control.emergency_stop_acceleration | double | supposed minimum acceleration (deceleration) in emergency stop [m/ss] |
| adaptive_cruise_control.emergency_stop_idling_time | double | supposed idling time to start emergency stop [s] |
| adaptive_cruise_control.min_dist_stop | double | minimum distance of emergency stop [m] |
| adaptive_cruise_control.obstacle_emergency_stop_acceleration | double | supposed minimum acceleration (deceleration) in emergency stop [m/ss] |
| adaptive_cruise_control.max_standard_acceleration | double | supposed maximum acceleration in active cruise control [m/ss] |
| adaptive_cruise_control.min_standard_acceleration | double | supposed minimum acceleration (deceleration) in active cruise control [m/ss] |
| adaptive_cruise_control.standard_idling_time | double | supposed idling time to react to the object in active cruise control [s] |
| adaptive_cruise_control.min_dist_standard | double | minimum distance in active cruise control [m] |
| adaptive_cruise_control.obstacle_min_standard_acceleration | double | supposed minimum acceleration of the forward obstacle [m/ss] |
| adaptive_cruise_control.margin_rate_to_change_vel | double | rate of margin distance to insert target velocity [-] |
| adaptive_cruise_control.use_time_compensation_to_calc_distance | bool | use time-compensation to calculate distance to forward vehicle |
| adaptive_cruise_control.p_coefficient_positive | double | coefficient P in PID control (used when target dist - current_dist >= 0) [-] |
| adaptive_cruise_control.p_coefficient_negative | double | coefficient P in PID control (used when target dist - current_dist < 0) [-] |
| adaptive_cruise_control.d_coefficient_positive | double | coefficient D in PID control (used when delta_dist >= 0) [-] |
| adaptive_cruise_control.d_coefficient_negative | double | coefficient D in PID control (used when delta_dist < 0) [-] |
| adaptive_cruise_control.object_polygon_length_margin | double | the distance to extend the polygon length of the object in pointcloud-object matching [m] |
| adaptive_cruise_control.object_polygon_width_margin | double | the distance to extend the polygon width of the object in pointcloud-object matching [m] |
| adaptive_cruise_control.valid_estimated_vel_diff_time | double | maximum time difference treated as continuous points in speed estimation using a point cloud [s] |
| adaptive_cruise_control.valid_vel_que_time | double | time width of information used for speed estimation using a point cloud [s] |
| adaptive_cruise_control.valid_estimated_vel_max | double | maximum value of valid speed estimation results in speed estimation using a point cloud [m/s] |
| adaptive_cruise_control.valid_estimated_vel_min | double | minimum value of valid speed estimation results in speed estimation using a point cloud [m/s] |
| adaptive_cruise_control.thresh_vel_to_stop | double | embed a stop line if the maximum speed calculated by ACC is lower than this speed [m/s] |
| adaptive_cruise_control.lowpass_gain_of_upper_velocity | double | lowpass gain of target velocity |
| adaptive_cruise_control.use_rough_velocity_estimation | bool | use the rough estimated velocity if velocity estimation fails |
| adaptive_cruise_control.rough_velocity_rate | double | in the rough velocity estimation, the velocity of the front car is estimated as the ego's current velocity multiplied by this value |

"},{"location":"planning/obstacle_stop_planner/#flowchart_2","title":"Flowchart","text":"

(*1) The target vehicle point is calculated as the obstacle pointcloud point closest to the ego along the trajectory.

(*2) The sources of velocity estimation can be changed by the following ROS parameters.

This module works only when the target point is found in the detection area of the Obstacle stop planner module.

The first process of this module is to estimate the velocity of the target vehicle point. The velocity estimation uses the velocity information of dynamic objects or the travel distance of the target vehicle point from the previous step. The dynamic object information is primary, and the travel-distance estimation is used as a backup in case of perception failure. If the target vehicle point is geometrically contained in the bounding box of a dynamic object, the velocity of the dynamic object is used as the target point velocity. Otherwise, the target point velocity is calculated from the travel distance of the target point from the previous step, that is, (current_position - previous_position) / dt. Note that this travel-distance-based estimation fails when the target point is detected for the first time (this mainly happens in cut-in situations). To improve the stability of the estimation, the median of the calculation results over several steps is used.

If the calculated velocity is within the threshold range, it is used as the target point velocity.

Only when the estimation succeeds and the estimated velocity exceeds the value of obstacle_velocity_thresh_to_*_acc is the distance from the self-position to the pointcloud calculated. To prevent chattering in the mode transition, obstacle_velocity_thresh_to_start_acc is used as the threshold to start adaptive cruise, and obstacle_velocity_thresh_to_stop_acc as the threshold to stop it. When the calculated distance exceeds the emergency distance d_{emergency} calculated from the emergency_stop parameters, the target velocity to insert is calculated.

The emergency distance d_{emergency} is calculated as follows.

d_{emergency} = d_{margin_{emergency}} + t_{idling_{emergency}} \\cdot v_{ego} + (-\\frac{v_{ego}^2}{2 \\cdot a_{ego_{emergency}}}) - (-\\frac{v_{obj}^2}{2 \\cdot a_{obj_{emergency}}})

The target velocity is determined so as to keep the distance from the own vehicle to the obstacle pointcloud at the standard distance d_{standard}, calculated as follows. Therefore, if the distance to the obstacle pointcloud is longer than the standard distance, the target velocity becomes higher than the current velocity, and vice versa. A PID controller is used to keep the distance.

d_{standard} = d_{margin_{standard}} + t_{idling_{standard}} \\cdot v_{ego} + (-\\frac{v_{ego}^2}{2 \\cdot a_{ego_{standard}}}) - (-\\frac{v_{obj}^2}{2 \\cdot a_{obj_{standard}}})
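Both distances share the same form, so a single hypothetical helper can compute either one by passing the emergency_stop or the standard parameter set (the decelerations a_{ego} and a_{obj} are negative values here, matching the minus signs in the equations; the names are illustrative):

double calcThresholdDistance(\n  double margin, double idling_time, double ego_vel, double obj_vel,\n  double a_ego, double a_obj)\n{\n  // d = d_margin + t_idling * v_ego - v_ego^2 / (2 a_ego) + v_obj^2 / (2 a_obj)\n  return margin + idling_time * ego_vel -\n         (ego_vel * ego_vel) / (2.0 * a_ego) +\n         (obj_vel * obj_vel) / (2.0 * a_obj);\n}\n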

If the target velocity exceeds the value of thresh_vel_to_stop, the target velocity is embedded in the trajectory.

"},{"location":"planning/obstacle_stop_planner/#known-limits","title":"Known Limits","text":" "},{"location":"planning/obstacle_velocity_limiter/","title":"Obstacle Velocity Limiter","text":""},{"location":"planning/obstacle_velocity_limiter/#purpose","title":"Purpose","text":"

This node limits the velocity when driving in the direction of an obstacle. For example, it allows reducing the velocity when driving close to a guard rail in a curve.

Without this node With this node"},{"location":"planning/obstacle_velocity_limiter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Using a parameter min_ttc (minimum time to collision), the node sets velocity limits such that no collision with an obstacle would occur, even without new control inputs for a duration of min_ttc.

To achieve this, the motion of the ego vehicle is simulated forward in time at each point of the trajectory to create a corresponding footprint. If the footprint collides with some obstacle, the velocity at the trajectory point is reduced such that the new simulated footprint does not have any collision.

"},{"location":"planning/obstacle_velocity_limiter/#simulated-motion-footprint-and-collision-distance","title":"Simulated Motion, Footprint, and Collision Distance","text":"

The motion of the ego vehicle is simulated at each trajectory point using the heading, velocity, and steering defined at the point. Footprints are then constructed from these simulations and checked for collision. If a collision is found, the distance from the trajectory point is used to calculate the adjusted velocity that would produce a collision-free footprint. The parameter simulation.distance_method allows switching between an exact distance calculation and a less expensive approximation using a simple Euclidean distance.

Two models can be selected with the parameter simulation.model for simulating the motion of the vehicle: a simple particle model and a more complex bicycle model.

"},{"location":"planning/obstacle_velocity_limiter/#particle-model","title":"Particle Model","text":"

The particle model uses the constant heading and velocity of the vehicle at a trajectory point to simulate the future motion. The simulated forward motion corresponds to a straight line and the footprint to a rectangle.

"},{"location":"planning/obstacle_velocity_limiter/#footprint","title":"Footprint","text":"

The rectangle footprint is built from 2 lines parallel to the simulated forward motion and at a distance of half the vehicle width.

"},{"location":"planning/obstacle_velocity_limiter/#distance","title":"Distance","text":"

When a collision point is found within the footprint, the distance is calculated as described in the following figure.

"},{"location":"planning/obstacle_velocity_limiter/#bicycle-model","title":"Bicycle Model","text":"

The bicycle model uses the constant heading, velocity, and steering of the vehicle at a trajectory point to simulate the future motion. The simulated forward motion corresponds to an arc around the circle of curvature associated with the steering. Uncertainty in the steering can be introduced with the simulation.steering_offset parameter which will generate a range of motion from a left-most to a right-most steering. This results in 3 curved lines starting from the same trajectory point. A parameter simulation.nb_points is used to adjust the precision of these lines, with a minimum of 2 resulting in straight lines and higher values increasing the precision of the curves.

By default, the steering values contained in the trajectory message are used. The parameter trajectory_preprocessing.calculate_steering_angles allows recalculating these values when set to true.

"},{"location":"planning/obstacle_velocity_limiter/#footprint_1","title":"Footprint","text":"

The footprint of the bicycle model is created from lines parallel to the left and right simulated motion at a distance of half the vehicle width. In addition, the two points on the left and right of the end point of the central simulated motion are used to complete the polygon.

"},{"location":"planning/obstacle_velocity_limiter/#distance_1","title":"Distance","text":"

The distance to a collision point is calculated by finding the curvature circle passing through the trajectory point and the collision point.

"},{"location":"planning/obstacle_velocity_limiter/#obstacle-detection","title":"Obstacle Detection","text":"

Obstacles are represented as points or linestrings (i.e., sequence of points) around the obstacles and are constructed from an occupancy grid, a pointcloud, or the lanelet map. The lanelet map is always checked for obstacles but the other source is switched using parameter obstacles.dynamic_source.

To efficiently find obstacles intersecting a footprint, they are stored in an R-tree. Two trees are used: one for the obstacle points, and one for the obstacle linestrings (which are decomposed into segments to simplify the R-tree).

"},{"location":"planning/obstacle_velocity_limiter/#obstacle-masks","title":"Obstacle masks","text":""},{"location":"planning/obstacle_velocity_limiter/#dynamic-obstacles","title":"Dynamic obstacles","text":"

Moving obstacles such as other cars should not be considered by this module. These obstacles are detected by the perception modules and represented as polygons. Obstacles inside these polygons are ignored.

Only dynamic obstacles with a velocity above parameter obstacles.dynamic_obstacles_min_vel are removed.

To deal with delays and precision errors, the polygons can be enlarged with parameter obstacles.dynamic_obstacles_buffer.

"},{"location":"planning/obstacle_velocity_limiter/#obstacles-outside-of-the-safety-envelope","title":"Obstacles outside of the safety envelope","text":"

Obstacles that are not inside any forward simulated footprint are ignored if parameter obstacles.filter_envelope is set to true. The safety envelope polygon is built from all the footprints and used as a positive mask on the occupancy grid or pointcloud.

This option can reduce the total number of obstacles, which reduces the cost of collision detection. However, the cost of masking the envelope is usually too high for this to be worthwhile.

"},{"location":"planning/obstacle_velocity_limiter/#obstacles-on-the-ego-path","title":"Obstacles on the ego path","text":"

If parameter obstacles.ignore_obstacles_on_path is set to true, a polygon mask is built from the trajectory and the vehicle dimension. Any obstacle in this polygon is ignored.

The size of the polygon can be increased using parameter obstacles.ignore_extra_distance which is added to the vehicle lateral offset.

This option is somewhat expensive and should only be used with noisy dynamic obstacles, where obstacles are wrongly detected on the ego path and cause unwanted velocity limits.

"},{"location":"planning/obstacle_velocity_limiter/#lanelet-map","title":"Lanelet Map","text":"

Information about static obstacles can be stored in the Lanelet map using the value of the type tag of linestrings. If a linestring has a type with one of the values from the parameter obstacles.static_map_tags, it is used as an obstacle.

Obstacles from the lanelet map are not impacted by the masks.

"},{"location":"planning/obstacle_velocity_limiter/#occupancy-grid","title":"Occupancy Grid","text":"

Masking is performed by iterating through the cells inside each polygon mask using the grid_map_utils::PolygonIterator function. A threshold is then applied to keep only cells with an occupancy value above the parameter obstacles.occupancy_grid_threshold. Finally, the grid is converted to an image and obstacle linestrings are extracted with the OpenCV function findContours.

"},{"location":"planning/obstacle_velocity_limiter/#pointcloud","title":"Pointcloud","text":"

Masking is performed using the pcl::CropHull function. Points from the pointcloud are then directly used as obstacles.

"},{"location":"planning/obstacle_velocity_limiter/#velocity-adjustment","title":"Velocity Adjustment","text":"

If a collision is found, the velocity at the trajectory point is adjusted such that the resulting footprint would no longer collide with an obstacle: velocity = \\frac{dist\\_to\\_collision}{min\\_ttc}

To prevent sudden deceleration of the ego vehicle, the parameter max_deceleration limits the deceleration relative to the current ego velocity. For a trajectory point occurring at a duration t in the future (calculated from the original velocity profile), the adjusted velocity cannot be set lower than v_{current} - t * max\\_deceleration.

Furthermore, a parameter min_adjusted_velocity provides a lower bound on the modified velocity.
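
The following sketch combines the three rules above: the min_ttc limit, the max_deceleration bound relative to the current velocity, and the min_adjusted_velocity lower bound (the function and argument names are assumptions for illustration):

#include <algorithm>\n\ndouble calcAdjustedVelocity(\n  double dist_to_collision, double min_ttc, double current_velocity,\n  double time_from_now, double max_deceleration, double min_adjusted_velocity)\n{\n  // velocity = dist_to_collision / min_ttc when a collision is found.\n  const double ttc_limited = dist_to_collision / min_ttc;\n  // The adjusted velocity cannot be lower than v_current - t * max_deceleration.\n  const double decel_limited = current_velocity - time_from_now * max_deceleration;\n  // min_adjusted_velocity is the final lower bound.\n  return std::max({ttc_limited, decel_limited, min_adjusted_velocity});\n}\n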

"},{"location":"planning/obstacle_velocity_limiter/#trajectory-preprocessing","title":"Trajectory preprocessing","text":"

The node only modifies part of the input trajectory, starting from the current ego position. Parameter trajectory_preprocessing.start_distance is used to adjust how far ahead of the ego position the velocities will start being modified. Parameters trajectory_preprocessing.max_length and trajectory_preprocessing.max_duration are used to control how much of the trajectory will see its velocity adjusted.

To reduce computation cost at the cost of precision, the trajectory can be downsampled using parameter trajectory_preprocessing.downsample_factor. For example a value of 1 means all trajectory points will be evaluated while a value of 10 means only 1/10th of the points will be evaluated.

"},{"location":"planning/obstacle_velocity_limiter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/obstacle_velocity_limiter/#inputs","title":"Inputs","text":"Name Type Description ~/input/trajectory autoware_auto_planning_msgs/Trajectory Reference trajectory ~/input/occupancy_grid nav_msgs/OccupancyGrid Occupancy grid with obstacle information ~/input/obstacle_pointcloud sensor_msgs/PointCloud2 Pointcloud containing only obstacle points ~/input/dynamic_obstacles autoware_auto_perception_msgs/PredictedObjects Dynamic objects ~/input/odometry nav_msgs/Odometry Odometry used to retrieve the current ego velocity ~/input/map autoware_auto_mapping_msgs/HADMapBin Vector map used to retrieve static obstacles"},{"location":"planning/obstacle_velocity_limiter/#outputs","title":"Outputs","text":"Name Type Description ~/output/trajectory autoware_auto_planning_msgs/Trajectory Trajectory with adjusted velocities ~/output/debug_markers visualization_msgs/MarkerArray Debug markers (envelopes, obstacle polygons) ~/output/runtime_microseconds std_msgs/Int64 Time taken to calculate the trajectory (in microseconds)"},{"location":"planning/obstacle_velocity_limiter/#parameters","title":"Parameters","text":"Name Type Description min_ttc float [s] required minimum time with no collision at each point of the trajectory assuming constant heading and velocity. distance_buffer float [m] required distance buffer with the obstacles. min_adjusted_velocity float [m/s] minimum adjusted velocity this node can set. max_deceleration float [m/s\u00b2] maximum deceleration an adjusted velocity can cause. trajectory_preprocessing.start_distance float [m] controls from which part of the trajectory (relative to the current ego pose) the velocity is adjusted. trajectory_preprocessing.max_length float [m] controls the maximum length (starting from the start_distance) where the velocity is adjusted. trajectory_preprocessing.max_distance float [s] controls the maximum duration (measured from the start_distance) where the velocity is adjusted. trajectory_preprocessing.downsample_factor int trajectory downsampling factor to allow tradeoff between precision and performance. trajectory_preprocessing.calculate_steering_angle bool if true, the steering angles of the trajectory message are not used but are recalculated. simulation.model string model to use for forward simulation. Either \"particle\" or \"bicycle\". simulation.distance_method string method to use for calculating distance to collision. Either \"exact\" or \"approximation\". simulation.steering_offset float offset around the steering used by the bicycle model. simulation.nb_points int number of points used to simulate motion with the bicycle model. obstacles.dynamic_source string source of dynamic obstacle used for collision checking. Can be \"occupancy_grid\", \"point_cloud\", or \"static_only\" (no dynamic obstacle). obstacles.occupancy_grid_threshold int value in the occupancy grid above which a cell is considered an obstacle. obstacles.dynamic_obstacles_buffer float buffer around dynamic obstacles used when masking an obstacle in order to prevent noise. obstacles.dynamic_obstacles_min_vel float velocity above which to mask a dynamic obstacle. obstacles.static_map_tags string list linestring of the lanelet map with this tags are used as obstacles. 
obstacles.filter_envelope bool wether to use the safety envelope to filter the dynamic obstacles source."},{"location":"planning/obstacle_velocity_limiter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

The velocity profile produced by this node is not meant to be a realistic velocity profile and can contain sudden jumps of velocity with no regard for acceleration and jerk. This velocity profile is meant to be used as an upper bound on the actual velocity of the vehicle.

"},{"location":"planning/obstacle_velocity_limiter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":"

The critical case for this node is when an obstacle is falsely detected very close to the trajectory, such that the corresponding velocity suddenly becomes very low. This can cause sudden braking; two mechanisms can be used to mitigate such errors.

The parameter min_adjusted_velocity allows setting a minimum for the adjusted velocity, preventing the node from slowing the vehicle down too much. The parameter max_deceleration allows setting the maximum deceleration (relative to the current ego velocity) that the adjusted velocity can incur.

"},{"location":"planning/obstacle_velocity_limiter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/obstacle_velocity_limiter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"planning/obstacle_velocity_limiter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/planning_debug_tools/Readme/","title":"Planning Debug Tools","text":"

This package contains several planning-related debug tools.

"},{"location":"planning/planning_debug_tools/Readme/#trajectory-analyzer","title":"Trajectory analyzer","text":"

The trajectory_analyzer visualizes information (speed, curvature, yaw, etc.) along the trajectory. This feature is helpful for purposes such as investigating why the vehicle decelerates at a certain point. It employs the OSS tool PlotJuggler.

"},{"location":"planning/planning_debug_tools/Readme/#stop-reason-visualizer","title":"Stop reason visualizer","text":"

This is to visualize the stop factor and stop reason. See the details.

"},{"location":"planning/planning_debug_tools/Readme/#how-to-use","title":"How to use","text":"

Please launch the analyzer node:

ros2 launch planning_debug_tools trajectory_analyzer.launch.xml\n

and visualize the analyzed data in PlotJuggler as described below.

"},{"location":"planning/planning_debug_tools/Readme/#setup-plotjuggler","title":"setup PlotJuggler","text":"

For the first time, please add the following code to the reactive script and save it as shown in the picture below. (We are still looking for a way to automatically load the configuration file.)

You can customize what you plot by editing this code.

in Global code

behavior_path = '/planning/scenario_planning/lane_driving/behavior_planning/path_with_lane_id/debug_info'\nbehavior_velocity = '/planning/scenario_planning/lane_driving/behavior_planning/path/debug_info'\nmotion_avoid = '/planning/scenario_planning/lane_driving/motion_planning/obstacle_avoidance_planner/trajectory/debug_info'\nmotion_smoother_latacc = '/planning/scenario_planning/motion_velocity_smoother/debug/trajectory_lateral_acc_filtered/debug_info'\nmotion_smoother = '/planning/scenario_planning/trajectory/debug_info'\n

in function(tracker_time)

PlotCurvatureOverArclength('k_behavior_path', behavior_path, tracker_time)\nPlotCurvatureOverArclength('k_behavior_velocity', behavior_velocity, tracker_time)\nPlotCurvatureOverArclength('k_motion_avoid', motion_avoid, tracker_time)\nPlotCurvatureOverArclength('k_motion_smoother', motion_smoother, tracker_time)\n\nPlotVelocityOverArclength('v_behavior_path', behavior_path, tracker_time)\nPlotVelocityOverArclength('v_behavior_velocity', behavior_velocity, tracker_time)\nPlotVelocityOverArclength('v_motion_avoid', motion_avoid, tracker_time)\nPlotVelocityOverArclength('v_motion_smoother_latacc', motion_smoother_latacc, tracker_time)\nPlotVelocityOverArclength('v_motion_smoother', motion_smoother, tracker_time)\n\nPlotAccelerationOverArclength('a_behavior_path', behavior_path, tracker_time)\nPlotAccelerationOverArclength('a_behavior_velocity', behavior_velocity, tracker_time)\nPlotAccelerationOverArclength('a_motion_avoid', motion_avoid, tracker_time)\nPlotAccelerationOverArclength('a_motion_smoother_latacc', motion_smoother_latacc, tracker_time)\nPlotAccelerationOverArclength('a_motion_smoother', motion_smoother, tracker_time)\n\nPlotYawOverArclength('yaw_behavior_path', behavior_path, tracker_time)\nPlotYawOverArclength('yaw_behavior_velocity', behavior_velocity, tracker_time)\nPlotYawOverArclength('yaw_motion_avoid', motion_avoid, tracker_time)\nPlotYawOverArclength('yaw_motion_smoother_latacc', motion_smoother_latacc, tracker_time)\nPlotYawOverArclength('yaw_motion_smoother', motion_smoother, tracker_time)\n\nPlotCurrentVelocity('localization_kinematic_state', '/localization/kinematic_state', tracker_time)\n

in Function Library

function PlotValue(name, path, timestamp, value)\n  new_series = ScatterXY.new(name)\n  index = 0\n  while(true) do\n    series_k = TimeseriesView.find( string.format( \"%s/\"..value..\".%d\", path, index) )\n    series_s = TimeseriesView.find( string.format( \"%s/arclength.%d\", path, index) )\n    series_size = TimeseriesView.find( string.format( \"%s/size\", path) )\n\n    if series_k == nil or series_s == nil then break end\n\n    k = series_k:atTime(timestamp)\n    s = series_s:atTime(timestamp)\n    size = series_size:atTime(timestamp)\n\n    if index >= size then break end\n\n    new_series:push_back(s,k)\n    index = index+1\n  end\nend\n\nfunction PlotCurvatureOverArclength(name, path, timestamp)\n  PlotValue(name, path, timestamp,\"curvature\")\nend\n\nfunction PlotVelocityOverArclength(name, path, timestamp)\n  PlotValue(name, path, timestamp,\"velocity\")\nend\n\nfunction PlotAccelerationOverArclength(name, path, timestamp)\n  PlotValue(name, path, timestamp,\"acceleration\")\nend\n\nfunction PlotYawOverArclength(name, path, timestamp)\n  PlotValue(name, path, timestamp,\"yaw\")\nend\n\nfunction PlotCurrentVelocity(name, kinematics_name, timestamp)\n  new_series = ScatterXY.new(name)\n  series_v = TimeseriesView.find( string.format( \"%s/twist/twist/linear/x\", kinematics_name))\n  if series_v == nil then\n    print(\"error\")\n    return\n  end\n  v = series_v:atTime(timestamp)\n  new_series:push_back(0.0, v)\nend\n

Then, run PlotJuggler.

"},{"location":"planning/planning_debug_tools/Readme/#how-to-customize-the-plot","title":"How to customize the plot","text":"

Add the Path/PathWithLaneIds/Trajectory topics you want to plot in trajectory_analyzer.launch.xml; the analyzed topics for these messages will then be published with the TrajectoryDebugInfo.msg type. You can then visualize these data by editing the reactive script in PlotJuggler.

"},{"location":"planning/planning_debug_tools/Readme/#requirements","title":"Requirements","text":"

The version of PlotJuggler must be > 3.5.0.

"},{"location":"planning/planning_debug_tools/Readme/#closest-velocity-checker","title":"Closest velocity checker","text":"

This node prints the velocity information indicated by the planning/control modules on a terminal. For trajectories calculated by the planning modules, the target velocity at the trajectory point closest to the ego vehicle is printed. For control commands calculated by the control modules, the target velocity and acceleration are printed directly. This feature is helpful for purposes such as investigating why the vehicle does not move.
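
As a rough, hypothetical sketch (not the tool's actual implementation), the closest-point lookup can be pictured as follows, assuming simple 2D trajectory points:

#include <cstdio>\n#include <limits>\n#include <vector>\n\n// Hypothetical sketch of the closest-point velocity lookup.\nstruct TrajectoryPoint { double x; double y; double velocity; };\n\n// Return the index of the trajectory point closest to the ego position.\nsize_t findClosestIndex(const std::vector<TrajectoryPoint> & traj, double ego_x, double ego_y)\n{\n  size_t closest = 0;\n  double min_d2 = std::numeric_limits<double>::max();\n  for (size_t i = 0; i < traj.size(); ++i) {\n    const double dx = traj[i].x - ego_x;\n    const double dy = traj[i].y - ego_y;\n    const double d2 = dx * dx + dy * dy;\n    if (d2 < min_d2) { min_d2 = d2; closest = i; }\n  }\n  return closest;\n}\n\nint main()\n{\n  const std::vector<TrajectoryPoint> traj = {{0.0, 0.0, 5.0}, {1.0, 0.0, 4.0}, {2.0, 0.0, 0.0}};\n  const size_t i = findClosestIndex(traj, 1.2, 0.1);\n  std::printf(\"closest target velocity: %f m/s\", traj[i].velocity);\n}\n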

You can launch it with:

ros2 run planning_debug_tools closest_velocity_checker.py\n

"},{"location":"planning/planning_debug_tools/Readme/#trajectory-visualizer","title":"Trajectory visualizer","text":"

This is the old version of the trajectory analyzer. It is written in Python and is more flexible, but very slow.

"},{"location":"planning/planning_debug_tools/Readme/#for-other-use-case-experimental","title":"For other use case (experimental)","text":"

To see the behavior velocity planner's internal path with lane id, add the example values below to the behavior velocity analyzer and set is_publish_debug_path: true.

crosswalk ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/crosswalk/debug_info'\nintersection ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/intersection/debug_info'\ntraffic_light ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/traffic_light/debug_info'\nmerge_from_private ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/merge_from_private/debug_info'\nocclusion_spot ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/occlusion_spot/debug_info'\n
PlotVelocityOverArclength('v_crosswalk', crosswalk, tracker_time)\nPlotVelocityOverArclength('v_intersection', intersection, tracker_time)\nPlotVelocityOverArclength('v_merge_from_private', merge_from_private, tracker_time)\nPlotVelocityOverArclength('v_traffic_light', traffic_light, tracker_time)\nPlotVelocityOverArclength('v_occlusion', occlusion_spot, tracker_time)\n\nPlotYawOverArclength('yaw_crosswalk', crosswalk, tracker_time)\nPlotYawOverArclength('yaw_intersection', intersection, tracker_time)\nPlotYawOverArclength('yaw_merge_from_private', merge_from_private, tracker_time)\nPlotYawOverArclength('yaw_traffic_light', traffic_light, tracker_time)\nPlotYawOverArclength('yaw_occlusion', occlusion_spot, tracker_time)\n\nPlotCurrentVelocity('localization_kinematic_state', '/localization/kinematic_state', tracker_time)\n
"},{"location":"planning/planning_debug_tools/doc-stop-reason-visualizer/","title":"Doc stop reason visualizer","text":""},{"location":"planning/planning_debug_tools/doc-stop-reason-visualizer/#stop_reason_visualizer","title":"stop_reason_visualizer","text":"

This module visualizes stop factors quickly without having to select the correct debug markers. It is supposed to be used with virtual wall markers like below.

"},{"location":"planning/planning_debug_tools/doc-stop-reason-visualizer/#how-to-use","title":"How to use","text":"

Run this node.

ros2 run planning_debug_tools stop_reason_visualizer_exe\n

Add stop reason debug marker from rviz.

Note: the ros2 process can sometimes only be killed with killall stop_reason_visualizer_exe.

Reference

"},{"location":"planning/planning_validator/","title":"Planning Validator","text":"

The planning_validator is a module that checks the validity of a trajectory before it is published. The status of the validation can be viewed in the /diagnostics and /validation_status topics. When an invalid trajectory is detected, the planning_validator will process the trajectory following the selected option: \"0. publish the trajectory as it is\", \"1. stop publishing the trajectory\", \"2. publish the last validated trajectory\".
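
A minimal sketch of how these three options could be dispatched (the names here are illustrative assumptions, not the actual implementation):

#include <optional>\n\nstruct Trajectory {};  // placeholder for autoware_auto_planning_msgs/Trajectory\n\n// Returns the trajectory to publish, or nullopt to publish nothing.\nstd::optional<Trajectory> selectOutput(\n  int invalid_trajectory_handling_type, const Trajectory & input,\n  const Trajectory & last_valid, bool is_valid)\n{\n  if (is_valid) { return input; }\n  switch (invalid_trajectory_handling_type) {\n    case 0: return input;         // publish the trajectory as it is\n    case 1: return std::nullopt;  // stop publishing the trajectory\n    case 2: return last_valid;    // publish the last validated trajectory\n    default: return std::nullopt;\n  }\n}\n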

"},{"location":"planning/planning_validator/#supported-features","title":"Supported features","text":"

The following features are supported for trajectory validation and can have thresholds set by parameters:

The following features are to be implemented.

"},{"location":"planning/planning_validator/#inputsoutputs","title":"Inputs/Outputs","text":""},{"location":"planning/planning_validator/#inputs","title":"Inputs","text":"

The planning_validator takes in the following inputs:

Name Type Description ~/input/kinematics nav_msgs/Odometry ego pose and twist ~/input/trajectory autoware_auto_planning_msgs/Trajectory target trajectory to be validated in this node"},{"location":"planning/planning_validator/#outputs","title":"Outputs","text":"

It outputs the following:

Name Type Description ~/output/trajectory autoware_auto_planning_msgs/Trajectory validated trajectory ~/output/validation_status planning_validator/PlanningValidatorStatus validator status to inform the reason why the trajectory is valid/invalid /diagnostics diagnostic_msgs/DiagnosticStatus diagnostics to report errors"},{"location":"planning/planning_validator/#parameters","title":"Parameters","text":"

The following parameters can be set for the planning_validator:

"},{"location":"planning/planning_validator/#system-parameters","title":"System parameters","text":"Name Type Description Default value invalid_trajectory_handling_type int set the operation when the invalid trajectory is detected. 0: publish the trajectory even if it is invalid, 1: stop publishing the trajectory, 2: publish the last validated trajectory. 0 publish_diag bool the Diag will be set to ERROR when the number of consecutive invalid trajectory exceeds this threshold. (For example, threshold = 1 means, even if the trajectory is invalid, the Diag will not be ERROR if the next trajectory is valid.) true diag_error_count_threshold int if true, diagnostics msg is published. true display_on_terminal bool show error msg on terminal true"},{"location":"planning/planning_validator/#algorithm-parameters","title":"Algorithm parameters","text":""},{"location":"planning/planning_validator/#thresholds","title":"Thresholds","text":"

The input trajectory is detected as invalid if any of the following metrics exceeds its threshold.

Name Type Description Default value thresholds.interval double invalid threshold of the distance of two neighboring trajectory points 100.0 thresholds.relative_angle double invalid threshold of the relative angle of two neighboring trajectory points 2.0 thresholds.curvature double invalid threshold of the curvature in each trajectory point 1.0 thresholds.lateral_acc double invalid threshold of the lateral acceleration in each trajectory point 9.8 thresholds.longitudinal_max_acc double invalid threshold of the maximum longitudinal acceleration in each trajectory point 9.8 thresholds.longitudinal_min_acc double invalid threshold of the minimum longitudinal deceleration in each trajectory point -9.8 thresholds.steering double invalid threshold of the steering angle in each trajectory point 1.414 thresholds.steering_rate double invalid threshold of the steering angle rate in each trajectory point 10.0 thresholds.velocity_deviation double invalid threshold of the velocity deviation between the ego velocity and the trajectory point closest to ego. 100.0 thresholds.distance_deviation double invalid threshold of the distance deviation between the ego position and the trajectory point closest to ego. 100.0"},{"location":"planning/route_handler/","title":"route handler","text":"

route_handler is a library for calculating the driving route on the lanelet map.

"},{"location":"planning/rtc_auto_mode_manager/","title":"RTC Auto Mode Manager","text":""},{"location":"planning/rtc_auto_mode_manager/#purpose","title":"Purpose","text":"

RTC Auto Mode Manager is a node that automatically approves cooperation requests from behavior planning modules.

"},{"location":"planning/rtc_auto_mode_manager/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/rtc_auto_mode_manager/#input","title":"Input","text":"Name Type Description /planning/enable_auto_mode/** tier4_rtc_msgs/srv/AutoMode Service to enable auto mode for the module"},{"location":"planning/rtc_auto_mode_manager/#output","title":"Output","text":"Name Type Description /planning/enable_auto_mode/internal/** tier4_rtc_msgs/srv/AutoMode Service to enable auto mode for the module"},{"location":"planning/rtc_auto_mode_manager/#parameters","title":"Parameters","text":"Name Type Description module_list List of string Module names managing in rtc_auto_mode_manager default_enable_list List of string Module names enabled auto mode at initialization"},{"location":"planning/rtc_auto_mode_manager/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/rtc_auto_mode_manager/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/rtc_auto_mode_manager/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/rtc_interface/","title":"RTC Interface","text":""},{"location":"planning/rtc_interface/#purpose","title":"Purpose","text":"

RTC Interface is an interface to publish the decision status of behavior planning modules and to receive execution commands from outside the autonomous driving system.

"},{"location":"planning/rtc_interface/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/rtc_interface/#usage-example","title":"Usage example","text":"
// Generate instance (in this example, \"intersection\" is selected)\nrtc_interface::RTCInterface rtc_interface(node, \"intersection\");\n\n// Generate UUID\nconst unique_identifier_msgs::msg::UUID uuid = generateUUID(getModuleId());\n\n// Repeat while module is running\nwhile (...) {\n// Get safety status of the module corresponding to the module id\nconst bool safe = ...\n\n// Get distance to the object corresponding to the module id\nconst double start_distance = ...\nconst double finish_distance = ...\n\n// Get time stamp\nconst rclcpp::Time stamp = ...\n\n// Update status\nrtc_interface.updateCooperateStatus(uuid, safe, start_distance, finish_distance, stamp);\n\nif (rtc_interface.isActivated(uuid)) {\n// Execute planning\n} else {\n// Stop planning\n}\n// Get time stamp\nconst rclcpp::Time stamp = ...\n\n// Publish status topic\nrtc_interface.publishCooperateStatus(stamp);\n}\n\n// Remove the status from array\nrtc_interface.removeCooperateStatus(uuid);\n
"},{"location":"planning/rtc_interface/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/rtc_interface/#rtcinterface-constructor","title":"RTCInterface (Constructor)","text":"
rtc_interface::RTCInterface(rclcpp::Node & node, const std::string & name);\n
"},{"location":"planning/rtc_interface/#description","title":"Description","text":"

A constructor for rtc_interface::RTCInterface.

"},{"location":"planning/rtc_interface/#input","title":"Input","text":""},{"location":"planning/rtc_interface/#output","title":"Output","text":"

An instance of RTCInterface

"},{"location":"planning/rtc_interface/#publishcooperatestatus","title":"publishCooperateStatus","text":"
rtc_interface::publishCooperateStatus(const rclcpp::Time & stamp)\n
"},{"location":"planning/rtc_interface/#description_1","title":"Description","text":"

Publish registered cooperate status.

"},{"location":"planning/rtc_interface/#input_1","title":"Input","text":""},{"location":"planning/rtc_interface/#output_1","title":"Output","text":"

Nothing

"},{"location":"planning/rtc_interface/#updatecooperatestatus","title":"updateCooperateStatus","text":"
rtc_interface::updateCooperateStatus(const unique_identifier_msgs::msg::UUID & uuid, const bool safe, const double start_distance, const double finish_distance, const rclcpp::Time & stamp)\n
"},{"location":"planning/rtc_interface/#description_2","title":"Description","text":"

Update the cooperate status corresponding to uuid. If the cooperate status corresponding to uuid is not registered yet, a new cooperate status is added.

"},{"location":"planning/rtc_interface/#input_2","title":"Input","text":""},{"location":"planning/rtc_interface/#output_2","title":"Output","text":"

Nothing

"},{"location":"planning/rtc_interface/#removecooperatestatus","title":"removeCooperateStatus","text":"
rtc_interface::removeCooperateStatus(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/rtc_interface/#description_3","title":"Description","text":"

Remove cooperate status corresponding to uuid from registered statuses.

"},{"location":"planning/rtc_interface/#input_3","title":"Input","text":""},{"location":"planning/rtc_interface/#output_3","title":"Output","text":"

Nothing

"},{"location":"planning/rtc_interface/#clearcooperatestatus","title":"clearCooperateStatus","text":"
rtc_interface::clearCooperateStatus()\n
"},{"location":"planning/rtc_interface/#description_4","title":"Description","text":"

Remove all cooperate statuses.

"},{"location":"planning/rtc_interface/#input_4","title":"Input","text":"

Nothing

"},{"location":"planning/rtc_interface/#output_4","title":"Output","text":"

Nothing

"},{"location":"planning/rtc_interface/#isactivated","title":"isActivated","text":"
rtc_interface::isActivated(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/rtc_interface/#description_5","title":"Description","text":"

Return received command status corresponding to uuid.

"},{"location":"planning/rtc_interface/#input_5","title":"Input","text":""},{"location":"planning/rtc_interface/#output_5","title":"Output","text":"

If auto mode is enabled, the result is based on the safety status. Otherwise, it returns true if the received command is ACTIVATED, and false if not.

"},{"location":"planning/rtc_interface/#isregistered","title":"isRegistered","text":"
rtc_interface::isRegistered(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/rtc_interface/#description_6","title":"Description","text":"

Return true if uuid is registered.

"},{"location":"planning/rtc_interface/#input_6","title":"Input","text":""},{"location":"planning/rtc_interface/#output_6","title":"Output","text":"

If uuid is registered, return true. If not, return false.

"},{"location":"planning/rtc_interface/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/rtc_interface/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/rtc_replayer/","title":"rtc_replayer","text":""},{"location":"planning/rtc_replayer/#purpose","title":"Purpose","text":"

The current issue with RTC commands is that the service calls are not recorded in rosbag, so it is very hard to analyze what exactly happened. This package makes it possible to replay the RTC command service from the RTC status topic recorded in rosbag to resolve that issue.

"},{"location":"planning/rtc_replayer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/rtc_replayer/#input","title":"Input","text":"Name Type Description /debug/rtc_status tier4_rtc_msgs::msg::CooperateStatusArray CooperateStatusArray that is recorded in rosbag"},{"location":"planning/rtc_replayer/#output","title":"Output","text":"Name Type Description /api/external/set/rtc_commands tier4_rtc_msgs::msg::CooperateCommands CooperateCommands that is replayed by this package"},{"location":"planning/rtc_replayer/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/rtc_replayer/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This package cannot replay CooperateCommands correctly if the CooperateStatusArray is not stable. Also, the replayed commands always lag one step behind the actual ones; however, this does not affect the behavior much.

"},{"location":"planning/rtc_replayer/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"

tbd.

"},{"location":"planning/scenario_selector/","title":"scenario_selector","text":""},{"location":"planning/scenario_selector/#scenario_selector_node","title":"scenario_selector_node","text":"

scenario_selector_node is a node that switches between the trajectories of each scenario.

"},{"location":"planning/scenario_selector/#input-topics","title":"Input topics","text":"Name Type Description ~input/lane_driving/trajectory autoware_auto_planning_msgs::Trajectory trajectory of LaneDriving scenario ~input/parking/trajectory autoware_auto_planning_msgs::Trajectory trajectory of Parking scenario ~input/lanelet_map autoware_auto_mapping_msgs::HADMapBin ~input/route autoware_planning_msgs::LaneletRoute route and goal pose ~input/odometry nav_msgs::Odometry for checking whether vehicle is stopped is_parking_completed bool (implemented as rosparam) whether all split trajectory of Parking are published"},{"location":"planning/scenario_selector/#output-topics","title":"Output topics","text":"Name Type Description ~output/scenario tier4_planning_msgs::Scenario current scenario and scenarios to be activated ~output/trajectory autoware_auto_planning_msgs::Trajectory trajectory to be followed"},{"location":"planning/scenario_selector/#output-tfs","title":"Output TFs","text":"

None

"},{"location":"planning/scenario_selector/#how-to-launch","title":"How to launch","text":"
  1. Write your remapping info in scenario_selector.launch or add args when executing roslaunch
  2. roslaunch scenario_selector scenario_selector.launch
    • If you would like to use only a single scenario, roslaunch scenario_selector dummy_scenario_selector_{scenario_name}.launch
"},{"location":"planning/scenario_selector/#parameters","title":"Parameters","text":"Parameter Type Description update_rate double timer's update rate th_max_message_delay_sec double threshold time of input messages' maximum delay th_arrived_distance_m double threshold distance to check if vehicle has arrived at the trajectory's endpoint th_stopped_time_sec double threshold time to check if vehicle is stopped th_stopped_velocity_mps double threshold velocity to check if vehicle is stopped"},{"location":"planning/scenario_selector/#flowchart","title":"Flowchart","text":""},{"location":"planning/static_centerline_optimizer/","title":"Static Centerline Optimizer","text":""},{"location":"planning/static_centerline_optimizer/#purpose","title":"Purpose","text":"

This package statically calculates a centerline that keeps the path footprints inside the drivable area.

When driving on narrow roads, the default centerline, which is the middle line between the lanelets' right and left boundaries, often makes the path footprints go outside the drivable area. To keep the path footprints inside the drivable area, we use the online path shape optimization of the obstacle_avoidance_planner package.

Instead of online path shape optimization, we introduce static centerline optimization. This static centerline optimization has the following advantages.

"},{"location":"planning/static_centerline_optimizer/#use-cases","title":"Use cases","text":"

There are two interfaces to communicate with the centerline optimizer.

"},{"location":"planning/static_centerline_optimizer/#vector-map-builder-interface","title":"Vector Map Builder Interface","text":"

Note: This function of Vector Map Builder has not been released. Please wait for a while. Currently there is no documentation about Vector Map Builder's operation for this function.

The optimized centerline can be generated from Vector Map Builder's operation.

We can run

with the following command by designating <vehicle_model>

ros2 launch static_centerline_optimizer run_planning_server.launch.xml vehicle_model:=<vehicle-model>\n

FYI, the port ID of the HTTP server is 4010 by default.

"},{"location":"planning/static_centerline_optimizer/#command-line-interface","title":"Command Line Interface","text":"

The optimized centerline can be generated from the command line interface by designating

ros2 launch static_centerline_optimizer static_centerline_optimizer.launch.xml run_backgrond:=false lanelet2_input_file_path:=<input-osm-path> lanelet2_output_file_path:=<output-osm-path> start_lanelet_id:=<start-lane-id> end_lanelet_id:=<end-lane-id> vehicle_model:=<vehicle-model>\n

The default output map path containing the optimized centerline is /tmp/lanelet2_map.osm. If you want to change the output map path, you can remap the path by designating <output-osm-path>.

"},{"location":"planning/static_centerline_optimizer/#visualization","title":"Visualization","text":"

When launching the path planning server, rviz is launched as well, as shown below.

"},{"location":"planning/static_centerline_optimizer/#unsafe-footprints","title":"Unsafe footprints","text":"

Sometimes the optimized centerline footprints are close to the lanes' boundaries. We can check how close they are with the unsafe footprints marker as follows.

Each footprint's color depends on its distance to the boundaries, and the text shows this distance.

By default, footprints' color is

"},{"location":"planning/surround_obstacle_checker/","title":"Surround Obstacle Checker","text":""},{"location":"planning/surround_obstacle_checker/#purpose","title":"Purpose","text":"

This module subscribes to the required data (ego pose, obstacles, etc.), and publishes a zero velocity limit to keep the vehicle stopped if any of the stop conditions are satisfied.

"},{"location":"planning/surround_obstacle_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/surround_obstacle_checker/#flow-chart","title":"Flow chart","text":""},{"location":"planning/surround_obstacle_checker/#algorithms","title":"Algorithms","text":""},{"location":"planning/surround_obstacle_checker/#check-data","title":"Check data","text":"

Check that surround_obstacle_checker receives the no-ground pointcloud, dynamic objects, and current velocity data.

"},{"location":"planning/surround_obstacle_checker/#get-distance-to-nearest-object","title":"Get distance to nearest object","text":"

Calculate the distance between the ego vehicle and the nearest object. This function computes the minimum distance between the ego vehicle's polygon and all points in the pointcloud as well as the polygons of the dynamic objects.
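
A minimal sketch of this minimum-distance computation, assuming Boost.Geometry 2D types (an illustration, not the node's actual code):

#include <boost/geometry.hpp>\n#include <boost/geometry/geometries/point_xy.hpp>\n#include <boost/geometry/geometries/polygon.hpp>\n#include <algorithm>\n#include <limits>\n#include <vector>\n\nnamespace bg = boost::geometry;\nusing Point2d = bg::model::d2::point_xy<double>;\nusing Polygon2d = bg::model::polygon<Point2d>;\n\n// Minimum distance from the ego footprint to obstacle points and object polygons.\ndouble minDistanceToObstacles(\n  const Polygon2d & ego_footprint, const std::vector<Point2d> & pointcloud,\n  const std::vector<Polygon2d> & object_polygons)\n{\n  double min_dist = std::numeric_limits<double>::max();\n  for (const auto & p : pointcloud) { min_dist = std::min(min_dist, bg::distance(ego_footprint, p)); }\n  for (const auto & poly : object_polygons) { min_dist = std::min(min_dist, bg::distance(ego_footprint, poly)); }\n  return min_dist;\n}\n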

"},{"location":"planning/surround_obstacle_checker/#stop-requirement","title":"Stop requirement","text":"

If all of the following conditions are satisfied, it plans to stop.

"},{"location":"planning/surround_obstacle_checker/#states","title":"States","text":"

To prevent chattering, surround_obstacle_checker manages two states. As mentioned in the stop condition section, chattering is prevented by changing the threshold for detecting surrounding obstacles depending on the state.

"},{"location":"planning/surround_obstacle_checker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/surround_obstacle_checker/#input","title":"Input","text":"Name Type Description /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects autoware_auto_perception_msgs::msg::PredictedObjects Dynamic objects /localization/kinematic_state nav_msgs::msg::Odometry Current twist /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"planning/surround_obstacle_checker/#output","title":"Output","text":"Name Type Description ~/output/velocity_limit_clear_command tier4_planning_msgs::msg::VelocityLimitClearCommand Velocity limit clear command ~/output/max_velocity tier4_planning_msgs::msg::VelocityLimit Velocity limit command ~/output/no_start_reason diagnostic_msgs::msg::DiagnosticStatus No start reason ~/output/stop_reasons tier4_planning_msgs::msg::StopReasonArray Stop reasons ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization ~/debug/footprint geometry_msgs::msg::PolygonStamped Ego vehicle base footprint for visualization ~/debug/footprint_offset geometry_msgs::msg::PolygonStamped Ego vehicle footprint with surround_check_distance offset for visualization ~/debug/footprint_recover_offset geometry_msgs::msg::PolygonStamped Ego vehicle footprint with surround_check_recover_distance offset for visualization"},{"location":"planning/surround_obstacle_checker/#parameters","title":"Parameters","text":"Name Type Description Default value use_pointcloud bool Use pointcloud as obstacle check true use_dynamic_object bool Use dynamic object as obstacle check true surround_check_distance double If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status [m] 0.5 surround_check_recover_distance double If no object exists in this distance, transit to \"non-surrounding-obstacle\" status [m] 0.8 state_clear_time double Threshold to clear stop state [s] 2.0 stop_state_ego_speed double Threshold to check ego vehicle stopped [m/s] 0.1 stop_state_entry_duration_time double Threshold to check ego vehicle stopped [s] 0.1 publish_debug_footprints bool Publish vehicle footprint with/without offsets true"},{"location":"planning/surround_obstacle_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

To perform stop planning, it is necessary to get obstacle pointcloud data. Hence, it does not plan to stop if the obstacle is in a blind spot.

"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/","title":"Surround Obstacle Checker","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#purpose","title":"Purpose","text":"

surround_obstacle_checker is a module that plans a stop so that the ego vehicle does not start moving when an obstacle exists around it while it is stopped.

"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#flow-chart","title":"Flow chart","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#algorithms","title":"Algorithms","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#check-data","title":"Check data","text":"

Check whether pointcloud, dynamic object, and ego velocity data have been acquired.

"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#get-distance-to-nearest-object","title":"Get distance to nearest object","text":"

Calculate the distance between the ego vehicle and the nearest obstacle. Here, the ego vehicle's polygon is computed, and the distances to each point in the pointcloud and to each dynamic object's polygon are calculated to obtain the distance to the nearest obstacle.

"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#stop-condition","title":"Stop condition","text":"

When all of the following conditions are satisfied, the ego vehicle plans a stop.

"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#states","title":"States","text":"

To prevent chattering, surround_obstacle_checker manages its state. As described in the Stop condition section, chattering is prevented by changing the obstacle detection threshold depending on the state.

"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#input","title":"Input","text":"Name Type Description /perception/obstacle_segmentation/pointcloud sensor_msgs::msg::PointCloud2 Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects autoware_auto_perception_msgs::msg::PredictedObjects Dynamic objects /localization/kinematic_state nav_msgs::msg::Odometry Current twist /tf tf2_msgs::msg::TFMessage TF /tf_static tf2_msgs::msg::TFMessage TF static"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#output","title":"Output","text":"Name Type Description ~/output/velocity_limit_clear_command tier4_planning_msgs::msg::VelocityLimitClearCommand Velocity limit clear command ~/output/max_velocity tier4_planning_msgs::msg::VelocityLimit Velocity limit command ~/output/no_start_reason diagnostic_msgs::msg::DiagnosticStatus No start reason ~/output/stop_reasons tier4_planning_msgs::msg::StopReasonArray Stop reasons ~/debug/marker visualization_msgs::msg::MarkerArray Marker for visualization"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#parameters","title":"Parameters","text":"Name Type Description Default value use_pointcloud bool Use pointcloud as obstacle check true use_dynamic_object bool Use dynamic object as obstacle check true surround_check_distance double If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status [m] 0.5 surround_check_recover_distance double If no object exists in this distance, transit to \"non-surrounding-obstacle\" status [m] 0.8 state_clear_time double Threshold to clear stop state [s] 2.0 stop_state_ego_speed double Threshold to check ego vehicle stopped [m/s] 0.1 stop_state_entry_duration_time double Threshold to check ego vehicle stopped [s] 0.1"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Since this feature requires observation of obstacle pointclouds to work, stop planning is not performed when the obstacle is in a blind spot.

"},{"location":"sensing/geo_pos_conv/","title":"geo_pos_conv","text":""},{"location":"sensing/geo_pos_conv/#purpose","title":"Purpose","text":"

The geo_pos_conv is a library for converting between x, y positions in a plane rectangular coordinate system and latitude/longitude on the earth.

"},{"location":"sensing/geo_pos_conv/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/geo_pos_conv/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/geo_pos_conv/#parameters","title":"Parameters","text":""},{"location":"sensing/geo_pos_conv/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/geo_pos_conv/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/geo_pos_conv/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/geo_pos_conv/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/geo_pos_conv/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/gnss_poser/","title":"gnss_poser","text":""},{"location":"sensing/gnss_poser/#purpose","title":"Purpose","text":"

The gnss_poser is a node that subscribes to GNSS sensing messages and calculates the vehicle pose with covariance.

"},{"location":"sensing/gnss_poser/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/gnss_poser/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/gnss_poser/#input","title":"Input","text":"Name Type Description ~/input/fix sensor_msgs::msg::NavSatFix gnss status message ~/input/autoware_orientation autoware_sensing_msgs::msg::GnssInsOrientationStamped orientation click here for more details"},{"location":"sensing/gnss_poser/#output","title":"Output","text":"Name Type Description ~/output/pose geometry_msgs::msg::PoseStamped vehicle pose calculated from gnss sensing data ~/output/gnss_pose_cov geometry_msgs::msg::PoseWithCovarianceStamped vehicle pose with covariance calculated from gnss sensing data ~/output/gnss_fixed tier4_debug_msgs::msg::BoolStamped gnss fix status"},{"location":"sensing/gnss_poser/#parameters","title":"Parameters","text":""},{"location":"sensing/gnss_poser/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description base_frame string \"base_link\" frame id gnss_frame string \"gnss\" frame id gnss_base_frame string \"gnss_base_link\" frame id map_frame string \"map\" frame id coordinate_system int \"4\" coordinate system enumeration; 0: UTM, 1: MGRS, 2: Plane, 3: WGS84 Local Coordinate System, 4: UTM Local Coordinate System plane_zone int 9 identification number of the plane rectangular coordinate systems."},{"location":"sensing/gnss_poser/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/gnss_poser/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/gnss_poser/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/gnss_poser/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/gnss_poser/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/image_diagnostics/","title":"image_diagnostics","text":""},{"location":"sensing/image_diagnostics/#purpose","title":"Purpose","text":"

The image_diagnostics is a node that checks the status of the input raw image.

"},{"location":"sensing/image_diagnostics/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The figure below shows the flowchart of the image diagnostics node. Each image is divided into small blocks for block state assessment.

The state of each small image block is assessed as shown in the figure below.

After the states of all of the image's blocks are evaluated, the whole image status is summarized as below.

"},{"location":"sensing/image_diagnostics/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/image_diagnostics/#input","title":"Input","text":"Name Type Description input/raw_image sensor_msgs::msg::Image raw image"},{"location":"sensing/image_diagnostics/#output","title":"Output","text":"Name Type Description image_diag/debug/gray_image sensor_msgs::msg::Image gray image image_diag/debug/dft_image sensor_msgs::msg::Image discrete Fourier transformation image image_diag/debug/diag_block_image sensor_msgs::msg::Image each block state colorization image_diag/image_state_diag tier4_debug_msgs::msg::Int32Stamped image diagnostics status value /diagnostics diagnostic_msgs::msg::DiagnosticArray diagnostics"},{"location":"sensing/image_diagnostics/#parameters","title":"Parameters","text":""},{"location":"sensing/image_diagnostics/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/image_diagnostics/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/image_diagnostics/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/image_diagnostics/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/image_diagnostics/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":" "},{"location":"sensing/image_transport_decompressor/","title":"image_transport_decompressor","text":""},{"location":"sensing/image_transport_decompressor/#purpose","title":"Purpose","text":"

The image_transport_decompressor is a node that decompresses images.

"},{"location":"sensing/image_transport_decompressor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/image_transport_decompressor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/image_transport_decompressor/#input","title":"Input","text":"Name Type Description ~/input/compressed_image sensor_msgs::msg::CompressedImage compressed image"},{"location":"sensing/image_transport_decompressor/#output","title":"Output","text":"Name Type Description ~/output/raw_image sensor_msgs::msg::Image decompressed image"},{"location":"sensing/image_transport_decompressor/#parameters","title":"Parameters","text":""},{"location":"sensing/image_transport_decompressor/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/image_transport_decompressor/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/image_transport_decompressor/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/image_transport_decompressor/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/image_transport_decompressor/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/imu_corrector/","title":"imu_corrector","text":""},{"location":"sensing/imu_corrector/#purpose","title":"Purpose","text":"

imu_corrector_node is a node that corrects IMU data.

  1. Correct yaw rate offset b by reading the parameter.
  2. Correct yaw rate standard deviation \\sigma by reading the parameter.

Mathematically, we assume the following equation:

\\tilde{\\omega}(t) = \\omega(t) + b(t) + n(t)

where \\tilde{\\omega} denotes observed angular velocity, \\omega denotes true angular velocity, b denotes an offset, and n denotes a gaussian noise. We also assume that n\\sim\\mathcal{N}(0, \\sigma^2).

"},{"location":"sensing/imu_corrector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/imu_corrector/#input","title":"Input","text":"Name Type Description ~input sensor_msgs::msg::Imu raw imu data"},{"location":"sensing/imu_corrector/#output","title":"Output","text":"Name Type Description ~output sensor_msgs::msg::Imu corrected imu data"},{"location":"sensing/imu_corrector/#parameters","title":"Parameters","text":""},{"location":"sensing/imu_corrector/#core-parameters","title":"Core Parameters","text":"Name Type Description angular_velocity_offset_x double roll rate offset in imu_link [rad/s] angular_velocity_offset_y double pitch rate offset imu_link [rad/s] angular_velocity_offset_z double yaw rate offset imu_link [rad/s] angular_velocity_stddev_xx double roll rate standard deviation imu_link [rad/s] angular_velocity_stddev_yy double pitch rate standard deviation imu_link [rad/s] angular_velocity_stddev_zz double yaw rate standard deviation imu_link [rad/s] acceleration_stddev double acceleration standard deviation imu_link [m/s^2]"},{"location":"sensing/imu_corrector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/imu_corrector/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/imu_corrector/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/imu_corrector/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/imu_corrector/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/livox/livox_tag_filter/","title":"livox_tag_filter","text":""},{"location":"sensing/livox/livox_tag_filter/#purpose","title":"Purpose","text":"

The livox_tag_filter is a node that removes noise from the pointcloud by using the following tags:

"},{"location":"sensing/livox/livox_tag_filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/livox/livox_tag_filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/livox/livox_tag_filter/#input","title":"Input","text":"Name Type Description ~/input sensor_msgs::msg::PointCloud2 reference points"},{"location":"sensing/livox/livox_tag_filter/#output","title":"Output","text":"Name Type Description ~/output sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/livox/livox_tag_filter/#parameters","title":"Parameters","text":""},{"location":"sensing/livox/livox_tag_filter/#node-parameters","title":"Node Parameters","text":"Name Type Description ignore_tags vector ignored tags (See the following table)"},{"location":"sensing/livox/livox_tag_filter/#tag-parameters","title":"Tag Parameters","text":"Bit Description Options 0~1 Point property based on spatial position 00: Normal 01: High confidence level of the noise 10: Moderate confidence level of the noise 11: Low confidence level of the noise 2~3 Point property based on intensity 00: Normal 01: High confidence level of the noise 10: Moderate confidence level of the noise 11: Reserved 4~5 Return number 00: return 0 01: return 1 10: return 2 11: return 3 6~7 Reserved

You can download a more detailed description of the Livox tags from external link [1].
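
As an illustration of the bit layout in the table above, the tag fields can be decoded as follows (an illustrative sketch, not the node's actual code):

#include <cstdint>\n\n// Decode the 8-bit Livox tag according to the bit layout in the table above.\nstruct TagFields { uint8_t spatial; uint8_t intensity; uint8_t return_number; };\n\nTagFields decodeTag(uint8_t tag)\n{\n  TagFields fields;\n  fields.spatial = tag & 0x03;               // bits 0-1: noise confidence by spatial position\n  fields.intensity = (tag >> 2) & 0x03;      // bits 2-3: noise confidence by intensity\n  fields.return_number = (tag >> 4) & 0x03;  // bits 4-5: return number\n  return fields;\n}\n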

"},{"location":"sensing/livox/livox_tag_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/livox/livox_tag_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/livox/livox_tag_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/livox/livox_tag_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] https://www.livoxtech.com/downloads

"},{"location":"sensing/livox/livox_tag_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/","title":"pointcloud_preprocessor","text":""},{"location":"sensing/pointcloud_preprocessor/#purpose","title":"Purpose","text":"

The pointcloud_preprocessor is a package that includes the following filters:

"},{"location":"sensing/pointcloud_preprocessor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

Detailed descriptions of each filter's algorithm are given in the following links.

Filter Name Description Detail concatenate_data subscribe multiple pointclouds and concatenate them into a pointcloud link crop_box_filter remove points within a given box link distortion_corrector compensate pointcloud distortion caused by ego vehicle's movement during 1 scan link downsample_filter downsampling input pointcloud link outlier_filter remove points caused by hardware problems, rain drops and small insects as a noise link passthrough_filter remove points on the outside of a range in given field (e.g. x, y, z, intensity) link pointcloud_accumulator accumulate pointclouds for a given amount of time link vector_map_filter remove points on the outside of lane by using vector map link vector_map_inside_area_filter remove points inside of vector map area that has given type by parameter link"},{"location":"sensing/pointcloud_preprocessor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/indices pcl_msgs::msg::Indices reference indices"},{"location":"sensing/pointcloud_preprocessor/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/pointcloud_preprocessor/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description input_frame string \" \" input frame id output_frame string \" \" output frame id max_queue_size int 5 max queue size of input/output topics use_indices bool false flag to use pointcloud indices latched_indices bool false flag to latch pointcloud indices approximate_sync bool false flag to use approximate sync option"},{"location":"sensing/pointcloud_preprocessor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

pointcloud_preprocessor::Filter is implemented based on pcl_perception [1] because of this issue.

"},{"location":"sensing/pointcloud_preprocessor/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/#referencesexternal-links","title":"References/External links","text":"

[1] https://github.com/ros-perception/perception_pcl/blob/ros2/pcl_ros/src/pcl_ros/filters/filter.cpp

"},{"location":"sensing/pointcloud_preprocessor/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/","title":"blockage_diag","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#purpose","title":"Purpose","text":"

To ensure the performance of the LiDAR and safety for autonomous driving, an abnormal-condition diagnostics feature is needed. LiDAR blockage is an abnormal condition of the LiDAR in which unwanted objects stick to and block the light pulses and return signals. This node's purpose is to detect the existence of blockage on the LiDAR as well as its size and location.

"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This node uses the no-return region and its location to decide whether it is a blockage.

The logic is shown below.
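
As a rough illustration of the idea (not the node's actual implementation), the blockage ratio can be thought of as the fraction of no-return cells in a region of the depth map:

#include <vector>\n\n// Illustrative sketch: ratio of no-return cells (depth == 0) in the rows\n// [row_begin, row_end) of a depth map, to be compared against blockage_ratio_threshold.\ndouble noReturnRatio(const std::vector<std::vector<float>> & depth_map, int row_begin, int row_end)\n{\n  int no_return = 0;\n  int total = 0;\n  for (int r = row_begin; r < row_end; ++r) {\n    for (const float depth : depth_map[r]) {\n      ++total;\n      if (depth == 0.0f) { ++no_return; }\n    }\n  }\n  return total == 0 ? 0.0 : static_cast<double>(no_return) / total;\n}\n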

"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#input","title":"Input","text":"Name Type Description ~/input/pointcloud_raw_ex sensor_msgs::msg::PointCloud2 The raw point cloud data is used to detect the no-return region"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#output","title":"Output","text":"Name Type Description ~/output/blockage_diag/debug/blockage_mask_image sensor_msgs::msg::Image The mask image of detected blockage ~/output/blockage_diag/debug/ground_blockage_ratio tier4_debug_msgs::msg::Float32Stamped The area ratio of blockage region in ground region ~/output/blockage_diag/debug/sky_blockage_ratio tier4_debug_msgs::msg::Float32Stamped The area ratio of blockage region in sky region ~/output/blockage_diag/debug/lidar_depth_map sensor_msgs::msg::Image The depth map image of input point cloud"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#parameters","title":"Parameters","text":"Name Type Description blockage_ratio_threshold float The threshold of blockage area ratio blockage_count_threshold float The threshold of number continuous blockage frames horizontal_ring_id int The id of horizontal ring of the LiDAR angle_range vector The effective range of LiDAR vertical_bins int The LiDAR channel number model string The LiDAR model buffering_frames uint The number of buffering [range:1-200] buffering_interval uint The interval of buffering"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#assumptions-known-limits","title":"Assumptions / Known limits","text":"
  1. Only the Hesai Pandar40P and Hesai PandarQT were tested. For a new LiDAR, it is necessary to check the order of the channel IDs in the vertical distribution manually and modify the code.
  2. The current method is still limited for the dust type of blockage when dust particles are sparsely distributed.
"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#referencesexternal-links","title":"References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/","title":"concatenate_data","text":""},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#purpose","title":"Purpose","text":"

Many self-driving cars combine multiple LiDARs to expand the sensing range. Therefore, a function to combine multiple point clouds is required.

To combine multiple sensor data with similar timestamps, message_filters is often used in ROS-based systems, but this requires the assumption that all inputs can be received. Since safety must be strongly considered in autonomous driving, the point cloud concatenation node must be designed so that even if one sensor fails, the remaining sensor information can still be output.
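
A minimal sketch of such a timeout-based policy (all names here are hypothetical; the actual node also uses a timer so that the timeout fires even without new input):

#include <chrono>\n#include <map>\n#include <string>\n#include <utility>\n#include <vector>\n\nstruct Cloud {};  // placeholder for sensor_msgs/PointCloud2\n\n// Concatenate when every input topic has arrived, or when timeout_sec has\n// elapsed since the first cloud of the current cycle, whichever comes first.\nclass ConcatPolicy\n{\npublic:\n  ConcatPolicy(std::vector<std::string> topics, double timeout_sec)\n  : topics_(std::move(topics)), timeout_sec_(timeout_sec) {}\n\n  // Returns true when the collected clouds should be concatenated and published.\n  bool onCloudReceived(const std::string & topic, const Cloud & cloud)\n  {\n    const auto now = std::chrono::steady_clock::now();\n    if (received_.empty()) { cycle_start_ = now; }\n    received_[topic] = cloud;\n    const bool all_arrived = received_.size() == topics_.size();\n    const bool timed_out = std::chrono::duration<double>(now - cycle_start_).count() > timeout_sec_;\n    if (all_arrived || timed_out) { received_.clear(); return true; }\n    return false;\n  }\n\nprivate:\n  std::vector<std::string> topics_;\n  double timeout_sec_;\n  std::map<std::string, Cloud> received_;\n  std::chrono::steady_clock::time_point cycle_start_;\n};\n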

"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The figure below represents the reception time of each sensor data and how the data are combined in this case.

"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#input","title":"Input","text":"Name Type Description ~/input/twist autoware_auto_vehicle_msgs::msg::VelocityReport The vehicle odometry is used to interpolate the timestamp of each sensor data"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::Pointcloud2 concatenated point clouds"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#parameters","title":"Parameters","text":"Name Type Default Value Description input/points vector of string [] input topic names that type must be sensor_msgs::msg::Pointcloud2 input_frame string \"\" input frame id output_frame string \"\" output frame id max_queue_size int 5 max queue size of input/output topics"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description timeout_sec double 0.1 tolerance of time to publish next pointcloud [s]When this time limit is exceeded, the filter concatenates and publishes pointcloud, even if not all the point clouds are subscribed."},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

It is necessary to assume that the vehicle odometry value exists, that the sensor data and odometry timestamps are correct, and that the TF from base_link to sensor_frame is also correct.

"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/","title":"crop_box_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#purpose","title":"Purpose","text":"

The crop_box_filter is a node that removes points within a given box region. This filter is used to remove the points that hit the vehicle itself.

"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

pcl::CropBox is used, which filters all points inside a given box.
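
A minimal PCL sketch of this behavior, with setNegative(true) so that the points inside the box (e.g. the vehicle body) are dropped; the box limits below are illustrative values:

#include <pcl/filters/crop_box.h>\n#include <pcl/point_cloud.h>\n#include <pcl/point_types.h>\n\n// Remove the points inside the given box (e.g. points hitting the vehicle itself).\npcl::PointCloud<pcl::PointXYZ>::Ptr removeVehiclePoints(const pcl::PointCloud<pcl::PointXYZ>::Ptr & input)\n{\n  pcl::CropBox<pcl::PointXYZ> crop;\n  crop.setInputCloud(input);\n  crop.setMin(Eigen::Vector4f(-1.0f, -1.0f, -1.0f, 1.0f));  // min_x, min_y, min_z\n  crop.setMax(Eigen::Vector4f(1.0f, 1.0f, 1.0f, 1.0f));     // max_x, max_y, max_z\n  crop.setNegative(true);  // keep the points outside the box\n  pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);\n  crop.filter(*output);\n  return output;\n}\n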

"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description min_x double -1.0 x-coordinate minimum value for crop range max_x double 1.0 x-coordinate maximum value for crop range min_y double -1.0 y-coordinate minimum value for crop range max_y double 1.0 y-coordinate maximum value for crop range min_z double -1.0 z-coordinate minimum value for crop range max_z double 1.0 z-coordinate maximum value for crop range"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/","title":"distortion_corrector","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#purpose","title":"Purpose","text":"

The distortion_corrector is a node that compensates for the pointcloud distortion caused by the ego vehicle's movement during one scan.

Since the LiDAR sensor scans by rotating an internal laser, the resulting point cloud will be distorted if the ego-vehicle moves during a single scan (as shown by the figure below). The node corrects this by interpolating sensor data using odometry of ego-vehicle.

"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The offset equation is given by $ TimeOffset = (55.296 \mu s \times SequenceIndex) + (2.304 \mu s \times DataPointIndex) $

To calculate the exact point time, add the TimeOffset to the timestamp. $ ExactPointTime = TimeStamp + TimeOffset $
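
For example, the 10th data point of the 2nd firing sequence has an offset of 2 x 55.296 + 10 x 2.304 = 133.632 microseconds. A minimal sketch of this computation:

#include <cstdint>\n#include <cstdio>\n\n// Time offset of a point within a scan, in microseconds (equation above).\ndouble timeOffsetUs(uint32_t sequence_index, uint32_t data_point_index)\n{\n  return 55.296 * sequence_index + 2.304 * data_point_index;\n}\n\nint main()\n{\n  std::printf(\"%f us\", timeOffsetUs(2, 10));  // 110.592 + 23.04 = 133.632 us\n}\n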

"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/twist geometry_msgs::msg::TwistWithCovarianceStamped twist ~/input/imu sensor_msgs::msg::Imu imu data"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description timestamp_field_name string \"time_stamp\" time stamp field name use_imu bool true use gyroscope for yaw rate if true, else use vehicle status"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/","title":"downsample_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#purpose","title":"Purpose","text":"

The downsample_filter is a node that reduces the number of points.

"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#approximate-downsample-filter","title":"Approximate Downsample Filter","text":"

pcl::VoxelGridNearestCentroid is used. The algorithm is described in tier4_pcl_extensions.

"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#random-downsample-filter","title":"Random Downsample Filter","text":"

pcl::RandomSample is used, in which points are sampled with uniform probability.

"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#voxel-grid-downsample-filter","title":"Voxel Grid Downsample Filter","text":"

pcl::VoxelGrid is used, in which the points in each voxel are approximated by their centroid.

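A minimal sketch of this step using pcl::VoxelGrid with the default voxel sizes from the core-parameter table below (a simplified illustration, not the node's implementation):

#include <pcl/filters/voxel_grid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Approximate the points in each voxel by their centroid.
void voxelGridDownsample(
  const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & input,
  pcl::PointCloud<pcl::PointXYZ> & output)
{
  pcl::VoxelGrid<pcl::PointXYZ> filter;
  filter.setInputCloud(input);
  filter.setLeafSize(0.3f, 0.3f, 0.1f);  // voxel_size_x / voxel_size_y / voxel_size_z [m]
  filter.filter(output);
}
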
"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

These implementations inherit the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#note-parameters","title":"Note Parameters","text":"

These implementations inherit the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#core-parameters","title":"Core Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#approximate-downsample-filter_1","title":"Approximate Downsample Filter","text":"Name Type Default Value Description voxel_size_x double 0.3 voxel size x [m] voxel_size_y double 0.3 voxel size y [m] voxel_size_z double 0.1 voxel size z [m]"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#random-downsample-filter_1","title":"Random Downsample Filter","text":"Name Type Default Value Description sample_num int 1500 number of indices to be sampled"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#voxel-grid-downsample-filter_1","title":"Voxel Grid Downsample Filter","text":"Name Type Default Value Description voxel_size_x double 0.3 voxel size x [m] voxel_size_y double 0.3 voxel size y [m] voxel_size_z double 0.1 voxel size z [m]"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/","title":"dual_return_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as fog and rain and publish visibility as a diagnostic topic.

"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This node can remove rain and fog by considering the light reflected from objects in two stages according to the attenuation factor. The dual_return_outlier_filter is so named because it removes noise using data that contains two types of return values separated by the attenuation factor, as shown in the figure below.

Therefore, in order to use this node, the sensor driver must publish custom data including return_type. Please refer to the PointXYZIRADRT data structure.

Another feature of this node is that it publishes visibility as a diagnostic topic. With this function, for example, in heavy rain, the sensing module can notify that its processing performance has reached its limit, which can help ensure the safety of the vehicle.

In some complicated road scenes, normal objects such as plants, leaves, and plastic nets also reflect light in two stages, causing the visibility value to drop even in fine weather. To deal with this, optional region of interest (ROI) settings are provided.

  1. Fixed_xyz_ROI mode: visibility estimation based on the weak points in a fixed cuboid region surrounding the ego-vehicle, defined by x, y, z in the base_link frame.
  2. Fixed_azimuth_ROI mode: visibility estimation based on the weak points in a fixed region surrounding the ego-vehicle, defined by azimuth and distance from the LiDAR perspective.

When selecting either of the two fixed ROI modes, the range of weak points shrinks and the sensitivity of the visibility estimate decreases, so a trade-off between weak_first_local_noise_threshold and visibility_threshold is needed.

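A minimal sketch of the two ROI checks (the function and variable names are illustrative assumptions; the parameter names follow the core-parameter table below):

// Fixed_xyz_ROI: a cuboid around the ego-vehicle in the base_link frame.
bool isInsideXyzRoi(
  double x, double y, double z, double x_min, double x_max, double y_min, double y_max,
  double z_min, double z_max)
{
  return x_min <= x && x <= x_max && y_min <= y && y <= y_max && z_min <= z && z <= z_max;
}

// Fixed_azimuth_ROI: azimuth and distance limits from the LiDAR perspective.
bool isInsideAzimuthRoi(
  double azimuth_deg, double distance, double min_azimuth_deg, double max_azimuth_deg,
  double max_distance)
{
  return min_azimuth_deg <= azimuth_deg && azimuth_deg <= max_azimuth_deg &&
         distance <= max_distance;
}
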
The figure below describes how the node works.

The picture below shows the ROI options.

"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#output","title":"Output","text":"Name Type Description /dual_return_outlier_filter/frequency_image sensor_msgs::msg::Image The histogram image that represent visibility /dual_return_outlier_filter/visibility tier4_debug_msgs::msg::Float32Stamped A representation of visibility with a value from 0 to 1 /dual_return_outlier_filter/pointcloud_noise sensor_msgs::msg::Pointcloud2 The pointcloud removed as noise"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description vertical_bins int The number of vertical bin for visibility histogram max_azimuth_diff float Threshold for ring_outlier_filter weak_first_distance_ratio double Threshold for ring_outlier_filter general_distance_ratio double Threshold for ring_outlier_filter weak_first_local_noise_threshold int The parameter for determining whether it is noise visibility_error_threshold float When the percentage of white pixels in the binary histogram falls below this parameter the diagnostic status becomes ERR visibility_warn_threshold float When the percentage of white pixels in the binary histogram falls below this parameter the diagnostic status becomes WARN roi_mode string The name of ROI mode for switching min_azimuth_deg float The left limit of azimuth for Fixed_azimuth_ROI mode max_azimuth_deg float The right limit of azimuth for Fixed_azimuth_ROI mode max_distance float The limit distance for for Fixed_azimuth_ROI mode x_max float Maximum of x for Fixed_xyz_ROI mode x_min float Minimum of x for Fixed_xyz_ROI mode y_max float Maximum of y for Fixed_xyz_ROI mode y_min float Minimum of y for Fixed_xyz_ROI mode z_max float Maximum of z for Fixed_xyz_ROI mode z_min float Minimum of z for Fixed_xyz_ROI mode"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This filter is not recommended for use yet, as it is still under development. Input data must be of the PointXYZIRADRT type and include return_type.

"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#referencesexternal-links","title":"References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/","title":"outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#purpose","title":"Purpose","text":"

The outlier_filter is a package for filtering outlier points.

"},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Filter Name Description Detail radius search 2d outlier filter A method of removing point cloud noise based on the number of points existing within a certain radius link ring outlier filter A method of operating scan in chronological order and removing noise based on the rate of change in the distance between points link voxel grid outlier filter A method of removing point cloud noise based on the number of points existing within a voxel link dual return outlier filter (under development) A method of removing rain and fog by considering the light reflected from the object in two stages according to the attenuation factor. link"},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/","title":"passthrough_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#purpose","title":"Purpose","text":"

The passthrough_filter is a node that removes points outside of a given range in a given field (e.g. x, y, z, intensity, ring, etc.).

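A minimal sketch using pcl::PassThrough; the mapping of the node's parameters onto the PCL setters is an assumption for illustration, and "z" is used as the field because it exists in the standard pcl::PointXYZ type, whereas the node's default field "ring" requires a custom point type:

#include <pcl/filters/passthrough.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Keep only points whose field value lies inside [min, max].
void passthroughFilter(
  const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & input,
  pcl::PointCloud<pcl::PointXYZ> & output)
{
  pcl::PassThrough<pcl::PointXYZ> filter;
  filter.setInputCloud(input);
  filter.setFilterFieldName("z");       // filter_field_name
  filter.setFilterLimits(0.0, 127.0);   // filter_limit_min / filter_limit_max
  filter.setNegative(false);            // filter_limit_negative
  filter.setKeepOrganized(false);       // keep_organized
  filter.filter(output);
}
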
"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/indices pcl_msgs::msg::Indices reference indices"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_limit_min int 0 minimum allowed field value filter_limit_max int 127 maximum allowed field value filter_field_name string \"ring\" filtering field name keep_organized bool false flag to keep indices structure filter_limit_negative bool false flag to return whether the data is inside limit or not"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/","title":"pointcloud_accumulator","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#purpose","title":"Purpose","text":"

The pointcloud_accumulator is a node that accumulates pointclouds for a given amount of time.

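A minimal sketch of the accumulation logic with the two core parameters below (a deque-based simplification, not the node's implementation; Cloud stands in for sensor_msgs::msg::PointCloud2):

#include <cstddef>
#include <deque>
#include <utility>

// Keep clouds for accumulation_time_sec, bounded by pointcloud_buffer_size.
template <class Cloud>
class CloudBuffer
{
public:
  CloudBuffer(double accumulation_time_sec, std::size_t buffer_size)
  : accumulation_time_sec_(accumulation_time_sec), buffer_size_(buffer_size) {}

  void add(double stamp_sec, Cloud cloud)
  {
    buffer_.emplace_back(stamp_sec, std::move(cloud));
    while (buffer_.size() > buffer_size_ ||
           stamp_sec - buffer_.front().first > accumulation_time_sec_) {
      buffer_.pop_front();  // drop clouds that are too old or overflow the buffer
    }
  }

private:
  double accumulation_time_sec_;
  std::size_t buffer_size_;
  std::deque<std::pair<double, Cloud>> buffer_;  // (timestamp, cloud) pairs
};
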
"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description accumulation_time_sec double 2.0 accumulation period [s] pointcloud_buffer_size int 50 buffer size"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/","title":"radius_search_2d_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as insects and rain.

"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

RadiusOutlierRemoval filter which removes all indices in its input cloud that don’t have at least some number of neighbors within a certain range.

The description above is quoted from [1]. pcl::search::KdTree [2] is used to implement this package.

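A minimal sketch using pcl::RadiusOutlierRemoval; zeroing the z coordinate first approximates the 2D (x-y) search this package performs, which is an illustrative assumption rather than the actual implementation:

#include <pcl/filters/radius_outlier_removal.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Remove points that have fewer than min_neighbors neighbors within search_radius.
void radiusSearch2dFilter(
  const pcl::PointCloud<pcl::PointXYZ>::Ptr & cloud,
  pcl::PointCloud<pcl::PointXYZ> & output, double search_radius, int min_neighbors)
{
  for (auto & p : cloud->points) {
    p.z = 0.0f;  // project onto the x-y plane so the neighbor search is effectively 2D
  }
  pcl::RadiusOutlierRemoval<pcl::PointXYZ> filter;
  filter.setInputCloud(cloud);
  filter.setRadiusSearch(search_radius);
  filter.setMinNeighborsInRadius(min_neighbors);
  filter.filter(output);
}
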
"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description min_neighbors int If points in the circle centered on reference point is less than min_neighbors, a reference point is judged as outlier search_radius double Searching number of points included in search_radius"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

Since the method counts the number of points contained in a cylinder whose axis is aligned with the direction of gravity, it is a prerequisite that the ground has already been removed.

"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#referencesexternal-links","title":"References/External links","text":"

[1] https://pcl.readthedocs.io/projects/tutorials/en/latest/remove_outliers.html

[2] https://pcl.readthedocs.io/projects/tutorials/en/latest/kdtree_search.html#kdtree-search

"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/","title":"ring_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as insects and rain.

"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

A method that processes the scan in chronological order and removes noise based on the rate of change of the distance between points

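As a rough sketch, the continuity check between two consecutive points on the same ring can be written as below; this is an assumption-level simplification (the actual filter also gates on the azimuth difference and applies the object_length_threshold and num_points_threshold parameters listed below):

#include <algorithm>

// Two consecutive points on a ring are treated as continuous when the ratio
// of their ranges stays below distance_ratio (default 1.03); short runs of
// discontinuous points are removed as noise.
bool isContinuous(double range1, double range2, double distance_ratio)
{
  return std::max(range1, range2) / std::min(range1, range2) < distance_ratio;
}
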
"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description distance_ratio double 1.03 object_length_threshold double 0.1 num_points_threshold int 4"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

It is a prerequisite to input a scan point cloud in chronological order. In this repository, it is defined as the structure below (please refer to PointXYZIRADT).

"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/","title":"vector_map_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#purpose","title":"Purpose","text":"

The vector_map_filter is a node that removes points outside of the lanes by using a vector map.

"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#input","title":"Input","text":"Name Type Description ~/input/points sensor_msgs::msg::PointCloud2 reference points ~/input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin vector map"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#output","title":"Output","text":"Name Type Description ~/output/points sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description voxel_size_x double 0.04 voxel size voxel_size_y double 0.04 voxel size"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/","title":"vector_map_inside_area_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#purpose","title":"Purpose","text":"

The vector_map_inside_area_filter is a node that removes points inside vector map areas whose type is given by a parameter.

"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class, so please also refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#input","title":"Input","text":"Name Type Description ~/input sensor_msgs::msg::PointCloud2 input points ~/input/vector_map autoware_auto_mapping_msgs::msg::HADMapBin vector map used for filtering points"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#output","title":"Output","text":"Name Type Description ~/output sensor_msgs::msg::PointCloud2 filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description polygon_type string polygon type to be filtered"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/","title":"voxel_grid_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#purpose","title":"Purpose","text":"

The purpose is to remove point cloud noise such as insects and rain.

"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This filter removes point cloud noise based on the number of points within each voxel. The radius_search_2d_outlier_filter is more accurate, but this method has the advantage of a lower calculation cost.

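A minimal sketch using pcl::VoxelGrid's minimum-points-per-voxel option with the default parameters below; note that this sketch outputs voxel centroids, while the actual node may keep the original points of surviving voxels, so it is an illustration rather than the implementation:

#include <pcl/filters/voxel_grid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Discard voxels that contain fewer than voxel_points_threshold points.
void voxelGridOutlierFilter(
  const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & input,
  pcl::PointCloud<pcl::PointXYZ> & output)
{
  pcl::VoxelGrid<pcl::PointXYZ> filter;
  filter.setInputCloud(input);
  filter.setLeafSize(0.3f, 0.3f, 0.1f);      // voxel_size_x / voxel_size_y / voxel_size_z [m]
  filter.setMinimumPointsNumberPerVoxel(2);  // voxel_points_threshold
  filter.filter(output);
}
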
"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#node-parameters","title":"Node Parameters","text":"

This implementation inherits the pointcloud_preprocessor::Filter class; please refer to the README.

"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description voxel_size_x double 0.3 the voxel size along x-axis [m] voxel_size_y double 0.3 the voxel size along y-axis [m] voxel_size_z double 0.1 the voxel size along z-axis [m] voxel_points_threshold int 2 the minimum number of points in each voxel"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/radar_scan_to_pointcloud2/","title":"radar_scan_to_pointcloud2","text":""},{"location":"sensing/radar_scan_to_pointcloud2/#radar_scan_to_pointcloud2_node","title":"radar_scan_to_pointcloud2_node","text":""},{"location":"sensing/radar_scan_to_pointcloud2/#input-topics","title":"Input topics","text":"Name Type Description input/radar radar_msgs::msg::RadarScan RadarScan"},{"location":"sensing/radar_scan_to_pointcloud2/#output-topics","title":"Output topics","text":"Name Type Description output/amplitude_pointcloud sensor_msgs::msg::PointCloud2 PointCloud2 radar pointcloud whose intensity is amplitude. output/doppler_pointcloud sensor_msgs::msg::PointCloud2 PointCloud2 radar pointcloud whose intensity is doppler velocity."},{"location":"sensing/radar_scan_to_pointcloud2/#parameters","title":"Parameters","text":"Name Type Description publish_amplitude_pointcloud bool Whether publish radar pointcloud whose intensity is amplitude. Default is true. publish_doppler_pointcloud bool Whether publish radar pointcloud whose intensity is doppler velocity. Default is false."},{"location":"sensing/radar_scan_to_pointcloud2/#how-to-launch","title":"How to launch","text":"
ros2 launch radar_scan_to_pointcloud2 radar_scan_to_pointcloud2.launch.xml\n
"},{"location":"sensing/radar_static_pointcloud_filter/","title":"radar_static_pointcloud_filter","text":""},{"location":"sensing/radar_static_pointcloud_filter/#radar_static_pointcloud_filter_node","title":"radar_static_pointcloud_filter_node","text":"

Extracts static/dynamic radar pointclouds by using Doppler velocity and ego motion. The calculation cost is O(n), where n is the number of radar returns.

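A minimal sketch of the classification idea; the projection of the ego velocity onto the ray and the 2-sigma gate are assumptions for illustration, not the node's actual formula:

#include <cmath>

// A return is classified as static when its measured Doppler velocity matches
// the Doppler velocity a static object would produce given the ego motion.
bool isStaticReturn(
  double measured_doppler_velocity, double ego_longitudinal_velocity, double azimuth_rad,
  double doppler_velocity_sd, double gate_sigma = 2.0)
{
  // Expected Doppler of a static object: ego velocity projected onto the ray.
  const double expected = -ego_longitudinal_velocity * std::cos(azimuth_rad);
  return std::fabs(measured_doppler_velocity - expected) < gate_sigma * doppler_velocity_sd;
}
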
"},{"location":"sensing/radar_static_pointcloud_filter/#input-topics","title":"Input topics","text":"Name Type Description input/radar radar_msgs::msg::RadarScan RadarScan input/odometry nav_msgs::msg::Odometry Ego vehicle odometry topic"},{"location":"sensing/radar_static_pointcloud_filter/#output-topics","title":"Output topics","text":"Name Type Description output/static_radar_scan radar_msgs::msg::RadarScan static radar pointcloud output/dynamic_radar_scan radar_msgs::msg::RadarScan dynamic radar pointcloud"},{"location":"sensing/radar_static_pointcloud_filter/#parameters","title":"Parameters","text":"Name Type Description doppler_velocity_sd double Standard deviation for radar doppler velocity. [m/s]"},{"location":"sensing/radar_static_pointcloud_filter/#how-to-launch","title":"How to launch","text":"
ros2 launch radar_static_pointcloud_filter radar_static_pointcloud_filter.launch.xml\n
"},{"location":"sensing/radar_static_pointcloud_filter/#algorithm","title":"Algorithm","text":""},{"location":"sensing/radar_threshold_filter/","title":"radar_threshold_filter","text":""},{"location":"sensing/radar_threshold_filter/#radar_threshold_filter_node","title":"radar_threshold_filter_node","text":"

Removes noise from radar returns by thresholding.

The calculation cost is O(n), where n is the number of radar returns.

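A minimal sketch of one of the filters (amplitude); the function name is an assumption, and the range, azimuth, and z filters in the parameter table below follow the same open-interval pattern:

// Keep a return only if it passes the amplitude filter (when enabled).
bool passesAmplitudeFilter(
  bool is_amplitude_filter, double amplitude, double amplitude_min, double amplitude_max)
{
  if (!is_amplitude_filter) {
    return true;  // filter disabled: keep the return
  }
  // publish only when amplitude_min < amplitude < amplitude_max
  return amplitude_min < amplitude && amplitude < amplitude_max;
}
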
"},{"location":"sensing/radar_threshold_filter/#input-topics","title":"Input topics","text":"Name Type Description input/radar radar_msgs/msg/RadarScan.msg Radar pointcloud data"},{"location":"sensing/radar_threshold_filter/#output-topics","title":"Output topics","text":"Name Type Description output/radar radar_msgs/msg/RadarScan.msg Filtered radar pointcloud"},{"location":"sensing/radar_threshold_filter/#parameters","title":"Parameters","text":" Name Type Description is_amplitude_filter bool if this parameter is true, apply amplitude filter (publish amplitude_min < amplitude < amplitude_max) amplitude_min double [dBm^2] amplitude_max double [dBm^2] is_range_filter bool if this parameter is true, apply range filter (publish range_min < range < range_max) range_min double [m] range_max double [m] is_azimuth_filter bool if this parameter is true, apply angle filter (publish azimuth_min < range < azimuth_max) azimuth_min double [rad] azimuth_max double [rad] is_z_filter bool if this parameter is true, apply z position filter (publish z_min < z < z_max) z_min double [m] z_max double [m]"},{"location":"sensing/radar_threshold_filter/#how-to-launch","title":"How to launch","text":"
ros2 launch radar_threshold_filter radar_threshold_filter.launch.xml\n
"},{"location":"sensing/tier4_pcl_extensions/","title":"tier4_pcl_extensions","text":""},{"location":"sensing/tier4_pcl_extensions/#purpose","title":"Purpose","text":"

tier4_pcl_extensions is a PCL extension library. The voxel grid filter in this package uses a different algorithm from the original one.

"},{"location":"sensing/tier4_pcl_extensions/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"sensing/tier4_pcl_extensions/#original-algorithm-1","title":"Original Algorithm [1]","text":"
  1. create a 3D voxel grid over the input pointcloud data
  2. calculate centroid in each voxel
  3. all the points are approximated with their centroid
"},{"location":"sensing/tier4_pcl_extensions/#extended-algorithm","title":"Extended Algorithm","text":"
  1. create a 3D voxel grid over the input pointcloud data
  2. calculate centroid in each voxel
  3. all the points are approximated with the closest point to their centroid (see the sketch below)
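
A minimal sketch of the difference in step 3, assuming the points of one voxel are already collected (the Point struct and function name are illustrative):

#include <limits>
#include <vector>

struct Point { double x, y, z; };

// Extended algorithm: return the measured point closest to the voxel centroid
// instead of the centroid itself. Assumes voxel_points is non-empty.
Point nearestToCentroid(const std::vector<Point> & voxel_points)
{
  Point c{0.0, 0.0, 0.0};
  for (const auto & p : voxel_points) { c.x += p.x; c.y += p.y; c.z += p.z; }
  const double n = static_cast<double>(voxel_points.size());
  c.x /= n; c.y /= n; c.z /= n;

  Point best = voxel_points.front();
  double best_d2 = std::numeric_limits<double>::max();
  for (const auto & p : voxel_points) {
    const double d2 =
      (p.x - c.x) * (p.x - c.x) + (p.y - c.y) * (p.y - c.y) + (p.z - c.z) * (p.z - c.z);
    if (d2 < best_d2) { best_d2 = d2; best = p; }
  }
  return best;
}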
"},{"location":"sensing/tier4_pcl_extensions/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/tier4_pcl_extensions/#parameters","title":"Parameters","text":""},{"location":"sensing/tier4_pcl_extensions/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/tier4_pcl_extensions/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/tier4_pcl_extensions/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/tier4_pcl_extensions/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"

[1] https://pointclouds.org/documentation/tutorials/voxel_grid.html

"},{"location":"sensing/tier4_pcl_extensions/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/vehicle_velocity_converter/","title":"vehicle_velocity_converter","text":""},{"location":"sensing/vehicle_velocity_converter/#purpose","title":"Purpose","text":"

This package converts the autoware_auto_vehicle_msgs::msg::VehicleReport message to geometry_msgs::msg::TwistWithCovarianceStamped for the gyro odometer node.

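A minimal sketch of the conversion; the input is passed in as a plain double because the exact VehicleReport field name is not shown here, and the covariance indices follow the standard 6x6 row-major twist covariance layout:

#include <string>
#include <geometry_msgs/msg/twist_with_covariance_stamped.hpp>

// Scale the reported longitudinal velocity and fill the diagonal covariance.
geometry_msgs::msg::TwistWithCovarianceStamped toTwistWithCovariance(
  double longitudinal_velocity, double speed_scale_factor, double velocity_stddev_xx,
  double angular_velocity_stddev_zz, const std::string & frame_id)
{
  geometry_msgs::msg::TwistWithCovarianceStamped msg;
  msg.header.frame_id = frame_id;
  msg.twist.twist.linear.x = longitudinal_velocity * speed_scale_factor;
  msg.twist.covariance[0] = velocity_stddev_xx * velocity_stddev_xx;  // var(vx)
  msg.twist.covariance[35] =
    angular_velocity_stddev_zz * angular_velocity_stddev_zz;          // var(yaw rate)
  return msg;
}
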
"},{"location":"sensing/vehicle_velocity_converter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/vehicle_velocity_converter/#input","title":"Input","text":"Name Type Description velocity_status autoware_auto_vehicle_msgs::msg::VehicleReport vehicle velocity"},{"location":"sensing/vehicle_velocity_converter/#output","title":"Output","text":"Name Type Description twist_with_covariance geometry_msgs::msg::TwistWithCovarianceStamped twist with covariance converted from VehicleReport"},{"location":"sensing/vehicle_velocity_converter/#parameters","title":"Parameters","text":"Name Type Description speed_scale_factor double speed scale factor (ideal value is 1.0) frame_id string frame id for output message velocity_stddev_xx double standard deviation for vx angular_velocity_stddev_zz double standard deviation for yaw rate"},{"location":"simulator/dummy_perception_publisher/","title":"dummy_perception_publisher","text":""},{"location":"simulator/dummy_perception_publisher/#purpose","title":"Purpose","text":"

This node publishes the result of dummy detection using perception message types.

"},{"location":"simulator/dummy_perception_publisher/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/dummy_perception_publisher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/dummy_perception_publisher/#input","title":"Input","text":"Name Type Description /tf tf2_msgs/TFMessage TF (self-pose) input/object dummy_perception_publisher::msg::Object dummy detection objects"},{"location":"simulator/dummy_perception_publisher/#output","title":"Output","text":"Name Type Description output/dynamic_object tier4_perception_msgs::msg::DetectedObjectsWithFeature Publishes objects output/points_raw sensor_msgs::msg::PointCloud2 point cloud of objects"},{"location":"simulator/dummy_perception_publisher/#parameters","title":"Parameters","text":"Name Type Default Value Explanation visible_range double 100.0 sensor visible range [m] detection_successful_rate double 0.8 sensor detection rate. (min) 0.0 - 1.0(max) enable_ray_tracing bool true if True, use ray tracking use_object_recognition bool true if True, publish objects topic"},{"location":"simulator/dummy_perception_publisher/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"simulator/dummy_perception_publisher/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"simulator/dummy_perception_publisher/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"simulator/fault_injection/","title":"fault_injection","text":""},{"location":"simulator/fault_injection/#purpose","title":"Purpose","text":"

This package is used to convert pseudo system faults from PSim to Diagnostics and notify Autoware. The component diagram is as follows:

"},{"location":"simulator/fault_injection/#test","title":"Test","text":"
source install/setup.bash\ncd fault_injection\nlaunch_test test/test_fault_injection_node.test.py\n
"},{"location":"simulator/fault_injection/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/fault_injection/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/fault_injection/#input","title":"Input","text":"Name Type Description ~/input/simulation_events tier4_simulation_msgs::msg::SimulationEvents simulation events"},{"location":"simulator/fault_injection/#output","title":"Output","text":"

None.

"},{"location":"simulator/fault_injection/#parameters","title":"Parameters","text":"

None.

"},{"location":"simulator/fault_injection/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"simulator/fault_injection/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"simulator/fault_injection/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/","title":"simple_planning_simulator","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#purpose-use-cases","title":"Purpose / Use cases","text":"

This node simulates the vehicle motion for a vehicle command in 2D using a simple vehicle model.

"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#design","title":"Design","text":"

The purpose of this simulator is the integration test of the planning and control modules. It does not simulate sensing or perception; it is implemented in pure C++ only and works without a GPU.

"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#input","title":"input","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#output","title":"output","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#common-parameters","title":"Common Parameters","text":"Name Type Description Default value simulated_frame_id string set to the child_frame_id in output tf \"base_link\" origin_frame_id string set to the frame_id in output tf \"odom\" initialize_source string If \"ORIGIN\", the initial pose is set at (0,0,0). If \"INITIAL_POSE_TOPIC\", node will wait until the input/initialpose topic is published. \"INITIAL_POSE_TOPIC\" add_measurement_noise bool If true, the Gaussian noise is added to the simulated results. true pos_noise_stddev double Standard deviation for position noise 0.01 rpy_noise_stddev double Standard deviation for Euler angle noise 0.0001 vel_noise_stddev double Standard deviation for longitudinal velocity noise 0.0 angvel_noise_stddev double Standard deviation for angular velocity noise 0.0 steer_noise_stddev double Standard deviation for steering angle noise 0.0001"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#vehicle-model-parameters","title":"Vehicle Model Parameters","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#vehicle_model_type-options","title":"vehicle_model_type options","text":"

The IDEAL model moves ideally as commanded, while the DELAY model moves based on a first-order model with a time delay. STEER means the model receives a steering command. VEL means the model receives a target velocity command, while ACC means it receives a target acceleration command. The GEARED suffix means that the motion considers the gear command: the vehicle moves in only one direction, following the gear.

The table below shows which models correspond to what parameters. The model names are written in abbreviated form (e.g. IDEAL_STEER_VEL = I_ST_V).

Name Type Description I_ST_V I_ST_A I_ST_A_G D_ST_V D_ST_A D_ST_A_G Default value unit acc_time_delay double dead time for the acceleration input x x x x o o 0.1 [s] steer_time_delay double dead time for the steering input x x x o o o 0.24 [s] vel_time_delay double dead time for the velocity input x x x o x x 0.25 [s] acc_time_constant double time constant of the 1st-order acceleration dynamics x x x x o o 0.1 [s] steer_time_constant double time constant of the 1st-order steering dynamics x x x o o o 0.27 [s] vel_time_constant double time constant of the 1st-order velocity dynamics x x x o x x 0.5 [s] vel_lim double limit of velocity x x x o o o 50.0 [m/s] vel_rate_lim double limit of acceleration x x x o o o 7.0 [m/ss] steer_lim double limit of steering angle x x x o o o 1.0 [rad] steer_rate_lim double limit of steering angle change rate x x x o o o 5.0 [rad/s]

Note: In the delay models, the steering/velocity/acceleration dynamics are modeled by a first-order system with a dead time. The time constant is defined as the time it takes for the step response to rise to 63% of its final value. The dead time is a delay in the response to a control input.

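A minimal sketch of the first-order-plus-dead-time update described in the note above, assuming forward-Euler integration with a fixed step and a FIFO buffer for the dead time (an illustration, not the simulator's implementation):

#include <cstddef>
#include <deque>

// First-order lag with dead time: the input is delayed by dead_time seconds,
// then the state approaches it with the given time constant (63% rise time).
class FirstOrderWithDeadTime
{
public:
  FirstOrderWithDeadTime(double time_constant, double dead_time, double dt)
  : time_constant_(time_constant), dt_(dt),
    buffer_(static_cast<std::size_t>(dead_time / dt) + 1, 0.0) {}

  double update(double input)
  {
    buffer_.push_back(input);
    const double delayed_input = buffer_.front();
    buffer_.pop_front();
    state_ += (delayed_input - state_) / time_constant_ * dt_;  // Euler step
    return state_;
  }

private:
  double time_constant_;
  double dt_;
  double state_{0.0};
  std::deque<double> buffer_;  // FIFO realizing the dead time
};
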
"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#default-tf-configuration","title":"Default TF configuration","text":"

Since the vehicle outputs the odom->base_link tf, this simulator outputs the tf with the same frame_id configuration. In simple_planning_simulator.launch.py, the node that outputs the map->odom tf, which is usually estimated by the localization module (e.g. NDT), is launched as well. Since the tf output by this simulator module is an ideal value, odom->map will always be 0.

"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#error-detection-and-handling","title":"Error detection and handling","text":"

The only validation performed on the inputs is a check for a valid vehicle model type.

"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#security-considerations","title":"Security considerations","text":""},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#references-external-links","title":"References / External links","text":"

This was originally developed in Autoware.AI. See the link below.

https://github.com/Autoware-AI/simulation/tree/master/wf_simulator

"},{"location":"simulator/simple_planning_simulator/design/simple_planning_simulator-design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"system/bluetooth_monitor/","title":"bluetooth_monitor","text":""},{"location":"system/bluetooth_monitor/#description","title":"Description","text":"

This node monitors a Bluetooth connection to a wireless device by using L2ping. L2ping generates a PING echo command on the Bluetooth L2CAP layer, and it is able to receive and check the echo response from a wireless device.

"},{"location":"system/bluetooth_monitor/#block-diagram","title":"Block diagram","text":"

L2ping is only allowed for root by default, so this package provides the following approach to minimize security risks as much as possible:

"},{"location":"system/bluetooth_monitor/#output","title":"Output","text":""},{"location":"system/bluetooth_monitor/#bluetooth_monitor-bluetooth_connection","title":"bluetooth_monitor: bluetooth_connection","text":"

[summary]

level message OK OK WARN RTT warning ERROR Lost Function error

[values]

key value (example) Device [0-9]: Status OK / RTT warning / Verify error / Lost / Ping rejected / Function error Device [0-9]: Name Wireless Controller Device [0-9]: Manufacturer MediaTek, Inc. Device [0-9]: Address AA:BB:CC:DD:EE:FF Device [0-9]: RTT 0.00ms key (example) value (example) Device [0-9]: connect No such file or directory"},{"location":"system/bluetooth_monitor/#parameters","title":"Parameters","text":"Name Type Default Value Explanation port int 7640 Port number to connect to L2ping service. timeout int 5 Wait timeout seconds for the response. rtt_warn float 0.00 RTT(Round-Trip Time) to generate warn. addresses string * List of bluetooth address of wireless devices to monitor. "},{"location":"system/bluetooth_monitor/#instructions-before-starting","title":"Instructions before starting","text":"
  1. Assign capability to l2ping_service since L2ping requires cap_net_raw+eip capability.

    sudo setcap 'cap_net_raw+eip' ./build/bluetooth_monitor/l2ping_service\n
  2. Run l2ping_service and bluetooth_monitor.

    ./build/bluetooth_monitor/l2ping_service\nros2 launch bluetooth_monitor bluetooth_monitor.launch.xml\n
"},{"location":"system/bluetooth_monitor/#known-limitations-and-issues","title":"Known limitations and issues","text":"

None.

"},{"location":"system/default_ad_api/","title":"default_ad_api","text":"

This package is a default implementation of the AD API.

"},{"location":"system/default_ad_api/document/autoware-state/","title":"Autoware state compatibility","text":""},{"location":"system/default_ad_api/document/autoware-state/#overview","title":"Overview","text":"

Since /autoware/state was so widely used, default_ad_api creates it from the states of AD API for backwards compatibility. The diagnostic checks that ad_service_state_monitor used to perform have been replaced by component_state_monitor. The service /autoware/shutdown to change autoware state to finalizing is also supported for compatibility.

"},{"location":"system/default_ad_api/document/autoware-state/#conversion","title":"Conversion","text":"

This is the correspondence between the AD API states and the autoware states. The launch state is data that the default_ad_api node holds internally.

"},{"location":"system/default_ad_api/document/fail-safe/","title":"Fail-safe API","text":""},{"location":"system/default_ad_api/document/fail-safe/#overview","title":"Overview","text":"

The fail-safe API simply relays the MRM state. See the autoware-documentation for AD API specifications.

"},{"location":"system/default_ad_api/document/interface/","title":"Interface API","text":""},{"location":"system/default_ad_api/document/interface/#overview","title":"Overview","text":"

The interface API simply returns a version number. See the autoware-documentation for AD API specifications.

"},{"location":"system/default_ad_api/document/localization/","title":"Localization API","text":""},{"location":"system/default_ad_api/document/localization/#overview","title":"Overview","text":"

Unifies the localization initialization method into a service. The topic /initialpose from rviz is now only subscribed to by the adapter node and converted to an API call. This API call is forwarded to the pose initializer node so that it can centralize the state of pose initialization. For other nodes that require the initial pose, the pose initializer node publishes it as /initialpose3d. See the autoware-documentation for AD API specifications.

"},{"location":"system/default_ad_api/document/motion/","title":"Motion API","text":""},{"location":"system/default_ad_api/document/motion/#overview","title":"Overview","text":"

Provides a hook for when the vehicle starts. It is typically used for announcements that call attention to the surroundings. A pause function is added to the vehicle_cmd_gate, and the API controls it based on whether the vehicle is stopped and whether a start has been requested. See the autoware-documentation for AD API specifications.

"},{"location":"system/default_ad_api/document/motion/#states","title":"States","text":"

The implementation has more detailed state transitions to manage pause state synchronization. The correspondence with the AD API state is as follows.

"},{"location":"system/default_ad_api/document/operation-mode/","title":"Operation mode API","text":""},{"location":"system/default_ad_api/document/operation-mode/#overview","title":"Overview","text":"

Introduces the operation mode. It abstractly handles autoware engage, gate_mode, external_cmd_selector, and control_mode. When the mode is changed, it enters an in-transition state, and if the transition completion condition for that mode is not satisfied, the mode is returned to the previous one. Also, currently, the only condition for a mode change is WaitingForEngage in /autoware/state, and the engage state is shared between modes. After the operation mode is introduced, each mode will have its own transition-available flag. See the autoware-documentation for AD API specifications.

"},{"location":"system/default_ad_api/document/operation-mode/#states","title":"States","text":"

The operation mode has the following state transitions. Disabling autoware control, or changing the operation mode while autoware control is disabled, can be done immediately. Otherwise, enabling autoware control or changing the operation mode while autoware control is enabled puts the mode into a transition state. If the mode change completion condition is not satisfied within the timeout while in the transition state, the mode returns to the previous one.

"},{"location":"system/default_ad_api/document/operation-mode/#compatibility","title":"Compatibility","text":"

Ideally, vehicle_cmd_gate and external_cmd_selector should be merged so that the operation mode can be handled directly. However, currently the operation mode transition manager performs the following conversions to match the implementation. The transition manager monitors each topic in the previous interface and synchronizes the operation mode when it changes. When the operation mode is changed with the new interface, the transition manager disables synchronization and changes the operation mode using the previous interface.

"},{"location":"system/default_ad_api/document/routing/","title":"Routing API","text":""},{"location":"system/default_ad_api/document/routing/#overview","title":"Overview","text":"

Unifies the route setting method into a service. This API supports two waypoint formats: poses and lanelet segments. The goal and checkpoint topics from rviz are only subscribed to by the adapter node and converted to API calls. These API calls are forwarded to the mission planner node so that it can centralize the state of routing. For other nodes that require the route, the mission planner node publishes it as /planning/mission_planning/route. See the autoware-documentation for AD API specifications.

"},{"location":"system/default_ad_api_helpers/ad_api_adaptors/","title":"ad_api_adaptors","text":""},{"location":"system/default_ad_api_helpers/ad_api_adaptors/#initial_pose_adaptor","title":"initial_pose_adaptor","text":"

This node makes it easy to use the localization AD API from RViz. When an initial pose topic is received, it calls the localization initialize API. This node depends on the map height fitter library. See here for more details.

Interface Local Name Global Name Description Subscription initialpose /initialpose The pose for localization initialization. Client fit_map_height /localization/util/fit_map_height To fix initial pose to map height Client - /api/localization/initialize The localization initialize API."},{"location":"system/default_ad_api_helpers/ad_api_adaptors/#routing_adaptor","title":"routing_adaptor","text":"

This node makes it easy to use the routing AD API from RViz. When a goal pose topic is received, it resets the waypoints and calls the API. When a waypoint pose topic is received, it appends the waypoint to the end of the waypoints and calls the API. The clear API is called automatically before setting the route.

Interface Local Name Global Name Description Subscription ~/input/goal /planning/mission_planning/goal The goal pose of route. Subscription ~/input/waypoint /planning/mission_planning/checkpoint The waypoint pose of route. Client - /api/routing/clear_route The route clear API. Client - /api/routing/set_route_points The route points set API."},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/","title":"automatic_pose_initializer","text":""},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/#automatic_pose_initializer_1","title":"automatic_pose_initializer","text":"

This node calls the localization initialize API when the localization initialization state is uninitialized. Since the API uses the GNSS pose when no pose is specified, initialization using GNSS can be performed automatically.

Interface Local Name Global Name Description Subscription - /api/localization/initialization_state The localization initialization state API. Client - /api/localization/initialize The localization initialize API."},{"location":"system/dummy_diag_publisher/","title":"dummy_diag_publisher","text":""},{"location":"system/dummy_diag_publisher/#purpose","title":"Purpose","text":"

This package outputs dummy diagnostic data for debugging and development.

"},{"location":"system/dummy_diag_publisher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/dummy_diag_publisher/#outputs","title":"Outputs","text":"Name Type Description /diagnostics diagnostic_msgs::msgs::DiagnosticArray Diagnostics outputs"},{"location":"system/dummy_diag_publisher/#parameters","title":"Parameters","text":""},{"location":"system/dummy_diag_publisher/#node-parameters","title":"Node Parameters","text":"

The parameter DIAGNOSTIC_NAME must be a name that exists in the parameter YAML file. If the parameter status is given from the command line, the parameter is_active is automatically set to true.

Name Type Default Value Explanation Reconfigurable update_rate int 10 Timer callback period [Hz] false DIAGNOSTIC_NAME.is_active bool true Force update or not true DIAGNOSTIC_NAME.status string \"OK\" diag status set by dummy diag publisher true"},{"location":"system/dummy_diag_publisher/#yaml-format-for-dummy_diag_publisher","title":"YAML format for dummy_diag_publisher","text":"

If the value is default, the default value will be set.

Key Type Default Value Explanation required_diags.DIAGNOSTIC_NAME.is_active bool true Force update or not required_diags.DIAGNOSTIC_NAME.status string \"OK\" diag status set by dummy diag publisher"},{"location":"system/dummy_diag_publisher/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/dummy_diag_publisher/#usage","title":"Usage","text":""},{"location":"system/dummy_diag_publisher/#launch","title":"launch","text":"
ros2 launch dummy_diag_publisher dummy_diag_publisher.launch.xml\n
"},{"location":"system/dummy_diag_publisher/#reconfigure","title":"reconfigure","text":"
ros2 param set /dummy_diag_publisher velodyne_connection.status \"Warn\"\nros2 param set /dummy_diag_publisher velodyne_connection.is_active true\n
"},{"location":"system/dummy_infrastructure/","title":"dummy_infrastructure","text":"

This is a debug node for infrastructure communication.

"},{"location":"system/dummy_infrastructure/#usage","title":"Usage","text":"
ros2 launch dummy_infrastructure dummy_infrastructure.launch.xml\nros2 run rqt_reconfigure rqt_reconfigure\n
"},{"location":"system/dummy_infrastructure/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/dummy_infrastructure/#inputs","title":"Inputs","text":"Name Type Description ~/input/command_array tier4_v2x_msgs::msg::InfrastructureCommandArray Infrastructure command"},{"location":"system/dummy_infrastructure/#outputs","title":"Outputs","text":"Name Type Description ~/output/state_array tier4_v2x_msgs::msg::VirtualTrafficLightStateArray Virtual traffic light array"},{"location":"system/dummy_infrastructure/#parameters","title":"Parameters","text":""},{"location":"system/dummy_infrastructure/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate int 10 Timer callback period [Hz] use_first_command bool true Consider instrument id or not instrument_id string `` Used as command id approval bool false set approval filed to ros param is_finalized bool false Stop at stop_line if finalization isn't completed"},{"location":"system/dummy_infrastructure/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/emergency_handler/","title":"emergency_handler","text":""},{"location":"system/emergency_handler/#purpose","title":"Purpose","text":"

Emergency Handler is a node that selects the proper MRM based on the system failure state contained in HazardStatus.
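
As an illustration, the node and core parameters documented below could be collected in a parameter YAML like the following sketch (values are the documented defaults; the shipped config may differ):

/**:\n  ros__parameters:\n    update_rate: 10\n    timeout_hazard_status: 0.5\n    use_parking_after_stopped: false\n    turning_hazard_on:\n      emergency: true\n    use_comfortable_stop: false\n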

"},{"location":"system/emergency_handler/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/emergency_handler/#state-transitions","title":"State Transitions","text":""},{"location":"system/emergency_handler/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/emergency_handler/#input","title":"Input","text":"Name Type Description /system/emergency/hazard_status autoware_auto_system_msgs::msg::HazardStatusStamped Used to select proper MRM from system failure state contained in HazardStatus /control/vehicle_cmd autoware_auto_control_msgs::msg::AckermannControlCommand Used as reference when generate Emergency Control Command /localization/kinematic_state nav_msgs::msg::Odometry Used to decide whether vehicle is stopped or not /vehicle/status/control_mode autoware_auto_vehicle_msgs::msg::ControlModeReport Used to check vehicle mode: autonomous or manual /system/api/mrm/comfortable_stop/status tier4_system_msgs::msg::MrmBehaviorStatus Used to check if MRM comfortable stop operation is available /system/api/mrm/emergency_stop/status tier4_system_msgs::msg::MrmBehaviorStatus Used to check if MRM emergency stop operation is available"},{"location":"system/emergency_handler/#output","title":"Output","text":"Name Type Description /system/emergency/shift_cmd autoware_auto_vehicle_msgs::msg::GearCommand Required to execute proper MRM (send gear cmd) /system/emergency/hazard_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand Required to execute proper MRM (send turn signal cmd) /api/fail_safe/mrm_state autoware_adapi_v1_msgs::msg::MrmState Inform MRM execution state and selected MRM behavior /system/api/mrm/comfortable_stop/operate tier4_system_msgs::srv::OperateMrm Execution order for MRM comfortable stop /system/api/mrm/emergency_stop/operate tier4_system_msgs::srv::OperateMrm Execution order for MRM emergency stop"},{"location":"system/emergency_handler/#parameters","title":"Parameters","text":""},{"location":"system/emergency_handler/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate int 10 Timer callback period."},{"location":"system/emergency_handler/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation timeout_hazard_status double 0.5 If the input hazard_status topic cannot be received for more than timeout_hazard_status, vehicle will make an emergency stop. use_parking_after_stopped bool false If this parameter is true, it will publish PARKING shift command. turning_hazard_on.emergency bool true If this parameter is true, hazard lamps will be turned on during emergency state. use_comfortable_stop bool false If this parameter is true, operate comfortable stop when latent faults occur."},{"location":"system/emergency_handler/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/mrm_comfortable_stop_operator/","title":"mrm_comfortable_stop_operator","text":""},{"location":"system/mrm_comfortable_stop_operator/#purpose","title":"Purpose","text":"

MRM comfortable stop operator is a node that generates comfortable stop commands according to the comfortable stop MRM order.
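
A hypothetical parameter YAML based on the node and core parameters documented below (values are the documented defaults; the file layout is a sketch):

/**:\n  ros__parameters:\n    update_rate: 10\n    min_acceleration: -1.0\n    max_jerk: 0.3\n    min_jerk: -0.3\n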

"},{"location":"system/mrm_comfortable_stop_operator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_comfortable_stop_operator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_comfortable_stop_operator/#input","title":"Input","text":"Name Type Description ~/input/mrm/comfortable_stop/operate tier4_system_msgs::srv::OperateMrm MRM execution order"},{"location":"system/mrm_comfortable_stop_operator/#output","title":"Output","text":"Name Type Description ~/output/mrm/comfortable_stop/status tier4_system_msgs::msg::MrmBehaviorStatus MRM execution status ~/output/velocity_limit tier4_planning_msgs::msg::VelocityLimit Velocity limit command ~/output/velocity_limit/clear tier4_planning_msgs::msg::VelocityLimitClearCommand Velocity limit clear command"},{"location":"system/mrm_comfortable_stop_operator/#parameters","title":"Parameters","text":""},{"location":"system/mrm_comfortable_stop_operator/#node-parameters","title":"Node Parameters","text":"Name Type Default value Explanation update_rate int 10 Timer callback frequency [Hz]"},{"location":"system/mrm_comfortable_stop_operator/#core-parameters","title":"Core Parameters","text":"Name Type Default value Explanation min_acceleration double -1.0 Minimum acceleration for comfortable stop [m/s^2] max_jerk double 0.3 Maximum jerk for comfortable stop [m/s^3] min_jerk double -0.3 Minimum jerk for comfortable stop [m/s^3]"},{"location":"system/mrm_comfortable_stop_operator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/mrm_emergency_stop_operator/READEME/","title":"mrm_emergency_stop_operator","text":""},{"location":"system/mrm_emergency_stop_operator/READEME/#purpose","title":"Purpose","text":"

MRM emergency stop operator is a node that generates emergency stop commands according to the emergency stop MRM order.
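
Analogously to the comfortable stop operator, a parameter YAML sketch based on the node and core parameters documented below could look like this (values are the documented defaults):

/**:\n  ros__parameters:\n    update_rate: 30\n    target_acceleration: -2.5\n    target_jerk: -1.5\n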

"},{"location":"system/mrm_emergency_stop_operator/READEME/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_emergency_stop_operator/READEME/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_emergency_stop_operator/READEME/#input","title":"Input","text":"Name Type Description ~/input/mrm/emergency_stop/operate tier4_system_msgs::srv::OperateMrm MRM execution order ~/input/control/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand Control command output from the last node of the control component. Used for the initial value of the emergency stop command."},{"location":"system/mrm_emergency_stop_operator/READEME/#output","title":"Output","text":"Name Type Description ~/output/mrm/emergency_stop/status tier4_system_msgs::msg::MrmBehaviorStatus MRM execution status ~/output/mrm/emergency_stop/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand Emergency stop command"},{"location":"system/mrm_emergency_stop_operator/READEME/#parameters","title":"Parameters","text":""},{"location":"system/mrm_emergency_stop_operator/READEME/#node-parameters","title":"Node Parameters","text":"Name Type Default value Explanation update_rate int 30 Timer callback frequency [Hz]"},{"location":"system/mrm_emergency_stop_operator/READEME/#core-parameters","title":"Core Parameters","text":"Name Type Default value Explanation target_acceleration double -2.5 Target acceleration for emergency stop [m/s^2] target_jerk double -1.5 Target jerk for emergency stop [m/s^3]"},{"location":"system/mrm_emergency_stop_operator/READEME/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/system_error_monitor/","title":"system_error_monitor","text":""},{"location":"system/system_error_monitor/#purpose","title":"Purpose","text":"

Autoware Error Monitor has two main functions.

  1. It judges the system hazard level from the aggregated diagnostic information of each Autoware module.
  2. It enables automatic recovery from the emergency state.
"},{"location":"system/system_error_monitor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/system_error_monitor/#state-transition","title":"State Transition","text":""},{"location":"system/system_error_monitor/#updateemergencyholdingcondition-flow-chart","title":"updateEmergencyHoldingCondition Flow Chart","text":""},{"location":"system/system_error_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/system_error_monitor/#input","title":"Input","text":"Name Type Description /diagnostics_agg diagnostic_msgs::msg::DiagnosticArray Diagnostic information aggregated based diagnostic_aggregator setting is used to /autoware/state autoware_auto_system_msgs::msg::AutowareState Required to ignore error during Route, Planning and Finalizing. /control/current_gate_mode tier4_control_msgs::msg::GateMode Required to select the appropriate module from autonomous_driving or external_control /vehicle/control_mode autoware_auto_vehicle_msgs::msg::ControlModeReport Required to not hold emergency during manual driving"},{"location":"system/system_error_monitor/#output","title":"Output","text":"Name Type Description /system/emergency/hazard_status autoware_auto_system_msgs::msg::HazardStatusStamped HazardStatus contains system hazard level, emergency hold status and failure details /diagnostics_err diagnostic_msgs::msg::DiagnosticArray This has the same contents as HazardStatus. This is used for visualization"},{"location":"system/system_error_monitor/#parameters","title":"Parameters","text":""},{"location":"system/system_error_monitor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation ignore_missing_diagnostics bool false If this parameter is turned off, it will be ignored if required modules have not been received. add_leaf_diagnostics bool true Required to use children diagnostics. diag_timeout_sec double 1.0 (sec) If required diagnostic is not received for a diag_timeout_sec, the diagnostic state become STALE state. data_ready_timeout double 30.0 If input topics required for system_error_monitor are not available for data_ready_timeout seconds, autoware_state will translate to emergency state. data_heartbeat_timeout double 1.0 If input topics required for system_error_monitor are not no longer subscribed for data_heartbeat_timeout seconds, autoware_state will translate to emergency state."},{"location":"system/system_error_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation hazard_recovery_timeout double 5.0 The vehicle can recovery to normal driving if emergencies disappear during hazard_recovery_timeout. use_emergency_hold bool false If it is false, the vehicle will return to normal as soon as emergencies disappear. use_emergency_hold_in_manual_driving bool false If this parameter is turned off, emergencies will be ignored during manual driving. emergency_hazard_level int 2 If hazard_level is more than emergency_hazard_level, autoware state will translate to emergency state"},{"location":"system/system_error_monitor/#yaml-format-for-system_error_monitor","title":"YAML format for system_error_monitor","text":"

The parameter key should be filled with the hierarchical diagnostics output by diagnostic_aggregator. Parameters prefixed with required_modules.autonomous_driving are for autonomous driving; parameters prefixed with required_modules.remote_control are for remote control. If the value is given as default, the default value is set.
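
As an illustration, an entry can be written either with explicit fault levels or with the default shorthand; the diagnostic names below are placeholders, not actual diagnostic_aggregator output:

required_modules:\n  autonomous_driving:\n    some_diagnostic_name:\n      sf_at: \"none\"\n      lf_at: \"warn\"\n      spf_at: \"error\"\n      auto_recovery: \"true\"\n    another_diagnostic_name: default\n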

Key Type Default Value Explanation required_modules.autonomous_driving.DIAGNOSTIC_NAME.sf_at string \"none\" Diagnostic level where it becomes Safe Fault. Available options are \"none\", \"warn\", \"error\". required_modules.autonomous_driving.DIAGNOSTIC_NAME.lf_at string \"warn\" Diagnostic level where it becomes Latent Fault. Available options are \"none\", \"warn\", \"error\". required_modules.autonomous_driving.DIAGNOSTIC_NAME.spf_at string \"error\" Diagnostic level where it becomes Single Point Fault. Available options are \"none\", \"warn\", \"error\". required_modules.autonomous_driving.DIAGNOSTIC_NAME.auto_recovery string \"true\" Determines whether the system will automatically recover when it recovers from an error. required_modules.remote_control.DIAGNOSTIC_NAME.sf_at string \"none\" Diagnostic level where it becomes Safe Fault. Available options are \"none\", \"warn\", \"error\". required_modules.remote_control.DIAGNOSTIC_NAME.lf_at string \"warn\" Diagnostic level where it becomes Latent Fault. Available options are \"none\", \"warn\", \"error\". required_modules.remote_control.DIAGNOSTIC_NAME.spf_at string \"error\" Diagnostic level where it becomes Single Point Fault. Available options are \"none\", \"warn\", \"error\". required_modules.remote_control.DIAGNOSTIC_NAME.auto_recovery string \"true\" Determines whether the system will automatically recover when it recovers from an error."},{"location":"system/system_error_monitor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/system_monitor/","title":"System Monitor for Autoware","text":"

Further improvement of system monitor functionality for Autoware.

"},{"location":"system/system_monitor/#description","title":"Description","text":"

This package provides the following nodes for system monitoring:

"},{"location":"system/system_monitor/#supported-architecture","title":"Supported architecture","text":""},{"location":"system/system_monitor/#operation-confirmed-platform","title":"Operation confirmed platform","text":""},{"location":"system/system_monitor/#how-to-use","title":"How to use","text":"

Use colcon build and launch in the same way as other packages.

colcon build\nsource install/setup.bash\nros2 launch system_monitor system_monitor.launch.xml\n

The CPU and GPU monitoring methods differ depending on the platform. CMake automatically chooses the sources to be built according to the build environment. If you build this package on an Intel platform, the CPU monitor and GPU monitor for the Intel platform are built.

"},{"location":"system/system_monitor/#ros-topics-published-by-system-monitor","title":"ROS topics published by system monitor","text":"

Every topic is published at a 1-minute interval.

[Usage] \u2713\uff1aSupported, -\uff1aNot supported

Node Message Intel arm64(tegra) arm64(raspi) Notes CPU Monitor CPU Temperature \u2713 \u2713 \u2713 CPU Usage \u2713 \u2713 \u2713 CPU Load Average \u2713 \u2713 \u2713 CPU Thermal Throttling \u2713 - \u2713 CPU Frequency \u2713 \u2713 \u2713 Notification of frequency only, normally error not generated. HDD Monitor HDD Temperature \u2713 \u2713 \u2713 HDD PowerOnHours \u2713 \u2713 \u2713 HDD TotalDataWritten \u2713 \u2713 \u2713 HDD RecoveredError \u2713 \u2713 \u2713 HDD Usage \u2713 \u2713 \u2713 HDD ReadDataRate \u2713 \u2713 \u2713 HDD WriteDataRate \u2713 \u2713 \u2713 HDD ReadIOPS \u2713 \u2713 \u2713 HDD WriteIOPS \u2713 \u2713 \u2713 HDD Connection \u2713 \u2713 \u2713 Memory Monitor Memory Usage \u2713 \u2713 \u2713 Net Monitor Network Connection \u2713 \u2713 \u2713 Network Usage \u2713 \u2713 \u2713 Notification of usage only, normally error not generated. Network CRC Error \u2713 \u2713 \u2713 Warning occurs when the number of CRC errors in the period reaches the threshold value. The number of CRC errors that occur is the same as the value that can be confirmed with the ip command. IP Packet Reassembles Failed \u2713 \u2713 \u2713 NTP Monitor NTP Offset \u2713 \u2713 \u2713 Process Monitor Tasks Summary \u2713 \u2713 \u2713 High-load Proc[0-9] \u2713 \u2713 \u2713 High-mem Proc[0-9] \u2713 \u2713 \u2713 GPU Monitor GPU Temperature \u2713 \u2713 - GPU Usage \u2713 \u2713 - GPU Memory Usage \u2713 - - GPU Thermal Throttling \u2713 - - GPU Frequency \u2713 \u2713 - For Intel platform, monitor whether current GPU clock is supported by the GPU. Voltage Monitor CMOS Battery Status \u2713 - - Battery Health for RTC and BIOS -"},{"location":"system/system_monitor/#ros-parameters","title":"ROS parameters","text":"

See ROS parameters.

"},{"location":"system/system_monitor/#notes","title":"Notes","text":""},{"location":"system/system_monitor/#cpu-monitor-for-intel-platform","title":"CPU monitor for intel platform","text":"

Thermal throttling events can be monitored by reading the contents of the MSR (Model Specific Register). Accessing the MSR is only allowed for root by default, so this package provides the following approach to minimize security risks as much as possible:

"},{"location":"system/system_monitor/#instructions-before-starting","title":"Instructions before starting","text":"
  1. Create a user to run 'msr_reader'.

    sudo adduser <username>\n
  2. Load the kernel module 'msr' into your target system. The path '/dev/cpu/CPUNUM/msr' will appear.

    sudo modprobe msr\n
  3. Allow the user to access the MSR with read-only privilege using the Access Control List (ACL).

    sudo setfacl -m u:<username>:r /dev/cpu/*/msr\n
  4. Assign the capability to 'msr_reader', since the msr kernel module requires the rawio capability.

    sudo setcap cap_sys_rawio=ep install/system_monitor/lib/system_monitor/msr_reader\n
  5. Run 'msr_reader' as the user you created, and run system_monitor as a generic user.

    su <username>\ninstall/system_monitor/lib/system_monitor/msr_reader\n
"},{"location":"system/system_monitor/#see-also","title":"See also","text":"

msr_reader

"},{"location":"system/system_monitor/#hdd-monitor","title":"HDD Monitor","text":"

Generally, S.M.A.R.T. information is used to monitor the temperature and remaining life of an HDD, and accessing a disk device node is normally allowed only for the root user or the disk group. As with the CPU monitor, this package provides an approach to minimize security risks as much as possible:

"},{"location":"system/system_monitor/#instructions-before-starting_1","title":"Instructions before starting","text":"
  1. Create a user to run 'hdd_reader'.

    sudo adduser <username>\n
  2. Add user to the disk group.

    sudo usermod -a -G disk <username>\n
  3. Assign capabilities to 'hdd_reader', since the SCSI kernel module requires the rawio capability to send the ATA PASS-THROUGH (12) command and the NVMe kernel module requires the admin capability to send Admin Commands.

    sudo setcap 'cap_sys_rawio=ep cap_sys_admin=ep' install/system_monitor/lib/system_monitor/hdd_reader\n
  4. Run 'hdd_reader' as the user you created, and run system_monitor as a generic user.

    su <username>\ninstall/system_monitor/lib/system_monitor/hdd_reader\n
"},{"location":"system/system_monitor/#see-also_1","title":"See also","text":"

hdd_reader

"},{"location":"system/system_monitor/#gpu-monitor-for-intel-platform","title":"GPU Monitor for intel platform","text":"

Currently, the GPU monitor for the Intel platform only supports NVIDIA GPUs whose information can be accessed through the NVML API.

You also need to install the CUDA libraries. For installation instructions for CUDA 10.0, see the NVIDIA CUDA Installation Guide for Linux.

"},{"location":"system/system_monitor/#voltage-monitor-for-cmos-battery","title":"Voltage monitor for CMOS Battery","text":"

Some platforms have built-in batteries for the RTC and CMOS. This node determines the battery status from the result of executing cat /proc/driver/rtc. In addition, if lm-sensors is installed, its results can be used. However, the output of sensors varies depending on the chipset, so it is necessary to set a string for extracting the corresponding voltage, as well as the voltage thresholds for warning and error: for example, a warning when the voltage is less than 2.9 V and an error when it is less than 2.7 V. The output of sensors on the nct6106 chipset is as follows, where \"in7:\" is the voltage of the CMOS battery.

$ sensors\npch_cannonlake-virtual-0\nAdapter: Virtual device\ntemp1:        +42.0\u00b0C\n\nnct6106-isa-0a10\nAdapter: ISA adapter\nin0:           728.00 mV (min =  +0.00 V, max =  +1.74 V)\nin1:             1.01 V  (min =  +0.00 V, max =  +2.04 V)\nin2:             3.34 V  (min =  +0.00 V, max =  +4.08 V)\nin3:             3.34 V  (min =  +0.00 V, max =  +4.08 V)\nin4:             1.07 V  (min =  +0.00 V, max =  +2.04 V)\nin5:             1.05 V  (min =  +0.00 V, max =  +2.04 V)\nin6:             1.67 V  (min =  +0.00 V, max =  +2.04 V)\nin7:             3.06 V  (min =  +0.00 V, max =  +4.08 V)\nin8:             2.10 V  (min =  +0.00 V, max =  +4.08 V)\nfan1:          2789 RPM  (min =    0 RPM)\nfan2:             0 RPM  (min =    0 RPM)\n

The corresponding settings in voltage_monitor.param.yaml are as follows.

/**:\nros__parameters:\ncmos_battery_warn: 2.90\ncmos_battery_error: 2.70\ncmos_battery_label: \"in7:\"\n

The above values of 2.7 V and 2.9 V are hypothetical; the appropriate values may vary depending on the motherboard and chipset. However, if the voltage of the lithium battery drops below 2.7 V, it is recommended to replace it. In the above example, the messages output to the /diagnostics topic are as follows. If the voltage < 2.9 V, then:

  name: /autoware/system/resource_monitoring/voltage/cmos_battery\n  message: Warning\n  hardware_id: ''\n  values:\n  - key: 'voltage_monitor: CMOS Battery Status'\n    value: Low Battery\n

If the voltage < 2.7 V, then:

  name: /autoware/system/resource_monitoring/voltage/cmos_battery\n  message: Warning\n  hardware_id: ''\n  values:\n  - key: 'voltage_monitor: CMOS Battery Status'\n    value: Battery Died\n

If neither, then:

  name: /autoware/system/resource_monitoring/voltage/cmos_battery\n  message: OK\n  hardware_id: ''\n  values:\n  - key: 'voltage_monitor: CMOS Battery Status'\n    value: OK\n

If the CMOS battery voltage drops below voltage_error or voltage_warn, a warning is issued. If the battery runs out, the RTC will stop working when the power is turned off. However, since the vehicle can still run, this is not treated as an error: the vehicle stops when an error occurs, but there is no need to stop immediately. The severity can be determined from the \"Low Battery\" or \"Battery Died\" value.

"},{"location":"system/system_monitor/#uml-diagrams","title":"UML diagrams","text":"

See Class diagrams. See Sequence diagrams.

"},{"location":"system/system_monitor/docs/class_diagrams/","title":"Class diagrams","text":""},{"location":"system/system_monitor/docs/class_diagrams/#cpu-monitor","title":"CPU Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#hdd-monitor","title":"HDD Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#memory-monitor","title":"Memory Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#net-monitor","title":"Net Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#ntp-monitor","title":"NTP Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#process-monitor","title":"Process Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#gpu-monitor","title":"GPU Monitor","text":""},{"location":"system/system_monitor/docs/hdd_reader/","title":"hdd_reader","text":""},{"location":"system/system_monitor/docs/hdd_reader/#name","title":"Name","text":"

hdd_reader - Read S.M.A.R.T. information for monitoring HDD temperature and life of HDD

"},{"location":"system/system_monitor/docs/hdd_reader/#synopsis","title":"Synopsis","text":"

hdd_reader [OPTION]

"},{"location":"system/system_monitor/docs/hdd_reader/#description","title":"Description","text":"

Read S.M.A.R.T. information for monitoring the HDD temperature and HDD life. This runs as a daemon process and listens on a TCP/IP port (7635 by default).

Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to

Exit status: Returns 0 if OK; non-zero otherwise.

"},{"location":"system/system_monitor/docs/hdd_reader/#notes","title":"Notes","text":"

The 'hdd_reader' accesses only the minimal data needed to obtain the model number, serial number, HDD temperature, and HDD life. This is an approach to limit its functionality; however, the functionality can be expanded for further improvements and considerations in the future.

"},{"location":"system/system_monitor/docs/hdd_reader/#ata","title":"[ATA]","text":"Purpose Name Length Model number, Serial number IDENTIFY DEVICE data 256 words(512 bytes) HDD temperature, life of HDD SMART READ DATA 256 words(512 bytes)

For details please see the documents below.

"},{"location":"system/system_monitor/docs/hdd_reader/#nvme","title":"[NVMe]","text":"Purpose Name Length Model number, Serial number Identify Controller data structure 4096 bytes HDD temperature, life of HDD SMART / Health Information 36 Dword(144 bytes)

For details please see the documents below.

"},{"location":"system/system_monitor/docs/hdd_reader/#operation-confirmed-drives","title":"Operation confirmed drives","text":""},{"location":"system/system_monitor/docs/msr_reader/","title":"msr_reader","text":""},{"location":"system/system_monitor/docs/msr_reader/#name","title":"Name","text":"

msr_reader - Read MSR register for monitoring thermal throttling event

"},{"location":"system/system_monitor/docs/msr_reader/#synopsis","title":"Synopsis","text":"

msr_reader [OPTION]

"},{"location":"system/system_monitor/docs/msr_reader/#description","title":"Description","text":"

Read the MSR register for monitoring thermal throttling events. This runs as a daemon process and listens on a TCP/IP port (7634 by default).

Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to

Exit status: Returns 0 if OK; non-zero otherwise.

"},{"location":"system/system_monitor/docs/msr_reader/#notes","title":"Notes","text":"

The 'msr_reader' accesses only the minimal data needed to obtain thermal throttling events. This is an approach to limit its functionality; however, the functionality can be expanded for further improvements and considerations in the future.

Register Address Name Length 1B1H IA32_PACKAGE_THERM_STATUS 64bit

For details please see the documents below.

"},{"location":"system/system_monitor/docs/msr_reader/#operation-confirmed-platform","title":"Operation confirmed platform","text":""},{"location":"system/system_monitor/docs/ros_parameters/","title":"ROS parameters","text":""},{"location":"system/system_monitor/docs/ros_parameters/#cpu-monitor","title":"CPU Monitor","text":"

cpu_monitor:

Name Type Unit Default Notes temp_warn float DegC 90.0 Generates warning when CPU temperature reaches a specified value or higher. temp_error float DegC 95.0 Generates error when CPU temperature reaches a specified value or higher. usage_warn float %(1e-2) 0.90 Generates warning when CPU usage reaches a specified value or higher and last for usage_warn_count counts. usage_error float %(1e-2) 1.00 Generates error when CPU usage reaches a specified value or higher and last for usage_error_count counts. usage_warn_count int n/a 2 Generates warning when CPU usage reaches usage_warn value or higher and last for a specified counts. usage_error_count int n/a 2 Generates error when CPU usage reaches usage_error value or higher and last for a specified counts. load1_warn float %(1e-2) 0.90 Generates warning when load average 1min reaches a specified value or higher. load5_warn float %(1e-2) 0.80 Generates warning when load average 5min reaches a specified value or higher. msr_reader_port int n/a 7634 Port number to connect to msr_reader."},{"location":"system/system_monitor/docs/ros_parameters/#hdd-monitor","title":"HDD Monitor","text":"

hdd_monitor:

\u00a0\u00a0disks:

Name Type Unit Default Notes name string n/a none The disk name to monitor temperature. (e.g. /dev/sda) temp_attribute_id int n/a 0xC2 S.M.A.R.T attribute ID of temperature. temp_warn float DegC 55.0 Generates warning when HDD temperature reaches a specified value or higher. temp_error float DegC 70.0 Generates error when HDD temperature reaches a specified value or higher. power_on_hours_attribute_id int n/a 0x09 S.M.A.R.T attribute ID of power-on hours. power_on_hours_warn int Hour 3000000 Generates warning when HDD power-on hours reaches a specified value or higher. total_data_written_attribute_id int n/a 0xF1 S.M.A.R.T attribute ID of total data written. total_data_written_warn int depends on device 4915200 Generates warning when HDD total data written reaches a specified value or higher. total_data_written_safety_factor int %(1e-2) 0.05 Safety factor of HDD total data written. recovered_error_attribute_id int n/a 0xC3 S.M.A.R.T attribute ID of recovered error. recovered_error_warn int n/a 1 Generates warning when HDD recovered error reaches a specified value or higher. read_data_rate_warn float MB/s 360.0 Generates warning when HDD read data rate reaches a specified value or higher. write_data_rate_warn float MB/s 103.5 Generates warning when HDD write data rate reaches a specified value or higher. read_iops_warn float IOPS 63360.0 Generates warning when HDD read IOPS reaches a specified value or higher. write_iops_warn float IOPS 24120.0 Generates warning when HDD write IOPS reaches a specified value or higher.

hdd_monitor:

Name Type Unit Default Notes hdd_reader_port int n/a 7635 Port number to connect to hdd_reader. usage_warn float %(1e-2) 0.95 Generates warning when disk usage reaches a specified value or higher. usage_error float %(1e-2) 0.99 Generates error when disk usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#memory-monitor","title":"Memory Monitor","text":"

mem_monitor:

Name Type Unit Default Notes usage_warn float %(1e-2) 0.95 Generates warning when physical memory usage reaches a specified value or higher. usage_error float %(1e-2) 0.99 Generates error when physical memory usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#net-monitor","title":"Net Monitor","text":"

net_monitor:

Name Type Unit Default Notes devices list[string] n/a none The name of network interface to monitor. (e.g. eth0, * for all network interfaces) monitor_program string n/a greengrass program name to be monitored by nethogs name. crc_error_check_duration int sec 1 CRC error check duration. crc_error_count_threshold int n/a 1 Generates warning when count of CRC errors during CRC error check duration reaches a specified value or higher. reassembles_failed_check_duration int sec 1 IP packet reassembles failed check duration. reassembles_failed_check_count int n/a 1 Generates warning when count of IP packet reassembles failed during IP packet reassembles failed check duration reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#ntp-monitor","title":"NTP Monitor","text":"

ntp_monitor:

Name Type Unit Default Notes server string n/a ntp.ubuntu.com The name of NTP server to synchronize date and time. (e.g. ntp.nict.jp for Japan) offset_warn float sec 0.1 Generates warning when NTP offset reaches a specified value or higher. (default is 100ms) offset_error float sec 5.0 Generates warning when NTP offset reaches a specified value or higher. (default is 5sec)"},{"location":"system/system_monitor/docs/ros_parameters/#process-monitor","title":"Process Monitor","text":"

process_monitor:

Name Type Unit Default Notes num_of_procs int n/a 5 The number of processes to generate High-load Proc[0-9] and High-mem Proc[0-9]."},{"location":"system/system_monitor/docs/ros_parameters/#gpu-monitor","title":"GPU Monitor","text":"

gpu_monitor:

Name Type Unit Default Notes temp_warn float DegC 90.0 Generates warning when GPU temperature reaches a specified value or higher. temp_error float DegC 95.0 Generates error when GPU temperature reaches a specified value or higher. gpu_usage_warn float %(1e-2) 0.90 Generates warning when GPU usage reaches a specified value or higher. gpu_usage_error float %(1e-2) 1.00 Generates error when GPU usage reaches a specified value or higher. memory_usage_warn float %(1e-2) 0.90 Generates warning when GPU memory usage reaches a specified value or higher. memory_usage_error float %(1e-2) 1.00 Generates error when GPU memory usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#voltage-monitor","title":"Voltage Monitor","text":"

voltage_monitor:

Name Type Unit Default Notes cmos_battery_warn float volt 2.9 Generates warning when voltage of CMOS Battery is lower. cmos_battery_error float volt 2.7 Generates error when voltage of CMOS Battery is lower. cmos_battery_label string n/a \"\" voltage string in sensors command outputs. if empty no voltage will be checked."},{"location":"system/system_monitor/docs/seq_diagrams/","title":"Sequence diagrams","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#cpu-monitor","title":"CPU Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#hdd-monitor","title":"HDD Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#memory-monitor","title":"Memory Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#net-monitor","title":"Net Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#ntp-monitor","title":"NTP Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#process-monitor","title":"Process Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#gpu-monitor","title":"GPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/","title":"ROS topics: CPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-temperature","title":"CPU Temperature","text":"

/diagnostics/cpu_monitor: CPU Temperature

[summary]

level message OK OK

[values]

key (example) value (example) Package id 0, Core [0-9], thermal_zone[0-9] 50.0 DegC

*key: thermal_zone[0-9] for ARM architecture.

"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-usage","title":"CPU Usage","text":"

/diagnostics/cpu_monitor: CPU Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) CPU [all,0-9]: status OK / high load / very high load CPU [all,0-9]: usr 2.00% CPU [all,0-9]: nice 0.00% CPU [all,0-9]: sys 1.00% CPU [all,0-9]: idle 97.00%"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-load-average","title":"CPU Load Average","text":"

/diagnostics/cpu_monitor: CPU Load Average

[summary]

level message OK OK WARN high load

[values]

key value (example) 1min 14.50% 5min 14.55% 15min 9.67%"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-thermal-throttling","title":"CPU Thermal Throttling","text":"

Intel and raspi platform only. Tegra platform not supported.

/diagnostics/cpu_monitor: CPU Thermal Throttling

[summary]

level message OK OK ERROR throttling

[values for intel platform]

key value (example) CPU [0-9]: Pkg Thermal Status OK / throttling

[values for raspi platform]

key value (example) status All clear / Currently throttled / Soft temperature limit active"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-frequency","title":"CPU Frequency","text":"

/diagnostics/cpu_monitor: CPU Frequency

[summary]

level message OK OK

[values]

key value (example) CPU [0-9]: clock 2879MHz"},{"location":"system/system_monitor/docs/topics_gpu_monitor/","title":"ROS topics: GPU Monitor","text":"

Intel and tegra platform only. Raspi platform not supported.

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-temperature","title":"GPU Temperature","text":"

/diagnostics/gpu_monitor: GPU Temperature

[summary]

level message OK OK WARN warm ERROR hot

[values]

key (example) value (example) GeForce GTX 1650, thermal_zone[0-9] 46.0 DegC

*key: thermal_zone[0-9] for ARM architecture.

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-usage","title":"GPU Usage","text":"

/diagnostics/gpu_monitor: GPU Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) GPU [0-9]: status OK / high load / very high load GPU [0-9]: name GeForce GTX 1650, gpu.[0-9] GPU [0-9]: usage 19.0%

*key: gpu.[0-9] for ARM architecture.

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-memory-usage","title":"GPU Memory Usage","text":"

Intel platform only. There is no separate GPU memory on Tegra; both the CPU and GPU use CPU memory.

/diagnostics/gpu_monitor: GPU Memory Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) GPU [0-9]: status OK / high load / very high load GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: usage 13.0% GPU [0-9]: total 3G GPU [0-9]: used 1G GPU [0-9]: free 2G"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-thermal-throttling","title":"GPU Thermal Throttling","text":"

Intel platform only. Tegra platform not supported.

/diagnostics/gpu_monitor: GPU Thermal Throttling

[summary]

level message OK OK ERROR throttling

[values]

key value (example) GPU [0-9]: status OK / throttling GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: graphics clock 1020 MHz GPU [0-9]: reasons GpuIdle / SwThermalSlowdown etc."},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-frequency","title":"GPU Frequency","text":"

/diagnostics/gpu_monitor: GPU Frequency

"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#intel-platform","title":"Intel platform","text":"

[summary]

level message OK OK WARN unsupported clock

[values]

key value (example) GPU [0-9]: status OK / unsupported clock GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: graphics clock 1020 MHz"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#tegra-platform","title":"Tegra platform","text":"

[summary]

level message OK OK

[values]

key (example) value (example) GPU 17000000.gv11b: clock 318 MHz"},{"location":"system/system_monitor/docs/topics_hdd_monitor/","title":"ROS topics: HDD Monitor","text":""},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-temperature","title":"HDD Temperature","text":"

/diagnostics/hdd_monitor: HDD Temperature

[summary]

level message OK OK WARN hot ERROR critical hot

[values]

key value (example) HDD [0-9]: status OK / hot / critical hot HDD [0-9]: name /dev/nvme0 HDD [0-9]: model SAMSUNG MZVLB1T0HBLR-000L7 HDD [0-9]: serial S4EMNF0M820682 HDD [0-9]: temperature 37.0 DegC not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-poweronhours","title":"HDD PowerOnHours","text":"

/diagnostics/hdd_monitor: HDD PowerOnHours

[summary]

level message OK OK WARN lifetime limit

[values]

key value (example) HDD [0-9]: status OK / lifetime limit HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: power on hours 4834 Hours not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-totaldatawritten","title":"HDD TotalDataWritten","text":"

/diagnostics/hdd_monitor: HDD TotalDataWritten

[summary]

level message OK OK WARN warranty period

[values]

key value (example) HDD [0-9]: status OK / warranty period HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: total data written 146295330 not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-recoverederror","title":"HDD RecoveredError","text":"

/diagnostics/hdd_monitor: HDD RecoveredError

[summary]

level message OK OK WARN high soft error rate

[values]

key value (example) HDD [0-9]: status OK / high soft error rate HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: recovered error 0 not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-usage","title":"HDD Usage","text":"

/diagnostics/hdd_monitor: HDD Usage

[summary]

level message OK OK WARN low disk space ERROR very low disk space

[values]

key value (example) HDD [0-9]: status OK / low disk space / very low disk space HDD [0-9]: filesystem /dev/nvme0n1p4 HDD [0-9]: size 264G HDD [0-9]: used 172G HDD [0-9]: avail 749G HDD [0-9]: use 69% HDD [0-9]: mounted on /"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-readdatarate","title":"HDD ReadDataRate","text":"

/diagnostics/hdd_monitor: HDD ReadDataRate

[summary]

level message OK OK WARN high data rate of read

[values]

key value (example) HDD [0-9]: status OK / high data rate of read HDD [0-9]: name /dev/nvme0 HDD [0-9]: data rate of read 0.00 MB/s"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-writedatarate","title":"HDD WriteDataRate","text":"

/diagnostics/hdd_monitor: HDD WriteDataRate

[summary]

level message OK OK WARN high data rate of write

[values]

key value (example) HDD [0-9]: status OK / high data rate of write HDD [0-9]: name /dev/nvme0 HDD [0-9]: data rate of write 0.00 MB/s"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-readiops","title":"HDD ReadIOPS","text":"

/diagnostics/hdd_monitor: HDD ReadIOPS

[summary]

level message OK OK WARN high IOPS of read

[values]

key value (example) HDD [0-9]: status OK / high IOPS of read HDD [0-9]: name /dev/nvme0 HDD [0-9]: IOPS of read 0.00 IOPS"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-writeiops","title":"HDD WriteIOPS","text":"

/diagnostics/hdd_monitor: HDD WriteIOPS

[summary]

level message OK OK WARN high IOPS of write

[values]

key value (example) HDD [0-9]: status OK / high IOPS of write HDD [0-9]: name /dev/nvme0 HDD [0-9]: IOPS of write 0.00 IOPS"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-connection","title":"HDD Connection","text":"

/diagnostics/hdd_monitor: HDD Connection

[summary]

level message OK OK WARN not connected

[values]

key value (example) HDD [0-9]: status OK / not connected HDD [0-9]: name /dev/nvme0 HDD [0-9]: mount point /"},{"location":"system/system_monitor/docs/topics_mem_monitor/","title":"ROS topics: Memory Monitor","text":""},{"location":"system/system_monitor/docs/topics_mem_monitor/#memory-usage","title":"Memory Usage","text":"

/diagnostics/mem_monitor: Memory Usage

[summary]

level message OK OK WARN high load ERROR very high load

[values]

key value (example) Mem: usage 29.72% Mem: total 31.2G Mem: used 6.0G Mem: free 20.7G Mem: shared 2.9G Mem: buff/cache 4.5G Mem: available 21.9G Swap: total 2.0G Swap: used 218M Swap: free 1.8G Total: total 33.2G Total: used 6.2G Total: free 22.5G Total: used+ 9.1G"},{"location":"system/system_monitor/docs/topics_net_monitor/","title":"ROS topics: Net Monitor","text":""},{"location":"system/system_monitor/docs/topics_net_monitor/#network-connection","title":"Network Connection","text":"

/diagnostics/net_monitor: Network Connection

[summary]

level message OK OK WARN no such device

[values]

key value (example) Network [0-9]: status OK / no such device HDD [0-9]: name wlp82s0"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-usage","title":"Network Usage","text":"

/diagnostics/net_monitor: Network Usage

[summary]

level message OK OK

[values]

key value (example) Network [0-9]: status OK Network [0-9]: interface name wlp82s0 Network [0-9]: rx_usage 0.00% Network [0-9]: tx_usage 0.00% Network [0-9]: rx_traffic 0.00 MB/s Network [0-9]: tx_traffic 0.00 MB/s Network [0-9]: capacity 400.0 MB/s Network [0-9]: mtu 1500 Network [0-9]: rx_bytes 58455228 Network [0-9]: rx_errors 0 Network [0-9]: tx_bytes 11069136 Network [0-9]: tx_errors 0 Network [0-9]: collisions 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-traffic","title":"Network Traffic","text":"

/diagnostics/net_monitor: Network Traffic

[summary]

level message OK OK

[values when specified program is detected]

key value (example) nethogs [0-9]: program /lambda/greengrassSystemComponents/1384/999 nethogs [0-9]: sent (KB/Sec) 1.13574 nethogs [0-9]: received (KB/Sec) 0.261914

[values when error is occurring]

key value (example) error execve failed: No such file or directory"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-crc-error","title":"Network CRC Error","text":"

/diagnostics/net_monitor: Network CRC Error

[summary]

level message OK OK WARN CRC error

[values]

key value (example) Network [0-9]: interface name wlp82s0 Network [0-9]: total rx_crc_errors 0 Network [0-9]: rx_crc_errors per unit time 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#ip-packet-reassembles-failed","title":"IP Packet Reassembles Failed","text":"

/diagnostics/net_monitor: IP Packet Reassembles Failed

[summary]

level message OK OK WARN reassembles failed

[values]

key value (example) total packet reassembles failed 0 packet reassembles failed per unit time 0"},{"location":"system/system_monitor/docs/topics_ntp_monitor/","title":"ROS topics: NTP Monitor","text":""},{"location":"system/system_monitor/docs/topics_ntp_monitor/#ntp-offset","title":"NTP Offset","text":"

/diagnostics/ntp_monitor: NTP Offset

[summary]

level message OK OK WARN high ERROR too high

[values]

key value (example) NTP Offset -0.013181 sec NTP Delay 0.053880 sec"},{"location":"system/system_monitor/docs/topics_process_monitor/","title":"ROS topics: Process Monitor","text":""},{"location":"system/system_monitor/docs/topics_process_monitor/#tasks-summary","title":"Tasks Summary","text":"

/diagnostics/process_monitor: Tasks Summary

[summary]

level message OK OK

[values]

key value (example) total 409 running 2 sleeping 321 stopped 0 zombie 0"},{"location":"system/system_monitor/docs/topics_process_monitor/#high-load-proc0-9","title":"High-load Proc[0-9]","text":"

/diagnostics/process_monitor: High-load Proc[0-9]

[summary]

level message OK OK

[values]

key value (example) COMMAND /usr/lib/firefox/firefox %CPU 37.5 %MEM 2.1 PID 14062 USER autoware PR 20 NI 0 VIRT 3461152 RES 669052 SHR 481208 S S TIME+ 23:57.49"},{"location":"system/system_monitor/docs/topics_process_monitor/#high-mem-proc0-9","title":"High-mem Proc[0-9]","text":"

/diagnostics/process_monitor: High-mem Proc[0-9]

[summary]

level message OK OK

[values]

key value (example) COMMAND /snap/multipass/1784/usr/bin/qemu-system-x86_64 %CPU 0 %MEM 2.5 PID 1565 USER root PR 20 NI 0 VIRT 3722320 RES 812432 SHR 20340 S S TIME+ 0:22.84"},{"location":"system/system_monitor/docs/topics_voltage_monitor/","title":"ROS topics: Voltage Monitor","text":"

\"CMOS Battery Status\" and \"CMOS battery voltage\" are exclusive. Only one or the other is generated. Which one is generated depends on the value of cmos_battery_label.

"},{"location":"system/system_monitor/docs/topics_voltage_monitor/#cmos-battery-status","title":"CMOS Battery Status","text":"

/diagnostics/voltage_monitor: CMOS Battery Status

[summary]

level message OK OK WARN Battery Dead

[values]

key (example) value (example) CMOS battery status OK / Battery Dead

"},{"location":"system/system_monitor/docs/topics_voltage_monitor/#cmos-battery-voltage","title":"CMOS Battery Voltage","text":"

/diagnostics/voltage_monitor: CMOS battery voltage

[summary]

level message OK OK WARN Low Battery WARN Battery Died

[values]

key value (example) CMOS battery voltage 3.06"},{"location":"system/system_monitor/docs/traffic_reader/","title":"traffic_reader","text":""},{"location":"system/system_monitor/docs/traffic_reader/#name","title":"Name","text":"

traffic_reader - monitoring network traffic by process

"},{"location":"system/system_monitor/docs/traffic_reader/#synopsis","title":"Synopsis","text":"

traffic_reader [OPTION]

"},{"location":"system/system_monitor/docs/traffic_reader/#description","title":"Description","text":"

Monitors network traffic by process. This runs as a daemon process and listens on a TCP/IP port (7636 by default).

Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to

Exit status: Returns 0 if OK; non-zero otherwise.

"},{"location":"system/system_monitor/docs/traffic_reader/#notes","title":"Notes","text":"

The 'traffic_reader' requires the nethogs command.

"},{"location":"system/system_monitor/docs/traffic_reader/#operation-confirmed-platform","title":"Operation confirmed platform","text":""},{"location":"system/topic_state_monitor/Readme/","title":"topic_state_monitor","text":""},{"location":"system/topic_state_monitor/Readme/#purpose","title":"Purpose","text":"

This node monitors an input topic for abnormalities such as timeouts and low frequency. The resulting topic status is published as diagnostics.
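
A hypothetical configuration sketch combining the node and core parameters listed below (the topic name, topic type, and diagnostics name are placeholders; a standard ROS 2 parameter file layout is assumed):

/**:\n  ros__parameters:\n    topic: \"/example/topic\"              # placeholder target topic\n    topic_type: \"nav_msgs/msg/Odometry\"  # placeholder topic type\n    diag_name: \"example_topic_status\"    # placeholder diagnostics name\n    update_rate: 10.0\n    warn_rate: 0.5\n    error_rate: 0.1\n    timeout: 1.0\n    window_size: 10\n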

"},{"location":"system/topic_state_monitor/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The types of topic status and the corresponding diagnostic status are as follows.

Topic status Diagnostic status Description OK OK The topic has no abnormalities NotReceived ERROR The topic has not been received yet WarnRate WARN The frequency of the topic is dropped ErrorRate ERROR The frequency of the topic is significantly dropped Timeout ERROR The topic subscription is stopped for a certain time"},{"location":"system/topic_state_monitor/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/topic_state_monitor/Readme/#input","title":"Input","text":"Name Type Description any name any type Subscribe target topic to monitor"},{"location":"system/topic_state_monitor/Readme/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs/DiagnosticArray Diagnostics outputs"},{"location":"system/topic_state_monitor/Readme/#parameters","title":"Parameters","text":""},{"location":"system/topic_state_monitor/Readme/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description topic string - Name of target topic topic_type string - Type of target topic (used if the topic is not transform) frame_id string - Frame ID of transform parent (used if the topic is transform) child_frame_id string - Frame ID of transform child (used if the topic is transform) transient_local bool false QoS policy of topic subscription (Transient Local/Volatile) best_effort bool false QoS policy of topic subscription (Best Effort/Reliable) diag_name string - Name used for the diagnostics to publish update_rate double 10.0 Timer callback period [Hz]"},{"location":"system/topic_state_monitor/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description warn_rate double 0.5 If the topic rate is lower than this value, the topic status becomes WarnRate error_rate double 0.1 If the topic rate is lower than this value, the topic status becomes ErrorRate timeout double 1.0 If the topic subscription is stopped for more than this time [s], the topic status becomes Timeout window_size int 10 Window size of target topic for calculating frequency"},{"location":"system/topic_state_monitor/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"},{"location":"system/velodyne_monitor/Readme/","title":"velodyne_monitor","text":""},{"location":"system/velodyne_monitor/Readme/#purpose","title":"Purpose","text":"

This node monitors the status of Velodyne LiDARs. The resulting status is published as diagnostics. Take care not to use these diagnostics to decide whether the LiDAR has an error; please read Assumptions / Known limits for the detailed reason.

"},{"location":"system/velodyne_monitor/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

The status of Velodyne LiDAR can be retrieved from http://[ip_address]/cgi/{info, settings, status, diag}.json.

The types of abnormal status and the corresponding diagnostic status are as follows.

Abnormal status Diagnostic status No abnormality OK Top board temperature is too cold ERROR Top board temperature is cold WARN Top board temperature is too hot ERROR Top board temperature is hot WARN Bottom board temperature is too cold ERROR Bottom board temperature is cold WARN Bottom board temperature is too hot ERROR Bottom board temperature is hot WARN Rpm(Rotations per minute) of the motor is too low ERROR Rpm(Rotations per minute) of the motor is low WARN Connection error (cannot get Velodyne LiDAR status) ERROR"},{"location":"system/velodyne_monitor/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/velodyne_monitor/Readme/#input","title":"Input","text":"

None

"},{"location":"system/velodyne_monitor/Readme/#output","title":"Output","text":"Name Type Description /diagnostics diagnostic_msgs/DiagnosticArray Diagnostics outputs"},{"location":"system/velodyne_monitor/Readme/#parameters","title":"Parameters","text":""},{"location":"system/velodyne_monitor/Readme/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description timeout double 0.5 Timeout for HTTP request to get Velodyne LiDAR status [s]"},{"location":"system/velodyne_monitor/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description ip_address string \"192.168.1.201\" IP address of target Velodyne LiDAR temp_cold_warn double -5.0 If the temperature of Velodyne LiDAR is lower than this value, the diagnostics status becomes WARN [\u00b0C] temp_cold_error double -10.0 If the temperature of Velodyne LiDAR is lower than this value, the diagnostics status becomes ERROR [\u00b0C] temp_hot_warn double 75.0 If the temperature of Velodyne LiDAR is higher than this value, the diagnostics status becomes WARN [\u00b0C] temp_hot_error double 80.0 If the temperature of Velodyne LiDAR is higher than this value, the diagnostics status becomes ERROR [\u00b0C] rpm_ratio_warn double 0.80 If the rpm rate of the motor (= current rpm / default rpm) is lower than this value, the diagnostics status becomes WARN rpm_ratio_error double 0.70 If the rpm rate of the motor (= current rpm / default rpm) is lower than this value, the diagnostics status becomes ERROR"},{"location":"system/velodyne_monitor/Readme/#config-files","title":"Config files","text":"

Config files for several Velodyne models are provided. The temp_*** parameters are set with reference to the operational temperature given in each datasheet. In addition, the temp_hot_*** values of each model are set roughly 20 °C above the upper limit of the operational temperature. VLP-16.param.yaml is currently used as the default argument because the VLP-16 has the lowest spec.
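
Based on the core parameters above, a model config such as VLP-16.param.yaml plausibly resembles the following sketch (values are the documented defaults; the shipped file may differ):

/**:\n  ros__parameters:\n    ip_address: \"192.168.1.201\"\n    temp_cold_warn: -5.0\n    temp_cold_error: -10.0\n    temp_hot_warn: 75.0\n    temp_hot_error: 80.0\n    rpm_ratio_warn: 0.80\n    rpm_ratio_error: 0.70\n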

Model Name Config name Operational Temperature [\u2103] VLP-16 VLP-16.param.yaml -10 to 60 VLP-32C VLP-32C.param.yaml -20 to 60 VLS-128 VLS-128.param.yaml -20 to 60 Velarray M1600 Velarray_M1600.param.yaml -40 to 85 HDL-32E HDL-32E.param.yaml -10 to 60"},{"location":"system/velodyne_monitor/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

This node uses the http_client and requests results via the GET method. It can take a few seconds to get the results, or a timeout exception is generated if the GET request does not succeed. This occurs frequently, and the diagnostics aggregator then outputs STALE. It is therefore recommended not to use these results to decide whether the LiDAR has an error, but only to monitor them to confirm the LiDAR status.

"},{"location":"tools/simulator_test/simulator_compatibility_test/","title":"simulator_compatibility_test","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#purpose","title":"Purpose","text":"

Test procedures (e.g. test codes) to check whether a certain simulator is compatible with Autoware

"},{"location":"tools/simulator_test/simulator_compatibility_test/#overview-of-the-test-codes","title":"Overview of the test codes","text":"

File structure

  1. test_base provides shared methods for testing. The other test codes are created based on the functions defined here.
  2. test_sim_common_manual_testing provides the most basic functions. Any simulator can be tested using the code here. However, to keep the code usable with any simulator, it does not include any features for test automation.
  3. test_morai_sim is an automated version of test_sim_common_manual_testing for MORAI SIM: Drive, and thus includes 'MORAI SIM: Drive'-specific code. Users of other simulators may create a similar version for their simulator of interest.
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-procedures-for-test_sim_common_manual_testing","title":"Test Procedures for test_sim_common_manual_testing","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#build-process-before-test","title":"Build process before test","text":"
source install/setup.bash\ncolcon build --packages-select simulator_compatibility_test\ncd src/universe/autoware.universe/tools/simulator_test/simulator_compatibility_test/test_sim_common_manual_testing\n

To run each test case manually

"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-1","title":"Test Case #1","text":"
  1. Run your simulator
  2. Load a map and an ego vehicle for the test
  3. Run the test using the following command

    python -m pytest test_01_control_mode_and_report.py\n
  4. Check whether the expected behavior occurs in the simulator

    • Ego vehicle control mode is changed into Manual (If the simulator has a GUI for this one, it should display the ego is in Manual)
    • Ego vehicle control mode is changed into Auto (If the simulator has a GUI for this one, it should display the ego is in Auto)
  5. Check if pytest output is passed or failure
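The test cases in this section all follow the same publish-command / check-report pattern: publish a command topic, subscribe to the corresponding report topic, and assert the reported state. A minimal sketch of that pattern with rclpy, shown here for the control mode case (illustrative only; the topic names are taken from the Inputs / Outputs section below, and the message constant names are assumptions, so the actual test codes may differ):

# Minimal sketch of the publish/verify pattern shared by these test cases.
# Assumptions: rclpy and autoware_auto_vehicle_msgs are available in the
# sourced environment; constant names (AUTONOMOUS) are assumed.
import rclpy
from rclpy.node import Node
from autoware_auto_vehicle_msgs.msg import ControlModeCommand, ControlModeReport

class ControlModeChecker(Node):
    def __init__(self):
        super().__init__("control_mode_checker")
        self.pub = self.create_publisher(
            ControlModeCommand, "/control/command/control_mode_cmd", 1)
        self.report = None
        self.create_subscription(
            ControlModeReport, "/vehicle/status/control_mode",
            lambda msg: setattr(self, "report", msg), 1)

def main():
    rclpy.init()
    node = ControlModeChecker()
    cmd = ControlModeCommand()
    cmd.mode = ControlModeCommand.AUTONOMOUS  # assumed constant name
    node.pub.publish(cmd)
    rclpy.spin_once(node, timeout_sec=1.0)  # wait for one report
    assert node.report is not None and node.report.mode == ControlModeReport.AUTONOMOUS
    rclpy.shutdown()

if __name__ == "__main__":
    main()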
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-2","title":"Test Case #2","text":"
  1. Run your simulator (If the simulator is already running, skip this part)
  2. Load a map and an ego vehicle for the test (If a map and an ego are loaded already, skip this part)
  3. Run the test using the following command

    python -m pytest test_02_change_gear_and_report.py\n
  4. Check whether the expected behavior occurs in the simulator

    • Ego vehicle gear mode is changed to \"P\" (if the simulator has a GUI for this, it should display that the gear mode is in \"P\")
    • Ego vehicle gear mode is changed to \"N\" (if the simulator has a GUI for this, it should display that the gear mode is in \"N\")
    • Ego vehicle gear mode is changed to \"R\" (if the simulator has a GUI for this, it should display that the gear mode is in \"R\")
    • Ego vehicle gear mode is changed to \"D\" (if the simulator has a GUI for this, it should display that the gear mode is in \"D\")
  5. Check whether the pytest result is a pass or a failure
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-3","title":"Test Case #3","text":"
  1. Run your simulator (If the simulator is already running, skip this part)
  2. Load a map and an ego vehicle for the test (If a map and an ego are loaded already, skip this part)
  3. Run the test using the following command

    python -m pytest test_03_longitudinal_command_and_report.py\n
  4. Check whether the expected behavior occurs in the simulator

    • Ego vehicle longitudinal velocity becomes greater than 10 kph (if the simulator has a GUI for this, it should display that the longitudinal velocity is greater than 10 kph)
    • Ego vehicle longitudinal velocity drops below 10 kph. This is an ego vehicle initialization step to ensure that the following acceleration is produced by the longitudinal.acceleration value (if the simulator has a GUI for this, it should display that the longitudinal velocity is less than 10 kph)
    • Ego vehicle longitudinal velocity becomes greater than 10 kph again (if the simulator has a GUI for this, it should display that the longitudinal velocity is greater than 10 kph)
    • Ego vehicle longitudinal velocity drops below 10 kph. This is an ego vehicle reset step to tear down this test case.
  5. Check whether the pytest result is a pass or a failure
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-4","title":"Test Case #4","text":"
  1. Run your simulator (If the simulator is already running, skip this part)
  2. Load a map and an ego vehicle for the test (If a map and an ego are loaded already, skip this part)
  3. Run the test using the following command

    python -m pytest test_04_lateral_command_and_report.py\n
  4. Check whether the expected behavior occurs in the simulator

    • Ego vehicle steering and/or tire angle becomes greater than 0 degrees (if the simulator has a GUI for this, it should display that the steering and/or tire angle is greater than 0 degrees)
    • Ego vehicle steering and/or tire angle returns to 0 degrees. This is a reset step. (if the simulator has a GUI for this, it should display that the steering and/or tire angle is 0 degrees)
    • Ego vehicle steering and/or tire angle becomes less than 0 degrees (if the simulator has a GUI for this, it should display that the steering and/or tire angle is less than 0 degrees)
    • Ego vehicle steering and/or tire angle returns to 0 degrees. This is a reset step. (if the simulator has a GUI for this, it should display that the steering and/or tire angle is 0 degrees)
  5. Check whether the pytest result is a pass or a failure
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-5","title":"Test Case #5","text":"
  1. Run your simulator (If the simulator is already running, skip this part)
  2. Load a map and an ego vehicle for the test (If a map and an ego are loaded already, skip this part)
  3. Run the test using the following command

    python -m pytest test_05_turn_indicators_cmd_and_report.py\n
  4. Check whether the expected behavior occurs in the simulator

    • Ego vehicle left turn indicator is turned on (if the simulator has a GUI for this, it should display that the left turn indicator is on)
    • Ego vehicle right turn indicator is turned on (if the simulator has a GUI for this, it should display that the right turn indicator is on)
    • Both turn indicators are turned off. This is a reset step. (if the simulator has a GUI for this, it should display that both left and right turn indicators are off)
  5. Check whether the pytest result is a pass or a failure
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-6","title":"Test Case #6","text":"
  1. Run your simulator (If the simulator is already running, skip this part)
  2. Load a map and an ego vehicle for the test (If a map and an ego are loaded already, skip this part)
  3. Run the test using the following command

    python -m pytest test_06_hazard_lights_cmd_and_report.py\n
  4. Check whether the expected behavior occurs in the simulator

    • Ego vehicle hazard lights are turned on (if the simulator has a GUI for this, it should display that the hazard lights are on or blinking)
    • Ego vehicle hazard lights are turned off. This is a reset step. (if the simulator has a GUI for this, it should display that the hazard lights are off)
  5. Check whether the pytest result is a pass or a failure
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-procedures-for-test_morai_sim","title":"Test Procedures for test_morai_sim","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#build-process-before-test_1","title":"Build process before test","text":"
source install/setup.bash\ncolcon build --packages-select simulator_compatibility_test\ncd src/universe/autoware.universe/tools/simulator_test/simulator_compatibility_test/test_morai_sim\n

Detailed process

(WIP)

"},{"location":"tools/simulator_test/simulator_compatibility_test/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#input","title":"Input","text":"Name Type Description /vehicle/status/control_mode autoware_auto_vehicle_msgs::msg::ControlModeReport for [Test Case #1] /vehicle/status/gear_status autoware_auto_vehicle_msgs::msg::GearReport for [Test Case #2] /vehicle/status/velocity_status autoware_auto_vehicle_msgs::msg::VelocityReport for [Test Case #3] /vehicle/status/steering_status autoware_auto_vehicle_msgs::msg::SteeringReport for [Test Case #4] /vehicle/status/turn_indicators_status autoware_auto_vehicle_msgs::msg::TurnIndicatorsReport for [Test Case #5] /vehicle/status/hazard_lights_status autoware_auto_vehicle_msgs::msg::HazardLightsReport for [Test Case #6]"},{"location":"tools/simulator_test/simulator_compatibility_test/#output","title":"Output","text":"Name Type Description /control/command/control_mode_cmd autoware_auto_vehicle_msgs/ControlModeCommand for [Test Case #1] /control/command/gear_cmd autoware_auto_vehicle_msgs/GearCommand for [Test Case #2] /control/command/control_cmd autoware_auto_vehicle_msgs/AckermannControlCommand for [Test Case #3, #4] /vehicle/status/steering_status autoware_auto_vehicle_msgs/TurnIndicatorsCommand for [Test Case #5] /control/command/turn_indicators_cmd autoware_auto_vehicle_msgs/HazardLightsCommand for [Test Case #6]"},{"location":"tools/simulator_test/simulator_compatibility_test/#parameters","title":"Parameters","text":"

None.

"},{"location":"tools/simulator_test/simulator_compatibility_test/#node-parameters","title":"Node Parameters","text":"

None.

"},{"location":"tools/simulator_test/simulator_compatibility_test/#core-parameters","title":"Core Parameters","text":"

None.

"},{"location":"tools/simulator_test/simulator_compatibility_test/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

None.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/","title":"accel_brake_map_calibrator","text":"

The role of this node is to automatically calibrate accel_map.csv / brake_map.csv used in the raw_vehicle_cmd_converter node.

The base map, which is the Lexus map by default, is updated iteratively with the collected driving data.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#how-to-calibrate","title":"How to calibrate","text":""},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#launch-calibrator","title":"Launch Calibrator","text":"

After launching Autoware, run the accel_brake_map_calibrator with the following command and then perform autonomous driving. Note: you can collect data with manual driving if it is possible to use the same vehicle interface as during autonomous driving (e.g., using a joystick).

ros2 launch accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true\n

Or if you want to use rosbag files, run the following commands.

ros2 launch accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true use_sim_time:=true\nros2 bag play <rosbag_file> --clock\n

If the parameter progress_file_output is set to true during the calibration, a log file is output to [directory of accel_brake_map_calibrator]/config/. You can also find the accel and brake maps in [directory of accel_brake_map_calibrator]/config/accel_map.csv and [directory of accel_brake_map_calibrator]/config/brake_map.csv after calibration.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#calibration-plugin","title":"Calibration plugin","text":"

The rviz:=true option displays RViz with the calibration plugin shown below.

The current status (velocity and pedal) is shown in the plugin. The current cell is colored green or red depending on whether the current data is valid or invalid. Data that does not satisfy the following conditions is considered invalid and will not be used for estimation, since aggressive data (e.g., when the pedal is moving quickly) degrades the calibration accuracy.

The detailed parameters are described in the parameter section.
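As an illustration of these conditions, the validity check can be thought of as a predicate over the thresholds listed in the Algorithm Parameters section (a hypothetical sketch; the actual checks live inside the calibrator node):

# Hypothetical sketch of the data-validity conditions, using the threshold
# names and default values from the Algorithm Parameters section below.
def is_valid_sample(velocity, steer, pitch, jerk, pedal_speed,
                    velocity_min_threshold=0.1,
                    max_steer_threshold=0.2,
                    max_pitch_threshold=0.02,
                    max_jerk_threshold=0.7,
                    pedal_velocity_thresh=0.15) -> bool:
    """Return True if the sample may be used for the map update."""
    return (velocity >= velocity_min_threshold
            and abs(steer) <= max_steer_threshold
            and abs(pitch) <= max_pitch_threshold
            and abs(jerk) <= max_jerk_threshold
            and abs(pedal_speed) <= pedal_velocity_thresh)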

Note: You don't need to worry about whether the current state is red or green during calibration. Just keep getting data until all the cells turn red.

The value of each cell in the map is gray at first and changes from blue to red as valid data accumulates in the cell. It is preferable to continue the calibration until each cell of the map becomes close to red. In particular, the performance near a stop depends strongly on the 0–6 m/s velocity range and the +0.2 to -0.4 pedal value range, so it is desirable to focus on those areas.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#diagnostics","title":"Diagnostics","text":"

The accel_brake_map_calibrator publishes a diagnostics message according to the calibration status. The diagnostic type WARN indicates that the current accel/brake map is estimated to be inaccurate. In this situation, it is strongly recommended to perform a re-calibration of the accel/brake map.

| Status | Diagnostics Type | Diagnostics message | Description | | No calibration required | OK | \"OK\" | | | Calibration Required | WARN | \"Accel/brake map Calibration is required.\" | The accuracy of the current accel/brake map may be low. |

This diagnostics status can also be checked on the following ROS topic.

ros2 topic echo /accel_brake_map_calibrator/output/update_suggest\n

When the diagnostics type is WARN, True is published on this topic, suggesting an update of the accel/brake map.
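For example, the suggestion flag can also be consumed programmatically (a sketch; the message type is assumed here to be std_msgs/Bool, matching the True/False payload described above):

# Sketch: watch the update-suggest flag described above.
# Assumption: the topic carries std_msgs/Bool (True = re-calibration suggested).
import rclpy
from rclpy.node import Node
from std_msgs.msg import Bool

class UpdateSuggestWatcher(Node):
    def __init__(self):
        super().__init__("update_suggest_watcher")
        self.create_subscription(
            Bool, "/accel_brake_map_calibrator/output/update_suggest",
            self.on_msg, 1)

    def on_msg(self, msg):
        if msg.data:
            self.get_logger().warn("Accel/brake map re-calibration is suggested.")

def main():
    rclpy.init()
    rclpy.spin(UpdateSuggestWatcher())

if __name__ == "__main__":
    main()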

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#evaluation-of-the-accel-brake-map-accuracy","title":"Evaluation of the accel / brake map accuracy","text":"

The accuracy of the map is evaluated by the root mean squared error (RMSE) between the observed acceleration and the predicted acceleration.
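Concretely, with observed accelerations a_i and map-predicted accelerations \hat{a}_i over N samples, the standard definition is:

\begin{align} \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(a_i - \hat{a}_i\right)^2} \end{align}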

TERMS:

You can check additional error information with the following topics.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#how-to-visualize-calibration-data","title":"How to visualize calibration data","text":"

The process of calibration can be visualized as below. Since these scripts need the log output of the calibration, the pedal_accel_graph_output parameter must be set to true while the calibration is running for the visualization.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#visualize-plot-of-relation-between-acceleration-and-pedal","title":"Visualize plot of relation between acceleration and pedal","text":"

The following command shows a plot of the data used in the calibration. In each velocity-range plot, you can see the distribution of the relationship between pedal and acceleration, with raw data points colored according to their pitch angles.

ros2 run accel_brake_map_calibrator view_plot.py\n

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#visualize-statistics-about-accelerationvelocitypedal-data","title":"Visualize statistics about acceleration/velocity/pedal data","text":"

The following command shows the statistics of the calibration for all data in each map cell.

ros2 run accel_brake_map_calibrator view_statistics.py\n

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#how-to-save-the-calibrated-accel-brake-map-anytime-you-want","title":"How to save the calibrated accel / brake map anytime you want","text":"

You can save the accel and brake maps at any time with the following command.

ros2 service call /accel_brake_map_calibrator/update_map_dir tier4_vehicle_msgs/srv/UpdateAccelBrakeMap \"path: '<accel/brake map directory>'\"\n

You can also save the accel and brake maps in the default directory, where Autoware reads accel_map.csv/brake_map.csv, using the RViz plugin (AccelBrakeMapCalibratorButtonPanel) as follows.

  1. Click the Panels tab, and select AccelBrakeMapCalibratorButtonPanel.

  2. Select the panel, and the button will appear at the bottom of RViz.

  3. Press the button, and the accel / brake map will be saved. (The button cannot be pressed in certain situations, such as when the calibrator node is not running.)

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#parameters","title":"Parameters","text":""},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#system-parameters","title":"System Parameters","text":"Name Type Description Default value update_method string you can select map calibration method. \"update_offset_each_cell\" calculates offsets for each grid cells on the map. \"update_offset_total\" calculates the total offset of the map. \"update_offset_each_cell\" get_pitch_method string \"tf\": get pitch from tf, \"none\": unable to perform pitch validation and pitch compensation \"tf\" pedal_accel_graph_output bool if true, it will output a log of the pedal accel graph. true progress_file_output bool if true, it will output a log and csv file of the update process. false default_map_dir str directory of default map [directory of raw_vehicle_cmd_converter]/data/default/ calibrated_map_dir str directory of calibrated map [directory of accel_brake_map_calibrator]/config/ update_hz double hz for update 10.0"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#algorithm-parameters","title":"Algorithm Parameters","text":"Name Type Description Default value initial_covariance double Covariance of initial acceleration map (larger covariance makes the update speed faster) 0.05 velocity_min_threshold double Speeds smaller than this are not used for updating. 0.1 velocity_diff_threshold double When the velocity data is more than this threshold away from the grid reference speed (center value), the associated data is not used for updating. 0.556 max_steer_threshold double If the steer angle is greater than this value, the associated data is not used for updating. 0.2 max_pitch_threshold double If the pitch angle is greater than this value, the associated data is not used for updating. 0.02 max_jerk_threshold double If the ego jerk calculated from ego acceleration is greater than this value, the associated data is not used for updating. 0.7 pedal_velocity_thresh double If the pedal moving speed is greater than this value, the associated data is not used for updating. 0.15 pedal_diff_threshold double If the current pedal value is more then this threshold away from the previous value, the associated data is not used for updating. 0.03 max_accel double Maximum value of acceleration calculated from velocity source. 5.0 min_accel double Minimum value of acceleration calculated from velocity source. -5.0 pedal_to_accel_delay double The delay time between actuation_cmd to acceleration, considered in the update logic. 0.3 update_suggest_thresh double threshold of RMSE ratio that update suggest flag becomes true. ( RMSE ratio: [RMSE of new map] / [RMSE of original map] ) 0.7 max_data_count int For visualization. When the data num of each grid gets this value, the grid color gets red. 100"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#test-utility-scripts","title":"Test utility scripts","text":""},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#constant-accelbrake-command-test","title":"Constant accel/brake command test","text":"

These scripts are useful for testing the accel/brake map calibration. They generate an ActuationCmd with a constant accel/brake value given interactively by the user through a CLI.

The accel_tester.py/brake_tester.py scripts receive a target accel/brake command from the CLI and send the target value to actuation_cmd_publisher.py, which generates the ActuationCmd. You can run these scripts with the following commands in separate terminals, and the result will look like the screenshot below.

ros2 run accel_brake_map_calibrator accel_tester.py\nros2 run accel_brake_map_calibrator brake_tester.py\nros2 run accel_brake_map_calibrator actuation_cmd_publisher.py\n

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#calibration-method","title":"Calibration Method","text":"

Two algorithms are selectable for the acceleration map update: update_offset_four_cell_around and update_offset_each_cell. Please see the sections below for details.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#data-preprocessing","title":"Data Preprocessing","text":"

Before calibration, missing or unusable data (e.g., too-large steering angles) must first be eliminated. The following parameters are used to determine which data to remove.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#parameters_1","title":"Parameters","text":"Name Description Default Value velocity_min_threshold Exclude minimal velocity 0.1 max_steer_threshold Exclude large steering angle 0.2 max_pitch_threshold Exclude large pitch angle 0.02 max_jerk_threshold Exclude large jerk 0.7 pedal_velocity_thresh Exclude large pedaling speed 0.15"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#update_offset_each_cell","title":"update_offset_each_cell","text":"

The offsets are updated by the recursive least squares (RLS) method using only data close enough to each grid point.

Advantage : Only data close enough to each grid is used for calibration, allowing accurate updates at each point.

Disadvantage : Calibration is time-consuming because a large amount of data is excluded.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#parameters_2","title":"Parameters","text":"

Data selection is determined by the following thresholds. | Name | Default Value | | -------- | -------- | |velocity_diff_threshold|0.556| |pedal_diff_threshold|0.03|

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#update-formula","title":"Update formula","text":"\\begin{align} \\theta[n]=& \\theta[n-1]+\\frac{p[n-1]x^{(n)}}{\\lambda+p[n-1]{(x^{(n)})}^2}(y^{(n)}-\\theta[n-1]x^{(n)})\\\\ p[n]=&\\frac{p[n-1]}{\\lambda+p[n-1]{(x^{(n)})}^2} \\end{align}"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#variables","title":"Variables","text":"Variable name Symbol covariance p[n-1] map_offset \\theta[n] forgettingfactor \\lambda phi x(=1) measured_acc y"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#update_offset_four_cell_around-1","title":"update_offset_four_cell_around [1]","text":"

Update the offsets by RLS in four grids around newly obtained data. By considering linear interpolation, the update takes into account appropriate weights. Therefore, there is no need to remove data by thresholding.

Advantage : No data is wasted, because updates are performed on the four grids around each data point with appropriate weighting. Disadvantage : Accuracy may be degraded by extreme bias in the data. For example, if data z(k) is biased near Z_{RR} in Fig. 2, updating is performed at the four surrounding points ( Z_{RR}, Z_{RL}, Z_{LR}, and Z_{LL}), but accuracy at Z_{LL} cannot be expected.
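For intuition, the scalar RLS step given in the update formula above can be sketched as follows (illustrative only; x is fixed to 1 per the variables table, the forgetting-factor value is made up, and the real node keeps one such state per grid cell, or per surrounding cell with interpolation weights):

# Illustrative scalar RLS step matching the update formula above (x = 1).
def rls_update(theta, p, y, forgetting_factor=0.999, x=1.0):
    """One RLS step: returns the updated (map_offset, covariance)."""
    denom = forgetting_factor + p * x * x
    theta_new = theta + (p * x / denom) * (y - theta * x)
    p_new = p / denom
    return theta_new, p_new

# Example: repeated measurements pull the offset estimate toward y.
theta, p = 0.0, 0.05  # 0.05 is the initial_covariance default
for y in [0.30, 0.28, 0.31]:
    theta, p = rls_update(theta, p, y)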

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#implementation","title":"Implementation","text":"

See eq. (7)-(10) in [1] for the update formula. In addition, eq. (17) and (18) from [1] are used for anti-windup.

"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#references","title":"References","text":"

[1] Gabrielle Lochrie, Michael Doljevic, Mario Nona, Yongsoon Yoon, Anti-Windup Recursive Least Squares Method for Adaptive Lookup Tables with Application to Automotive Powertrain Control Systems, IFAC-PapersOnLine, Volume 54, Issue 20, 2021, Pages 840-845

"},{"location":"vehicle/external_cmd_converter/","title":"external_cmd_converter","text":"

external_cmd_converter is a node that converts a desired mechanical input (throttle/brake) into acceleration and velocity commands by using the accel/brake map.
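Conceptually, the conversion is a two-dimensional lookup: the throttle (or brake) value and the current velocity index into the accel (or brake) map, and the acceleration is obtained by interpolation. A simplified sketch (the axes and values below are hypothetical, not the contents of the real accel_map.csv):

# Simplified sketch of the accel-map lookup described above (hypothetical values).
import numpy as np

throttle_axis = np.array([0.0, 0.2, 0.4])   # pedal positions
velocity_axis = np.array([0.0, 5.0, 10.0])  # [m/s]
accel_map = np.array([                      # accel [m/s^2], rows = pedal, cols = velocity
    [0.0, -0.1, -0.2],
    [1.0,  0.8,  0.6],
    [2.0,  1.7,  1.4],
])

def throttle_to_accel(throttle, velocity):
    """Bilinear interpolation: first along the pedal axis, then along velocity."""
    accel_per_velocity = [np.interp(throttle, throttle_axis, accel_map[:, j])
                          for j in range(len(velocity_axis))]
    return float(np.interp(velocity, velocity_axis, accel_per_velocity))

print(throttle_to_accel(0.3, 2.5))  # interpolated acceleration command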

"},{"location":"vehicle/external_cmd_converter/#input-topics","title":"Input topics","text":"Name Type Description ~/in/external_control_cmd tier4_external_api_msgs::msg::ControlCommand target throttle/brake/steering_angle/steering_angle_velocity is necessary to calculate desired control command. ~/input/shift_cmd\" autoware_auto_vehicle_msgs::GearCommand current command of gear. ~/input/emergency_stop tier4_external_api_msgs::msg::Heartbeat emergency heart beat for external command. ~/input/current_gate_mode tier4_control_msgs::msg::GateMode topic for gate mode. ~/input/odometry navigation_msgs::Odometry twist topic in odometry is used."},{"location":"vehicle/external_cmd_converter/#output-topics","title":"Output topics","text":"Name Type Description ~/out/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand ackermann control command converted from selected external command"},{"location":"vehicle/external_cmd_converter/#parameters","title":"Parameters","text":"Parameter Type Description timer_rate double timer's update rate wait_for_first_topic double if time out check is done after receiving first topic control_command_timeout double time out check for control command emergency_stop_timeout double time out check for emergency stop command"},{"location":"vehicle/external_cmd_converter/#limitation","title":"Limitation","text":"

TBD.

"},{"location":"vehicle/raw_vehicle_cmd_converter/","title":"raw_vehicle_cmd_converter","text":"

raw_vehicle_cmd_converter is a node that converts a desired acceleration and velocity into mechanical input by using feedforward + feedback control (the feedback part is optional).
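The feedforward part is essentially the inverse of the lookup sketched for external_cmd_converter: given the current velocity and a target acceleration, find the pedal value whose mapped acceleration matches. A simplified sketch (hypothetical map values; the optional feedback term, currently applied to steering only, is omitted):

# Simplified sketch of the feedforward inverse-map lookup (hypothetical values).
import numpy as np

pedal_axis = np.array([0.0, 0.2, 0.4])
velocity_axis = np.array([0.0, 5.0, 10.0])
accel_map = np.array([
    [0.0, -0.1, -0.2],
    [1.0,  0.8,  0.6],
    [2.0,  1.7,  1.4],
])

def accel_to_pedal(target_accel, velocity):
    """Invert the map: accel at this velocity for each pedal row, then
    interpolate the pedal over that (monotonically increasing) column."""
    accel_at_v = [np.interp(velocity, velocity_axis, row) for row in accel_map]
    return float(np.interp(target_accel, accel_at_v, pedal_axis))

print(accel_to_pedal(1.0, 2.5))  # feedforward pedal command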

"},{"location":"vehicle/raw_vehicle_cmd_converter/#input-topics","title":"Input topics","text":"Name Type Description ~/input/control_cmd autoware_auto_control_msgs::msg::AckermannControlCommand target velocity/acceleration/steering_angle/steering_angle_velocity is necessary to calculate actuation command. ~/input/steering\" autoware_auto_vehicle_msgs::SteeringReport current status of steering used for steering feed back control ~/input/twist navigation_msgs::Odometry twist topic in odometry is used."},{"location":"vehicle/raw_vehicle_cmd_converter/#output-topics","title":"Output topics","text":"Name Type Description ~/output/actuation_cmd tier4_vehicle_msgs::msg::ActuationCommandStamped actuation command for vehicle to apply mechanical input"},{"location":"vehicle/raw_vehicle_cmd_converter/#parameters","title":"Parameters","text":"Parameter Type Description update_rate double timer's update rate th_max_message_delay_sec double threshold time of input messages' maximum delay th_arrived_distance_m double threshold distance to check if vehicle has arrived at the trajectory's endpoint th_stopped_time_sec double threshold time to check if vehicle is stopped th_stopped_velocity_mps double threshold velocity to check if vehicle is stopped"},{"location":"vehicle/raw_vehicle_cmd_converter/#limitation","title":"Limitation","text":"

The current feedback implementation is applied only to steering control.

"},{"location":"vehicle/steer_offset_estimator/Readme/","title":"steer_offset_estimator","text":""},{"location":"vehicle/steer_offset_estimator/Readme/#purpose","title":"Purpose","text":"

The role of this node is to automatically calibrate steer_offset used in the vehicle_interface node.

The base steer offset value is 0 by default and is updated iteratively with the collected driving data. This module is intended to be used in a straight-driving situation such as the one shown below.

"},{"location":"vehicle/steer_offset_estimator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"

This node sequentially estimates the steering offset from a kinematic model and state observations. It computes the yaw rate error and then estimates the steering error recursively by the least squares method. For more details, see the updateSteeringOffset() function.
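As a rough illustration of this idea with a kinematic bicycle model (an assumption for this sketch; yaw_rate ≈ v * tan(steer + offset) / wheelbase), each straight-driving sample yields an offset observation that can be folded in recursively:

# Rough sketch of the estimation idea above (kinematic bicycle model assumed;
# see updateSteeringOffset() for the actual implementation).
import math

def offset_observation(yaw_rate, velocity, steer, wheelbase):
    """Back out a steering-offset observation from one yaw-rate sample."""
    steer_from_yaw = math.atan2(yaw_rate * wheelbase, velocity)
    return steer_from_yaw - steer

def rls_step(offset_est, cov, obs, forgetting_factor=0.999):
    """Scalar recursive least squares on the offset (cf. the covariance output)."""
    denom = forgetting_factor + cov
    offset_new = offset_est + (cov / denom) * (obs - offset_est)
    return offset_new, cov / denom

# Example: straight-driving samples (steer ~ 0) refine the estimate.
offset, cov = 0.0, 0.1  # illustrative initial covariance
for yaw_rate, v, steer in [(0.02, 8.0, 0.0), (0.019, 8.3, 0.0)]:
    obs = offset_observation(yaw_rate, v, steer, wheelbase=2.79)  # wheelbase illustrative
    offset, cov = rls_step(offset, cov, obs)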

"},{"location":"vehicle/steer_offset_estimator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"vehicle/steer_offset_estimator/Readme/#input","title":"Input","text":"Name Type Description ~/input/twist geometry_msgs::msg::TwistStamped vehicle twist ~/input/steer autoware_auto_vehicle_msgs::msg::SteeringReport steering"},{"location":"vehicle/steer_offset_estimator/Readme/#output","title":"Output","text":"Name Type Description ~/output/steering_offset tier4_debug_msgs::msg::Float32Stamped steering offset ~/output/steering_offset_covariance tier4_debug_msgs::msg::Float32Stamped covariance of steering offset"},{"location":"vehicle/steer_offset_estimator/Readme/#launch-calibrator","title":"Launch Calibrator","text":"

After launching Autoware, run the steer_offset_estimator with the following command and then perform autonomous driving. Note: you can collect data with manual driving if it is possible to use the same vehicle interface as during autonomous driving (e.g., using a joystick).

ros2 launch steer_offset_estimator steer_offset_estimator.launch.xml\n

Or if you want to use rosbag files, run the following commands.

ros2 param set /use_sim_time true\nros2 bag play <rosbag_file> --clock\n
"},{"location":"vehicle/steer_offset_estimator/Readme/#parameters","title":"Parameters","text":"Params Description steer_update_hz update hz initial_covariance steer offset is larger than tolerance forgetting_factor weight of using previous value valid_min_velocity velocity below this value is not used valid_max_steer steer above this value is not used warn_steer_offset_deg warn if offset is above this value"},{"location":"vehicle/steer_offset_estimator/Readme/#diagnostics","title":"Diagnostics","text":"

The steer_offset_estimator publishes a diagnostics message according to the calibration status. The diagnostic type WARN indicates that the current steer_offset is estimated to be inaccurate. In this situation, it is strongly recommended to perform a re-calibration of the steer_offset.

| Status | Diagnostics Type | Diagnostics message | | No calibration required | OK | \"Preparation\" | | Calibration Required | WARN | \"Steer offset is larger than tolerance\" |

This diagnostics status can also be checked on the following ROS topic.

ros2 topic echo /vehicle/status/steering_offset\n
"},{"location":"vehicle/vehicle_info_util/Readme/","title":"Vehicle Info Util","text":""},{"location":"vehicle/vehicle_info_util/Readme/#purpose","title":"Purpose","text":"

This package is used to get vehicle info parameters.

"},{"location":"vehicle/vehicle_info_util/Readme/#description","title":"Description","text":"

Here you can check the vehicle dimensions in more detail.

"},{"location":"vehicle/vehicle_info_util/Readme/#scripts","title":"Scripts","text":""},{"location":"vehicle/vehicle_info_util/Readme/#minimum-turning-radius","title":"Minimum turning radius","text":"
$ ros2 run vehicle_info_util min_turning_radius_calculator.py\nyaml path is /home/autoware/pilot-auto/install/vehicle_info_util/share/vehicle_info_util/config/vehicle_info.param.yaml\nMinimum turning radius is 3.253042620027102 [m] for rear, 4.253220695862465 [m] for front.\n

You can designate the YAML file with the -y option as follows.

ros2 run vehicle_info_util min_turning_radius_calculator.py -y <path-to-yaml>\n
"},{"location":"vehicle/vehicle_info_util/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"

TBD.

"}]} \ No newline at end of file diff --git a/latest/sitemap.xml.gz b/latest/sitemap.xml.gz index 02b255325d92c7db08981dbd9a4316efd91640b0..5d70c8946c1dc68c1296f46209b9eb02f7e73576 100644 GIT binary patch delta 15 WcmbOvHc5<4zMF%?{@zA59&P|3-vj&r delta 15 WcmbOvHc5<4zMF$1;p#><9&P|4cmxyx