Autoware uses radar_msgs/msg/RadarTracks.msg as the input data type for radar objects.
To make radar object data easy to use in the Autoware perception module, radar_tracks_msgs_converter converts the message type from radar_msgs/msg/RadarTracks.msg to autoware_auto_perception_msgs/msg/DetectedObject.
In addition, because many detection modules assume their input is given in the base_link frame, radar_tracks_msgs_converter also provides a frame_id transformation.
radar_tracks_msgs_converter converts the label from radar_msgs/msg/RadarTrack.msg to the Autoware label.
Label ids are defined as below.
Additional vendor-specific classifications are permitted starting from 32000 in radar_msgs/msg/RadarTrack.msg. Autoware object labels are defined in ObjectClassification.idl.
~/input/radar_objects (radar_msgs/msg/RadarTracks.msg): Input radar topic.
~/input/odometry (nav_msgs/msg/Odometry.msg): Ego vehicle odometry topic.
~/output/radar_detected_objects (autoware_auto_perception_msgs/msg/DetectedObject.idl): The input converted to Autoware's message type. This is used for radar sensor fusion detection and radar detection.
~/output/radar_tracked_objects (autoware_auto_perception_msgs/msg/TrackedObject.idl): The input converted to Autoware's message type. This is used for tracking-layer sensor fusion.
update_rate_hz (double) [hz]: The update rate for the onTimer function. This parameter should match the frame rate of the input topics.
new_frame_id (string): The header frame_id of the output topics.
use_twist_compensation (bool): Flag to compensate for the linear component of the ego vehicle's twist. If true, the twist of the output objects is compensated by the ego vehicle's linear motion.
use_twist_yaw_compensation (bool): Flag to compensate for the yaw rotation of the ego vehicle's twist. If true, the ego motion compensation also considers the yaw motion of the ego vehicle.
static_object_speed_threshold (float) [m/s]: The threshold used to determine the is_stationary flag. If the object's speed is lower than this threshold, the is_stationary flag of the DetectedObject is set to true and the object is treated as a static object.
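The twist compensation and stationary-flag logic described above can be sketched roughly as follows. This is a hypothetical sketch: the field and function names are illustrative rather than the node's actual implementation, and the yaw compensation is analogous.

```cpp
#include <cmath>

// Illustrative stand-in for the linear part of a twist message.
struct TwistLinear { double x, y; };

// Compensate the object twist by the ego's linear motion, then flag the
// object as stationary if its compensated speed is below the threshold.
bool is_stationary_object(
  const TwistLinear & object_twist, const TwistLinear & ego_twist,
  const bool use_twist_compensation, const double static_object_speed_threshold)
{
  TwistLinear compensated = object_twist;
  if (use_twist_compensation) {
    // radar measures twist relative to the moving ego vehicle
    compensated.x += ego_twist.x;
    compensated.y += ego_twist.y;
  }
  const double speed = std::hypot(compensated.x, compensated.y);
  return speed < static_object_speed_threshold;
}
```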
For Autoware's general documentation, see Autoware Documentation.
For detailed documents of Autoware Universe components, see Autoware Universe Documentation.
"},{"location":"CODE_OF_CONDUCT/","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#contributor-covenant-code-of-conduct","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#our-pledge","title":"Our Pledge","text":"We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
"},{"location":"CODE_OF_CONDUCT/#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to a positive environment for our community include:
Examples of unacceptable behavior include:
Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
"},{"location":"CODE_OF_CONDUCT/#scope","title":"Scope","text":"This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
"},{"location":"CODE_OF_CONDUCT/#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at conduct@autoware.org. All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the reporter of any incident.
"},{"location":"CODE_OF_CONDUCT/#enforcement-guidelines","title":"Enforcement Guidelines","text":"Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
"},{"location":"CODE_OF_CONDUCT/#1-correction","title":"1. Correction","text":"Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
"},{"location":"CODE_OF_CONDUCT/#2-warning","title":"2. Warning","text":"Community Impact: A violation through a single incident or series of actions.
Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
"},{"location":"CODE_OF_CONDUCT/#3-temporary-ban","title":"3. Temporary Ban","text":"Community Impact: A serious violation of community standards, including sustained inappropriate behavior.
Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
"},{"location":"CODE_OF_CONDUCT/#4-permanent-ban","title":"4. Permanent Ban","text":"Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
Consequence: A permanent ban from any sort of public interaction within the community.
"},{"location":"CODE_OF_CONDUCT/#attribution","title":"Attribution","text":"This Code of Conduct is adapted from the Contributor Covenant, version 2.1, available at https://www.contributor-covenant.org/version/2/1/code_of_conduct.html.
Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder.
For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
"},{"location":"CONTRIBUTING/","title":"Contributing","text":""},{"location":"CONTRIBUTING/#contributing","title":"Contributing","text":"See https://autowarefoundation.github.io/autoware-documentation/main/contributing/.
"},{"location":"DISCLAIMER/","title":"DISCLAIMER","text":"DISCLAIMER
“Autoware” will be provided by The Autoware Foundation under the Apache License 2.0. This “DISCLAIMER” will be applied to all users of Autoware (a “User” or “Users”) with the Apache License 2.0 and Users shall hereby approve and acknowledge all the contents specified in this disclaimer below and will be deemed to consent to this disclaimer without any objection upon utilizing or downloading Autoware.
Disclaimer and Waiver of Warranties
AUTOWARE FOUNDATION MAKES NO REPRESENTATION OR WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, WITH RESPECT TO PROVIDING AUTOWARE (the “Service”) including but not limited to any representation or warranty (i) of fitness or suitability for a particular purpose contemplated by the Users, (ii) of the expected functions, commercial value, accuracy, or usefulness of the Service, (iii) that the use by the Users of the Service complies with the laws and regulations applicable to the Users or any internal rules established by industrial organizations, (iv) that the Service will be free of interruption or defects, (v) of the non-infringement of any third party's right and (vi) the accuracy of the content of the Services and the software itself.
The Autoware Foundation shall not be liable for any damages incurred by the User that are attributable to the Autoware Foundation for any reason whatsoever. UNDER NO CIRCUMSTANCES SHALL THE AUTOWARE FOUNDATION BE LIABLE FOR INCIDENTAL, INDIRECT, SPECIAL OR FUTURE DAMAGES OR LOSS OF PROFITS.
A User shall be entirely responsible for the content posted by the User and its use of any content of the Service or the Website. If the User is held responsible in a civil action such as a claim for damages or even in a criminal case, the Autoware Foundation and member companies, governments and academic & non-profit organizations and their directors, officers, employees and agents (collectively, the “Indemnified Parties”) shall be completely discharged from any rights or assertions the User may have against the Indemnified Parties, or from any legal action, litigation or similar procedures.
Indemnity
A User shall indemnify and hold the Indemnified Parties harmless from any of their damages, losses, liabilities, costs or expenses (including attorneys' fees or criminal compensation), or any claims or demands made against the Indemnified Parties by any third party, due to or arising out of, or in connection with utilizing Autoware (including the representations and warranties), the violation of applicable Product Liability Law of each country (including criminal case) or violation of any applicable laws by the Users, or the content posted by the User or its use of any content of the Service or the Website.
"},{"location":"common/autoware_ad_api_specs/","title":"autoware_adapi_specs","text":""},{"location":"common/autoware_ad_api_specs/#autoware_adapi_specs","title":"autoware_adapi_specs","text":"This package is a specification of Autoware AD API.
"},{"location":"common/autoware_auto_common/design/comparisons/","title":"Comparisons","text":""},{"location":"common/autoware_auto_common/design/comparisons/#comparisons","title":"Comparisons","text":"The float_comparisons.hpp
library is a simple set of functions for performing approximate numerical comparisons. There are separate functions for performing comparisons using absolute bounds and relative bounds. Absolute comparison checks are prefixed with abs_
and relative checks are prefixed with rel_
.
The bool_comparisons.hpp
library additionally contains an XOR operator.
The intent of the library is to improve readability of code and reduce likelihood of typographical errors when using numerical and boolean comparisons.
"},{"location":"common/autoware_auto_common/design/comparisons/#target-use-cases","title":"Target use cases","text":"The approximate comparisons are intended to be used to check whether two numbers lie within some absolute or relative interval. The exclusive_or
function will test whether two values cast to different boolean values.
The approximate comparisons are parametrized by an epsilon
parameter. The value of this parameter must be >= 0.
#include \"autoware_auto_common/common/bool_comparisons.hpp\"\n#include \"autoware_auto_common/common/float_comparisons.hpp\"\n\n#include <iostream>\n\n// using-directive is just for illustration; don't do this in practice\nusing namespace autoware::common::helper_functions::comparisons;\n\nstatic constexpr auto epsilon = 0.2;\nstatic constexpr auto relative_epsilon = 0.01;\n\nstd::cout << exclusive_or(true, false) << \"\\n\";\n// Prints: true\n\nstd::cout << rel_eq(1.0, 1.1, relative_epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << approx_eq(10000.0, 10010.0, epsilon, relative_epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_eq(4.0, 4.2, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_ne(4.0, 4.2, epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << abs_eq_zero(0.2, epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << abs_lt(4.0, 4.25, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_lte(1.0, 1.2, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_gt(1.25, 1.0, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_gte(0.75, 1.0, epsilon) << \"\\n\";\n// Prints: false\n
"},{"location":"common/autoware_auto_geometry/design/interval/","title":"Interval","text":""},{"location":"common/autoware_auto_geometry/design/interval/#interval","title":"Interval","text":"The interval is a standard 1D real-valued interval. The class implements a representation and operations on the interval type and guarantees interval validity on construction. Basic operations and accessors are implemented, as well as other common operations. See 'Example Usage' below.
"},{"location":"common/autoware_auto_geometry/design/interval/#target-use-cases","title":"Target use cases","text":"NaN
.
#include \"autoware_auto_geometry/interval.hpp\"\n\n#include <iostream>\n\n// using-directive is just for illustration; don't do this in practice\nusing namespace autoware::common::geometry;\n\n// bounds for example interval\nconstexpr auto MIN = 0.0;\nconstexpr auto MAX = 1.0;\n\n//\n// Try to construct an invalid interval. This will give the following error:\n// 'Attempted to construct an invalid interval: {\"min\": 1.0, \"max\": 0.0}'\n//\n\ntry {\nconst auto i = Interval_d(MAX, MIN);\n} catch (const std::runtime_error& e) {\nstd::cerr << e.what();\n}\n\n//\n// Construct a double precision interval from 0 to 1\n//\n\nconst auto i = Interval_d(MIN, MAX);\n\n//\n// Test accessors and properties\n//\n\nstd::cout << Interval_d::min(i) << \" \" << Interval_d::max(i) << \"\\n\";\n// Prints: 0.0 1.0\n\nstd::cout << Interval_d::empty(i) << \" \" << Interval_d::length(i) << \"\\n\";\n// Prints: false 1.0\n\nstd::cout << Interval_d::contains(i, 0.3) << \"\\n\";\n// Prints: true\n\nstd::cout << Interval_d::is_subset_eq(Interval_d(0.2, 0.4), i) << \"\\n\";\n// Prints: true\n\n//\n// Test operations.\n//\n\nstd::cout << Interval_d::intersect(i, Interval_d(-1.0, 0.3)) << \"\\n\";\n// Prints: {\"min\": 0.0, \"max\": 0.3}\n\nstd::cout << Interval_d::project_to_interval(i, 0.5) << \" \"\n<< Interval_d::project_to_interval(i, -1.3) << \"\\n\";\n// Prints: 0.5 0.0\n\n//\n// Distinguish empty/zero measure\n//\n\nconst auto i_empty = Interval_d();\nconst auto i_zero_length = Interval_d(0.0, 0.0);\n\nstd::cout << Interval_d::empty(i_empty) << \" \"\n<< Interval_d::empty(i_zero_length) << \"\\n\";\n// Prints: true false\n\nstd::cout << Interval_d::zero_measure(i_empty) << \" \"\n<< Interval_d::zero_measure(i_zero_length) << \"\\n\";\n// Prints: false false\n
"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/","title":"2D Convex Polygon Intersection","text":""},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#2d-convex-polygon-intersection","title":"2D Convex Polygon Intersection","text":"Two convex polygon's intersection can be visualized on the image below as the blue area:
"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#purpose-use-cases","title":"Purpose / Use cases","text":"Computing the intersection between two polygons can be useful in many applications of scene understanding. It can be used to estimate collision detection, shape alignment, shape association and in any application that deals with the objects around the perceiving agent.
"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#design","title":"Design","text":"\\(Livermore, Calif, 1977\\) mention the following observations about convex polygon intersection:
With the observation mentioned above, the current algorithm operates in the following way:
Inputs:
Outputs:
The spatial hash is a data structure designed for efficient fixed-radius near-neighbor queries in low dimensions.
The fixed-radius near-neighbors problem is defined as follows:
For point p, find all points p' s.t. d(p, p') < r
Where in this case d(p, p')
is euclidean distance, and r
is the fixed radius.
For n
points with an average of k
neighbors each, this data structure can perform m
near-neighbor queries (to generate lists of near-neighbors for m
different points) in O(mk)
time.
By contrast, using a k-d tree for successive nearest-neighbor queries results in a running time of O(m log n)
.
The spatial hash works as follows: the data structure covers a bounded region of the x-y plane, specified by x_min/x_max
and y_min/y_max.
The region is divided into square bins, with the bin containing x_min
and y_min
as index (0, 0).
Under the hood, an std::unordered_multimap
is used, where the key is a bin/voxel index. The bin size was computed to be the same as the lookup distance.
In addition, this data structure can support 2D or 3D queries. This is determined during configuration, and baked into the data structure via the configuration class. The purpose of this was to avoid if statements in tight loops. The configuration class specializations themselves use CRTP (the Curiously Recurring Template Pattern) to do \"static polymorphism\", and avoid a dispatching call.
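To make the binning idea concrete, here is a minimal sketch under assumed names (this is not the package's actual API). Because the bin side equals the lookup radius r, all neighbors within r of a query point lie in the query's bin or one of its 8 surrounding bins in the 2D case; boundary and negative-coordinate handling is omitted for brevity.

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Point2 { double x, y; };

struct SpatialHash2dSketch {
  double x_min, y_min, r;  // region origin and bin side (= lookup radius)
  std::unordered_multimap<std::uint64_t, Point2> bins;

  std::uint64_t bin_index(double x, double y) const {
    const auto ix = static_cast<std::uint64_t>((x - x_min) / r);
    const auto iy = static_cast<std::uint64_t>((y - y_min) / r);
    return (ix << 32U) | iy;  // pack the 2D bin index into one key
  }

  void insert(const Point2 & p) { bins.emplace(bin_index(p.x, p.y), p); }

  // fixed-radius near-neighbor query: scan the 3x3 block of bins around q
  std::vector<Point2> near(const Point2 & q) const {
    std::vector<Point2> result;
    for (int dx = -1; dx <= 1; ++dx) {
      for (int dy = -1; dy <= 1; ++dy) {
        const auto key = bin_index(q.x + dx * r, q.y + dy * r);
        auto range = bins.equal_range(key);
        for (auto it = range.first; it != range.second; ++it) {
          if (std::hypot(it->second.x - q.x, it->second.y - q.y) < r) {
            result.push_back(it->second);
          }
        }
      }
    }
    return result;
  }
};
```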
"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#performance-characterization","title":"Performance characterization","text":""},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#time","title":"Time","text":"Insertion is O(n)
because lookup time for the underlying hashmap is O(n)
for hashmaps. In practice, lookup time for hashmaps and thus insertion time should be O(1)
.
Removing a point is O(1)
because the current API only supports removal via direct reference to a node.
Finding k
near-neighbors is worst case O(n)
in the case of an adversarial example, but in practice O(k)
.
The module consists of the following components: the underlying hashmap, which is O(n + n + A * n), where A
is an arbitrary constant (the load factor), and the stored data itself, which is O(n + n). This results in O(n)
space complexity.
The spatial hash's state is dictated by the status of the underlying unordered_multimap.
The data structure is wholly configured by a config class. The constructor of the config class determines whether the data structure accepts strictly 2D or strictly 3D queries.
"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#inputs","title":"Inputs","text":"The primary method of introducing data into the data structure is via the insert method.
"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#outputs","title":"Outputs","text":"The primary method of retrieving data from the data structure is via the near\\(2D configuration\\) or near \\(3D configuration\\) method.
The whole data structure can also be traversed using standard constant iterators.
"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#future-work","title":"Future Work","text":"It is an rviz plugin for visualizing the result from perception module. This package is based on the implementation of the rviz plugin developed by Autoware.Auto.
See Autoware.Auto design documentation for the original design philosophy. [1]
"},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types-visualization-results","title":"Input Types / Visualization Results","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#detectedobjects","title":"DetectedObjects","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types","title":"Input Types","text":"Name Type Descriptionautoware_auto_perception_msgs::msg::DetectedObjects
detection result array"},{"location":"common/autoware_auto_perception_rviz_plugin/#visualization-result","title":"Visualization Result","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#trackedobjects","title":"TrackedObjects","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types_1","title":"Input Types","text":"Name Type Description autoware_auto_perception_msgs::msg::TrackedObjects
tracking result array"},{"location":"common/autoware_auto_perception_rviz_plugin/#visualization-result_1","title":"Visualization Result","text":"Overwrite tracking results with detection results.
"},{"location":"common/autoware_auto_perception_rviz_plugin/#predictedobjects","title":"PredictedObjects","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types_2","title":"Input Types","text":"Name Type Descriptionautoware_auto_perception_msgs::msg::PredictedObjects
prediction result array"},{"location":"common/autoware_auto_perception_rviz_plugin/#visualization-result_2","title":"Visualization Result","text":"Overwrite prediction results with tracking results.
"},{"location":"common/autoware_auto_perception_rviz_plugin/#referencesexternal-links","title":"References/External links","text":"[1] https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/tree/master/src/tools/visualization/autoware_rviz_plugins
"},{"location":"common/autoware_auto_perception_rviz_plugin/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/","title":"autoware_auto_tf2","text":""},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#autoware_auto_tf2","title":"autoware_auto_tf2","text":"This is the design document for the autoware_auto_tf2
package.
In general, users of ROS rely on tf (and its successor, tf2) for publishing and utilizing coordinate frame transforms. This is true even to the extent that the tf2 contains the packages tf2_geometry_msgs
and tf2_sensor_msgs
which allow for easy conversion to and from the message types defined in geometry_msgs
and sensor_msgs
, respectively. However, AutowareAuto contains some specialized message types which are not transformable between frames using the ROS 2 library. The autoware_auto_tf2
package aims to provide developers with tools to transform applicable autoware_auto_msgs
types. In addition to this, this package also provides transform tools for messages types in geometry_msgs
missing in tf2_geometry_msgs
.
While writing tf2_some_msgs
or contributing to tf2_geometry_msgs
, compatibility and design intent was ensured with the following files in the existing tf2 framework:
tf2/convert.h
tf2_ros/buffer_interface.h
For example:
void tf2::convert( const A & a,B & b)\n
The method tf2::convert
is dependent on the following:
template<typename A, typename B>\nB tf2::toMsg(const A& a);\ntemplate<typename A, typename B>\nvoid tf2::fromMsg(const A&, B& b);\n\n// New way to transform instead of using tf2::doTransform() directly\ntf2_ros::BufferInterface::transform(...)\n
Which, in turn, is dependent on the following:
void tf2::convert( const A & a,B & b)\nconst std::string& tf2::getFrameId(const T& t)\nconst ros::Time& tf2::getTimestamp(const T& t);\n
"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#current-implementation-of-tf2_geometry_msgs","title":"Current Implementation of tf2_geometry_msgs","text":"In both ROS 1 and ROS 2 stamped msgs like Vector3Stamped
, QuaternionStamped
have associated functions like:
getTimestamp
getFrameId
doTransform
toMsg
fromMsg
In ROS 1, to support tf2::convert
and need in doTransform
of the stamped data, non-stamped underlying data like Vector3
, Point
, have implementations of the following functions:
toMsg
fromMsg
In ROS 2, much of the doTransform
method is not using toMsg
and fromMsg
as data types from tf2 are not used. Instead doTransform
is done using KDL
, thus functions relating to underlying data were not added; such as Vector3
, Point
, or ported in this commit ros/geometry2/commit/6f2a82. The non-stamped data with toMsg
and fromMsg
are Quaternion
, Transform
. Pose
has the modified toMsg
and not used by PoseStamped
.
The initial rough plan was to implement some of the common tf2 functions like toMsg
, fromMsg
, and doTransform
, as needed for all the underlying data types in BoundingBoxArray
. Examples of the data types include: BoundingBox
, Quaternion32
, and Point32
. In addition, the implementation should be done such that upstream contributions could also be made to geometry_msgs
.
Due to conflicts in a function signatures, the predefined template of convert.h
/ transform_functions.h
is not followed and compatibility with tf2::convert(..)
is broken and toMsg
is written differently.
// Old style\ngeometry_msgs::Vector3 toMsg(const tf2::Vector3& in)\ngeometry_msgs::Point& toMsg(const tf2::Vector3& in)\n\n// New style\ngeometry_msgs::Point& toMsg(const tf2::Vector3& in, geometry_msgs::Point& out)\n
"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"The library provides API doTransform
for the following data-types that are either not available in tf2_geometry_msgs
or the messages types are part of autoware_auto_msgs
and are therefore custom and not inherently supported by any of the tf2 libraries. The following APIs are provided for the following data types:
Point32
inline void doTransform(\nconst geometry_msgs::msg::Point32 & t_in,\ngeometry_msgs::msg::Point32 & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
Quaternion32
(autoware_auto_msgs
)inline void doTransform(\nconst autoware_auto_geometry_msgs::msg::Quaternion32 & t_in,\nautoware_auto_geometry_msgs::msg::Quaternion32 & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
BoundingBox
(autoware_auto_msgs
)inline void doTransform(\nconst BoundingBox & t_in, BoundingBox & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
BoundingBoxArray
inline void doTransform(\nconst BoundingBoxArray & t_in,\nBoundingBoxArray & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
In addition, the following helper methods are also added:
BoundingBoxArray
inline tf2::TimePoint getTimestamp(const BoundingBoxArray & t)\n\ninline std::string getFrameId(const BoundingBoxArray & t)\n
"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#challenges","title":"Challenges","text":"tf2_geometry_msgs
does not implement doTransform
for any non-stamped data types, but it is possible with the same function template. It is needed when transforming sub-data, with main data that does have a stamp and can call doTransform on the sub-data with the same transform. Is this a useful upstream contribution?tf2_geometry_msgs
does not have Point
, Point32
, does not seem it needs one, also the implementation of non-standard toMsg
would not help the convert.BoundingBox
uses 32-bit float like Quaternion32
and Point32
to save space, as they are used repeatedly in BoundingBoxArray
. While transforming is it better to convert to 64-bit Quaternion
, Point
, or PoseStamped
, to re-use existing implementation of doTransform
, or does it need to be implemented? It may not be simple to template.This is the design document for the autoware_testing
package.
The package aims to provide a unified way to add standard testing functionality to the package, currently supporting:
add_smoke_test
): launch a node with default configuration and ensure that it starts up and does not crash.Uses ros_testing
(which is an extension of launch_testing
) and provides some parametrized, reusable standard tests to run.
Parametrization is limited to package, executable names, parameters filename and executable arguments. Test namespace is set as 'test'. Parameters file for the package is expected to be in param
directory inside package.
To add a smoke test to your package tests, add test dependency on autoware_testing
to package.xml
<test_depend>autoware_testing</test_depend>\n
and add the following two lines to CMakeLists.txt
in the IF (BUILD_TESTING)
section:
find_package(autoware_testing REQUIRED)\nadd_smoke_test(<package_name> <executable_name> [PARAM_FILENAME <param_filename>] [EXECUTABLE_ARGUMENTS <arguments>])\n
Where
<package_name>
- [required] tested node package name.
<executable_name>
- [required] tested node executable name.
<param_filename>
- [optional] param filename. Default value is test.param.yaml
. Required mostly in situation where there are multiple smoke tests in a package and each requires different parameters set
<arguments>
- [optional] arguments passed to executable. By default no arguments are passed.
which adds <executable_name>_smoke_test
test to suite.
Example test result:
build/<package_name>/test_results/<package_name>/<executable_name>_smoke_test.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\n
"},{"location":"common/autoware_testing/design/autoware_testing-design/#references-external-links","title":"References / External links","text":"autoware_testing
Plugin for displaying 2D overlays over the RViz2 3D scene.
Based on the jsk_visualization package, under the 3-Clause BSD license.
"},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#purpose","title":"Purpose","text":"This plugin provides a visual and easy-to-understand display of vehicle speed, turn signal, steering status and gears.
"},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#input","title":"Input","text":"Name Type Description/vehicle/status/velocity_status
autoware_auto_vehicle_msgs::msg::VelocityReport
The topic is vehicle twist /vehicle/status/turn_indicators_status
autoware_auto_vehicle_msgs::msg::TurnIndicatorsReport
The topic is status of turn signal /vehicle/status/hazard_status
autoware_auto_vehicle_msgs::msg::HazardReport
The topic is status of hazard /vehicle/status/steering_status
autoware_auto_vehicle_msgs::msg::SteeringReport
The topic is status of steering /vehicle/status/gear_status
autoware_auto_vehicle_msgs::msg::GearReport
The topic is status of gear"},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#parameter","title":"Parameter","text":""},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#core-parameters","title":"Core Parameters","text":""},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#signaldisplay","title":"SignalDisplay","text":"Name Type Default Value Description property_width_
int 128 Width of the plotter window [px] property_height_
int 128 Height of the plotter window [px] property_left_
int 128 Left of the plotter window [px] property_top_
int 128 Top of the plotter window [px] property_signal_color_
QColor QColor(25, 255, 240) Turn Signal color"},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#usage","title":"Usage","text":"Start rviz and select Add under the Displays panel.
Select any one of the tier4_vehicle_rviz_plugin and press OK.
Enter the name of the topic where you want to view the status.
This plugin allows publishing and controlling the ros bag time.
"},{"location":"common/bag_time_manager_rviz_plugin/#output","title":"Output","text":"tbd.
"},{"location":"common/bag_time_manager_rviz_plugin/#howtouse","title":"HowToUse","text":"Start rviz and select panels/Add new panel.
Select BagTimeManagerPanel and press OK.
See bag_time_manager_rviz_plugin/BagTimeManagerPanel is added.
This package is a specification of component interfaces.
"},{"location":"common/component_interface_tools/","title":"component_interface_tools","text":""},{"location":"common/component_interface_tools/#component_interface_tools","title":"component_interface_tools","text":"This package provides the following tools for component interface.
"},{"location":"common/component_interface_tools/#service_log_checker","title":"service_log_checker","text":"Monitor the service log of component_interface_utils and display if the response status is an error.
"},{"location":"common/component_interface_utils/","title":"component_interface_utils","text":""},{"location":"common/component_interface_utils/#component_interface_utils","title":"component_interface_utils","text":""},{"location":"common/component_interface_utils/#features","title":"Features","text":"This is a utility package that provides the following features:
This package provides the wrappers for the interface classes of rclcpp. The wrappers limit the usage of the original class to enforce the processing recommended by the component interface. Do not inherit the class of rclcpp, and forward or wrap the member function that is allowed to be used.
"},{"location":"common/component_interface_utils/#instantiation-of-the-wrapper-class","title":"Instantiation of the wrapper class","text":"The wrapper class requires interface information in this format.
struct SampleService\n{\nusing Service = sample_msgs::srv::ServiceType;\nstatic constexpr char name[] = \"/sample/service\";\n};\n\nstruct SampleMessage\n{\nusing Message = sample_msgs::msg::MessageType;\nstatic constexpr char name[] = \"/sample/message\";\nstatic constexpr size_t depth = 1;\nstatic constexpr auto reliability = RMW_QOS_POLICY_RELIABILITY_RELIABLE;\nstatic constexpr auto durability = RMW_QOS_POLICY_DURABILITY_TRANSIENT_LOCAL;\n};\n
Create the wrapper using the above definition as follows.
// header file\ncomponent_interface_utils::Service<SampleService>::SharedPtr srv_;\ncomponent_interface_utils::Client<SampleService>::SharedPtr cli_;\ncomponent_interface_utils::Publisher<SampleMessage>::SharedPtr pub_;\ncomponent_interface_utils::Subscription<SampleMessage>::SharedPtr sub_;\n\n// source file\nconst auto node = component_interface_utils::NodeAdaptor(this);\nnode.init_srv(srv_, callback);\nnode.init_cli(cli_);\nnode.init_pub(pub_);\nnode.init_sub(sub_, callback);\n
"},{"location":"common/component_interface_utils/#logging-for-service-and-client","title":"Logging for service and client","text":"If the wrapper class is used, logging is automatically enabled. The log level is RCLCPP_INFO
.
If the wrapper class is used and the service response has status, throwing ServiceException
will automatically catch and set it to status. This is useful when returning an error from a function called from the service callback.
void service_callback(Request req, Response res)\n{\nfunction();\nres->status.success = true;\n}\n\nvoid function()\n{\nthrow ServiceException(ERROR_CODE, \"message\");\n}\n
If the wrapper class is not used or the service response has no status, manually catch the ServiceException
as follows.
void service_callback(Request req, Response res)\n{\ntry {\nfunction();\nres->status.success = true;\n} catch (const ServiceException & error) {\nres->status = error.status();\n}\n}\n
"},{"location":"common/component_interface_utils/#relays-for-topic-and-service","title":"Relays for topic and service","text":"There are utilities for relaying services and messages of the same type.
const auto node = component_interface_utils::NodeAdaptor(this);\nservice_callback_group_ = create_callback_group(rclcpp::CallbackGroupType::MutuallyExclusive);\nnode.relay_message(pub_, sub_);\nnode.relay_service(cli_, srv_, service_callback_group_); // group is for avoiding deadlocks\n
"},{"location":"common/cuda_utils/","title":"cuda_utils","text":""},{"location":"common/cuda_utils/#cuda_utils","title":"cuda_utils","text":""},{"location":"common/cuda_utils/#purpose","title":"Purpose","text":"This package contains a library of common functions related to CUDA.
"},{"location":"common/fake_test_node/design/fake_test_node-design/","title":"Fake Test Node","text":""},{"location":"common/fake_test_node/design/fake_test_node-design/#fake-test-node","title":"Fake Test Node","text":""},{"location":"common/fake_test_node/design/fake_test_node-design/#what-this-package-provides","title":"What this package provides","text":"When writing an integration test for a node in C++ using GTest, there is quite some boilerplate code that needs to be written to set up a fake node that would publish expected messages on an expected topic and subscribes to messages on some other topic. This is usually implemented as a custom GTest fixture.
This package contains a library that introduces two utility classes that can be used in place of custom fixtures described above to write integration tests for a node:
autoware::tools::testing::FakeTestNode
- to use as a custom test fixture with TEST_F
testsautoware::tools::testing::FakeTestNodeParametrized
- to use a custom test fixture with the parametrized TEST_P
tests (accepts a template parameter that gets forwarded to testing::TestWithParam<T>
)These fixtures take care of initializing and re-initializing rclcpp as well as of checking that all subscribers and publishers have a match, thus reducing the amount of boilerplate code that the user needs to write.
"},{"location":"common/fake_test_node/design/fake_test_node-design/#how-to-use-this-library","title":"How to use this library","text":"After including the relevant header the user can use a typedef to use a custom fixture name and use the provided classes as fixtures in TEST_F
and TEST_P
tests directly.
Let's say there is a node NodeUnderTest
that requires testing. It just subscribes to std_msgs::msg::Int32
messages and publishes a std_msgs::msg::Bool
to indicate that the input is positive. To test such a node the following code can be used utilizing the autoware::tools::testing::FakeTestNode
:
using FakeNodeFixture = autoware::tools::testing::FakeTestNode;\n\n/// @test Test that we can use a non-parametrized test.\nTEST_F(FakeNodeFixture, Test) {\nInt32 msg{};\nmsg.data = 15;\nconst auto node = std::make_shared<NodeUnderTest>();\n\nBool::SharedPtr last_received_msg{};\nauto fake_odom_publisher = create_publisher<Int32>(\"/input_topic\");\nauto result_odom_subscription = create_subscription<Bool>(\"/output_topic\", *node,\n[&last_received_msg](const Bool::SharedPtr msg) {last_received_msg = msg;});\n\nconst auto dt{std::chrono::milliseconds{100LL}};\nconst auto max_wait_time{std::chrono::seconds{10LL}};\nauto time_passed{std::chrono::milliseconds{0LL}};\nwhile (!last_received_msg) {\nfake_odom_publisher->publish(msg);\nrclcpp::spin_some(node);\nrclcpp::spin_some(get_fake_node());\nstd::this_thread::sleep_for(dt);\ntime_passed += dt;\nif (time_passed > max_wait_time) {\nFAIL() << \"Did not receive a message soon enough.\";\n}\n}\nEXPECT_TRUE(last_received_msg->data);\nSUCCEED();\n}\n
Here only the TEST_F
example is shown but a TEST_P
usage is very similar with a little bit more boilerplate to set up all the parameter values, see test_fake_test_node.cpp
for an example usage.
This package contains geography-related functions used by other packages, so please refer to them as needed.
"},{"location":"common/global_parameter_loader/Readme/","title":"Autoware Global Parameter Loader","text":""},{"location":"common/global_parameter_loader/Readme/#autoware-global-parameter-loader","title":"Autoware Global Parameter Loader","text":"This package is to set common ROS parameters to each node.
"},{"location":"common/global_parameter_loader/Readme/#usage","title":"Usage","text":"Add the following lines to the launch file of the node in which you want to get global parameters.
<!-- Global parameters -->\n<include file=\"$(find-pkg-share global_parameter_loader)/launch/global_params.launch.py\">\n<arg name=\"vehicle_model\" value=\"$(var vehicle_model)\"/>\n</include>\n
The vehicle model parameter is read from config/vehicle_info.param.yaml
in vehicle_model
_description package.
Currently only vehicle_info is loaded by this launcher.
"},{"location":"common/glog_component/","title":"glog_component","text":""},{"location":"common/glog_component/#glog_component","title":"glog_component","text":"This package provides the glog (google logging library) feature as a ros2 component library. This is used to dynamically load the glog feature with container.
See the glog github for the details of its features.
"},{"location":"common/glog_component/#example","title":"Example","text":"When you load the glog_component
in container, the launch file can be like below:
glog_component = ComposableNode(\n package=\"glog_component\",\n plugin=\"GlogComponent\",\n name=\"glog_component\",\n)\n\ncontainer = ComposableNodeContainer(\n name=\"my_container\",\n namespace=\"\",\n package=\"rclcpp_components\",\n executable=LaunchConfiguration(\"container_executable\"),\n composable_node_descriptions=[\n component1,\n component2,\n glog_component,\n ],\n)\n
"},{"location":"common/goal_distance_calculator/Readme/","title":"goal_distance_calculator","text":""},{"location":"common/goal_distance_calculator/Readme/#goal_distance_calculator","title":"goal_distance_calculator","text":""},{"location":"common/goal_distance_calculator/Readme/#purpose","title":"Purpose","text":"This node publishes deviation of self-pose from goal pose.
"},{"location":"common/goal_distance_calculator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/goal_distance_calculator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/goal_distance_calculator/Readme/#input","title":"Input","text":"Name Type Description/planning/mission_planning/route
autoware_auto_planning_msgs::msg::Route
Used to get goal pose /tf
tf2_msgs/TFMessage
TF (self-pose)"},{"location":"common/goal_distance_calculator/Readme/#output","title":"Output","text":"Name Type Description deviation/lateral
tier4_debug_msgs::msg::Float64Stamped
publish lateral deviation of self-pose from goal pose[m] deviation/longitudinal
tier4_debug_msgs::msg::Float64Stamped
publish longitudinal deviation of self-pose from goal pose[m] deviation/yaw
tier4_debug_msgs::msg::Float64Stamped
publish yaw deviation of self-pose from goal pose[rad] deviation/yaw_deg
tier4_debug_msgs::msg::Float64Stamped
publish yaw deviation of self-pose from goal pose[deg]"},{"location":"common/goal_distance_calculator/Readme/#parameters","title":"Parameters","text":""},{"location":"common/goal_distance_calculator/Readme/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate
double 10.0 Timer callback period. [Hz]"},{"location":"common/goal_distance_calculator/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation oneshot
bool true publish deviations just once or repeatedly"},{"location":"common/goal_distance_calculator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/grid_map_utils/","title":"Grid Map Utils","text":""},{"location":"common/grid_map_utils/#grid-map-utils","title":"Grid Map Utils","text":""},{"location":"common/grid_map_utils/#overview","title":"Overview","text":"This packages contains a re-implementation of the grid_map::PolygonIterator
used to iterate over all cells of a grid map contained inside some polygon.
This implementation uses the scan line algorithm, a common algorithm used to draw polygons on a rasterized image. The main idea of the algorithm adapted to a grid map is as follow:
(row, column)
indexes are inside of the polygon.More details on the scan line algorithm can be found in the References.
"},{"location":"common/grid_map_utils/#api","title":"API","text":"The grid_map_utils::PolygonIterator
follows the same API as the original grid_map::PolygonIterator
.
The behavior of the grid_map_utils::PolygonIterator
is only guaranteed to match the grid_map::PolygonIterator
if edges of the polygon do not exactly cross any cell center. In such a case, whether the crossed cell is considered inside or outside of the polygon can vary due to floating precision error.
Benchmarking code is implemented in test/benchmarking.cpp
and is also used to validate that the grid_map_utils::PolygonIterator
behaves exactly like the grid_map::PolygonIterator
.
The following figure shows a comparison of the runtime between the implementation of this package (grid_map_utils
) and the original implementation (grid_map
). The time measured includes the construction of the iterator and the iteration over all indexes and is shown using a logarithmic scale. Results were obtained varying the side size of a square grid map with 100 <= n <= 1000
(size=n
means a grid of n x n
cells), random polygons with a number of vertices 3 <= m <= 100
and with each parameter (n,m)
repeated 10 times.
There exists variations of the scan line algorithm for multiple polygons. These can be implemented if we want to iterate over the cells contained in at least one of multiple polygons.
The current implementation imitate the behavior of the original grid_map::PolygonIterator
where a cell is selected if its center position is inside the polygon. This behavior could be changed for example to only return all cells overlapped by the polygon.
This package supplies linear and spline interpolation functions.
"},{"location":"common/interpolation/#linear-interpolation","title":"Linear Interpolation","text":"lerp(src_val, dst_val, ratio)
(for scalar interpolation) interpolates src_val
and dst_val
with ratio
. This will be replaced with std::lerp(src_val, dst_val, ratio)
in C++20
.
lerp(base_keys, base_values, query_keys)
(for vector interpolation) applies linear regression to each two continuous points whose x values arebase_keys
and whose y values are base_values
. Then it calculates interpolated values on y-axis for query_keys
on x-axis.
spline(base_keys, base_values, query_keys)
(for vector interpolation) applies spline regression to each two continuous points whose x values arebase_keys
and whose y values are base_values
. Then it calculates interpolated values on y-axis for query_keys
on x-axis.
We evaluated calculation cost of spline interpolation for 100 points, and adopted the best one which is tridiagonal matrix algorithm. Methods except for tridiagonal matrix algorithm exists in spline_interpolation
package, which has been removed from Autoware.
Assuming that the size of base_keys
(\\(x_i\\)) and base_values
(\\(y_i\\)) are \\(N + 1\\), we aim to calculate spline interpolation with the following equation to interpolate between \\(y_i\\) and \\(y_{i+1}\\).
Constraints on spline interpolation are as follows. The number of constraints is \\(4N\\), which is equal to the number of variables of spline interpolation.
\\[ \\begin{align} Y_i (x_i) & = y_i \\ \\ \\ (i = 0, \\dots, N-1) \\\\ Y_i (x_{i+1}) & = y_{i+1} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ Y'_i (x_{i+1}) & = Y'_{i+1} (x_{i+1}) \\ \\ \\ (i = 0, \\dots, N-2) \\\\ Y''_i (x_{i+1}) & = Y''_{i+1} (x_{i+1}) \\ \\ \\ (i = 0, \\dots, N-2) \\\\ Y''_0 (x_0) & = 0 \\\\ Y''_{N-1} (x_N) & = 0 \\end{align} \\]According to this article, spline interpolation is formulated as the following linear equation.
\\[ \\begin{align} \\begin{pmatrix} 2(h_0 + h_1) & h_1 \\\\ h_0 & 2 (h_1 + h_2) & h_2 & & O \\\\ & & & \\ddots \\\\ O & & & & h_{N-2} & 2 (h_{N-2} + h_{N-1}) \\end{pmatrix} \\begin{pmatrix} v_1 \\\\ v_2 \\\\ v_3 \\\\ \\vdots \\\\ v_{N-1} \\end{pmatrix}= \\begin{pmatrix} w_1 \\\\ w_2 \\\\ w_3 \\\\ \\vdots \\\\ w_{N-1} \\end{pmatrix} \\end{align} \\]where
\\[ \\begin{align} h_i & = x_{i+1} - x_i \\ \\ \\ (i = 0, \\dots, N-1) \\\\ w_i & = 6 \\left(\\frac{y_{i+1} - y_{i+1}}{h_i} - \\frac{y_i - y_{i-1}}{h_{i-1}}\\right) \\ \\ \\ (i = 1, \\dots, N-1) \\end{align} \\]The coefficient matrix of this linear equation is tridiagonal matrix. Therefore, it can be solve with tridiagonal matrix algorithm, which can solve linear equations without gradient descent methods.
Solving this linear equation with tridiagonal matrix algorithm, we can calculate coefficients of spline interpolation as follows.
\\[ \\begin{align} a_i & = \\frac{v_{i+1} - v_i}{6 (x_{i+1} - x_i)} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ b_i & = \\frac{v_i}{2} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ c_i & = \\frac{y_{i+1} - y_i}{x_{i+1} - x_i} - \\frac{1}{6}(x_{i+1} - x_i)(2 v_i + v_{i+1}) \\ \\ \\ (i = 0, \\dots, N-1) \\\\ d_i & = y_i \\ \\ \\ (i = 0, \\dots, N-1) \\end{align} \\]"},{"location":"common/interpolation/#tridiagonal-matrix-algorithm","title":"Tridiagonal Matrix Algorithm","text":"We solve tridiagonal linear equation according to this article where variables of linear equation are expressed as follows in the implementation.
\\[ \\begin{align} \\begin{pmatrix} b_0 & c_0 & & \\\\ a_0 & b_1 & c_2 & O \\\\ & & \\ddots \\\\ O & & a_{N-2} & b_{N-1} \\end{pmatrix} x = \\begin{pmatrix} d_0 \\\\ d_2 \\\\ d_3 \\\\ \\vdots \\\\ d_{N-1} \\end{pmatrix} \\end{align} \\]"},{"location":"common/kalman_filter/","title":"kalman_filter","text":""},{"location":"common/kalman_filter/#kalman_filter","title":"kalman_filter","text":""},{"location":"common/kalman_filter/#purpose","title":"Purpose","text":"This common package contains the kalman filter with time delay and the calculation of the kalman filter.
"},{"location":"common/kalman_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/motion_utils/","title":"Motion Utils package","text":""},{"location":"common/motion_utils/#motion-utils-package","title":"Motion Utils package","text":""},{"location":"common/motion_utils/#definition-of-terms","title":"Definition of terms","text":""},{"location":"common/motion_utils/#segment","title":"Segment","text":"Segment
in Autoware is the line segment between two successive points as follows.
The nearest segment index and nearest point index to a certain position is not always th same. Therefore, we prepare two different utility functions to calculate a nearest index for points and segments.
"},{"location":"common/motion_utils/#nearest-index-search","title":"Nearest index search","text":"In this section, the nearest index and nearest segment index search is explained.
We have the same functions for the nearest index search and nearest segment index search. Taking for the example the nearest index search, we have two types of functions.
The first function finds the nearest index with distance and yaw thresholds.
template <class T>\nsize_t findFirstNearestIndexWithSoftConstraints(\nconst T & points, const geometry_msgs::msg::Pose & pose,\nconst double dist_threshold = std::numeric_limits<double>::max(),\nconst double yaw_threshold = std::numeric_limits<double>::max());\n
This function finds the first local solution within thresholds. The reason to find the first local one is to deal with some edge cases explained in the next subsection.
There are default parameters for thresholds arguments so that you can decide which thresholds to pass to the function.
The second function finds the nearest index in the lane whose id is lane_id
.
size_t findNearestIndexFromLaneId(\nconst autoware_auto_planning_msgs::msg::PathWithLaneId & path,\nconst geometry_msgs::msg::Point & pos, const int64_t lane_id);\n
"},{"location":"common/motion_utils/#application-to-various-object","title":"Application to various object","text":"Many node packages often calculate the nearest index of objects. We will explain the recommended method to calculate it.
"},{"location":"common/motion_utils/#nearest-index-for-the-ego","title":"Nearest index for the ego","text":"Assuming that the path length before the ego is short enough, we expect to find the correct nearest index in the following edge cases by findFirstNearestIndexWithSoftConstraints
with both distance and yaw thresholds. Blue circles describes the distance threshold from the base link position and two blue lines describe the yaw threshold against the base link orientation. Among points in these cases, the correct nearest point which is red can be found.
Therefore, the implementation is as follows.
const size_t ego_nearest_idx = findFirstNearestIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\nconst size_t ego_nearest_seg_idx = findFirstNearestIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\n
"},{"location":"common/motion_utils/#nearest-index-for-dynamic-objects","title":"Nearest index for dynamic objects","text":"For the ego nearest index, the orientation is considered in addition to the position since the ego is supposed to follow the points. However, for the dynamic objects (e.g., predicted object), sometimes its orientation may be different from the points order, e.g. the dynamic object driving backward although the ego is driving forward.
Therefore, the yaw threshold should not be considered for the dynamic object. The implementation is as follows.
const size_t dynamic_obj_nearest_idx = findFirstNearestIndexWithSoftConstraints(points, dynamic_obj_pose, dynamic_obj_nearest_dist_threshold);\nconst size_t dynamic_obj_nearest_seg_idx = findFirstNearestIndexWithSoftConstraints(points, dynamic_obj_pose, dynamic_obj_nearest_dist_threshold);\n
"},{"location":"common/motion_utils/#nearest-index-for-traffic-objects","title":"Nearest index for traffic objects","text":"In lanelet maps, traffic objects belong to the specific lane. With this specific lane's id, the correct nearest index can be found.
The implementation is as follows.
// first extract `lane_id` which the traffic object belong to.\nconst size_t traffic_obj_nearest_idx = findNearestIndexFromLaneId(path_with_lane_id, traffic_obj_pos, lane_id);\nconst size_t traffic_obj_nearest_seg_idx = findNearestSegmentIndexFromLaneId(path_with_lane_id, traffic_obj_pos, lane_id);\n
"},{"location":"common/motion_utils/#pathtrajectory-length-calculation-between-designated-points","title":"Path/Trajectory length calculation between designated points","text":"Based on the discussion so far, the nearest index search algorithm is different depending on the object type. Therefore, we recommended using the wrapper utility functions which require the nearest index search (e.g., calculating the path length) with each nearest index search.
For example, when we want to calculate the path length between the ego and the dynamic object, the implementation is as follows.
const size_t ego_nearest_seg_idx = findFirstNearestSegmentIndex(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\nconst size_t dyn_obj_nearest_seg_idx = findFirstNearestSegmentIndex(points, dyn_obj_pose, dyn_obj_nearest_dist_threshold);\nconst double length_from_ego_to_obj = calcSignedArcLength(points, ego_pose, ego_nearest_seg_idx, dyn_obj_pose, dyn_obj_nearest_seg_idx);\n
"},{"location":"common/motion_utils/#for-developers","title":"For developers","text":"Some of the template functions in trajectory.hpp
are mostly used for specific types (autoware_auto_planning_msgs::msg::PathPoint
, autoware_auto_planning_msgs::msg::PathPoint
, autoware_auto_planning_msgs::msg::TrajectoryPoint
), so they are exported as extern template
functions to speed-up compilation time.
motion_utils.hpp
header file was removed because the source files that directly/indirectly include this file took a long time for preprocessing.
Vehicle utils provides a convenient library used to check vehicle status.
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#feature","title":"Feature","text":"The library contains following classes.
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#vehicle_stop_checker","title":"vehicle_stop_checker","text":"This class check whether the vehicle is stopped or not based on localization result.
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Description/localization/kinematic_state
nav_msgs::msg::Odometry
vehicle odometry"},{"location":"common/motion_utils/docs/vehicle/vehicle/#parameters","title":"Parameters","text":"Name Type Default Value Explanation velocity_buffer_time_sec
double 10.0 odometry buffering time [s]"},{"location":"common/motion_utils/docs/vehicle/vehicle/#member-functions","title":"Member functions","text":"bool isVehicleStopped(const double stop_duration)\n
Returns true
if the vehicle has been stopped for the given duration, even if the system outputs a non-zero target velocity. Necessary includes:
#include <tier4_autoware_utils/vehicle/vehicle_state_checker.hpp>\n
1. Create a checker instance.
class SampleNode : public rclcpp::Node\n{\npublic:\nSampleNode() : Node(\"sample_node\")\n{\nvehicle_stop_checker_ = std::make_unique<VehicleStopChecker>(this);\n}\n\nstd::unique_ptr<VehicleStopChecker> vehicle_stop_checker_;\n\nbool sampleFunc();\n\n...\n}\n
2. Check the vehicle state.
bool SampleNode::sampleFunc()\n{\n...\n\nconst auto result_1 = vehicle_stop_checker_->isVehicleStopped();\n\n...\n\nconst auto result_2 = vehicle_stop_checker_->isVehicleStopped(3.0);\n\n...\n}\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#vehicle_arrival_checker","title":"vehicle_arrival_checker","text":"This class check whether the vehicle arrive at stop point based on localization and planning result.
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#subscribed-topics_1","title":"Subscribed Topics","text":"Name Type Description/localization/kinematic_state
nav_msgs::msg::Odometry
vehicle odometry /planning/scenario_planning/trajectory
autoware_auto_planning_msgs::msg::Trajectory
trajectory"},{"location":"common/motion_utils/docs/vehicle/vehicle/#parameters_1","title":"Parameters","text":"Name Type Default Value Explanation velocity_buffer_time_sec
double 10.0 odometry buffering time [s] th_arrived_distance_m
double 1.0 threshold distance to check if vehicle has arrived at target point [m]"},{"location":"common/motion_utils/docs/vehicle/vehicle/#member-functions_1","title":"Member functions","text":"bool isVehicleStopped(const double stop_duration)\n
Returns true
if the vehicle has been stopped for the given duration, even if the system outputs a non-zero target velocity.
bool isVehicleStoppedAtStopPoint(const double stop_duration)\n
Returns true
if the vehicle is not only stopped but has also arrived at the stop point. Necessary includes:
#include <tier4_autoware_utils/vehicle/vehicle_state_checker.hpp>\n
1. Create a checker instance.
class SampleNode : public rclcpp::Node\n{\npublic:\nSampleNode() : Node(\"sample_node\")\n{\nvehicle_arrival_checker_ = std::make_unique<VehicleArrivalChecker>(this);\n}\n\nstd::unique_ptr<VehicleArrivalChecker> vehicle_arrival_checker_;\n\nbool sampleFunc();\n\n...\n}\n
2. Check the vehicle state.
bool SampleNode::sampleFunc()\n{\n...\n\nconst auto result_1 = vehicle_arrival_checker_->isVehicleStopped();\n\n...\n\nconst auto result_2 = vehicle_arrival_checker_->isVehicleStopped(3.0);\n\n...\n\nconst auto result_3 = vehicle_arrival_checker_->isVehicleStoppedAtStopPoint();\n\n...\n\nconst auto result_4 = vehicle_arrival_checker_->isVehicleStoppedAtStopPoint(3.0);\n\n...\n}\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#assumptions-known-limits","title":"Assumptions / Known limits","text":"vehicle_stop_checker
and vehicle_arrival_checker
cannot check whether the vehicle has been stopped for more than velocity_buffer_time_sec
seconds.
This package contains a library of common functions that are useful across the object recognition module. This package may include functions for converting between different data types, msg types, and performing common operations on them.
"},{"location":"common/osqp_interface/design/osqp_interface-design/","title":"Interface for the OSQP library","text":""},{"location":"common/osqp_interface/design/osqp_interface-design/#interface-for-the-osqp-library","title":"Interface for the OSQP library","text":"This is the design document for the osqp_interface
package.
This package provides a C++ interface for the OSQP library.
"},{"location":"common/osqp_interface/design/osqp_interface-design/#design","title":"Design","text":"The class OSQPInterface
takes a problem formulation as Eigen matrices and vectors, converts these objects into C-style Compressed-Column-Sparse matrices and dynamic arrays, loads the data into the OSQP workspace dataholder, and runs the optimizer.
The interface can be used in several ways:
Initialize the interface WITHOUT data. Load the problem formulation at the optimization call.
osqp_interface = OSQPInterface();\nosqp_interface.optimize(P, A, q, l, u);\n
Initialize the interface WITH data.
osqp_interface = OSQPInterface(P, A, q, l, u);\nosqp_interface.optimize();\n
WARM START OPTIMIZATION by modifying the problem formulation between optimization runs.
osqp_interface = OSQPInterface(P, A, q, l, u);\nosqp_interface.optimize();\nosqp_interface.initializeProblem(P_new, A_new, q_new, l_new, u_new);\nosqp_interface.optimize();\n
The optimization results are returned as a vector by the optimization function.
std::tuple<std::vector<double>, std::vector<double>> result = osqp_interface.optimize();\nstd::vector<double> param = std::get<0>(result);\ndouble x_0 = param[0];\ndouble x_1 = param[1];\n
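For reference, the problem data can be assembled as Eigen matrices and std::vector arrays; the following is a hedged sketch with illustrative values only (a 2-variable QP: minimize 0.5 x'Px + q'x subject to l <= Ax <= u):
Eigen::MatrixXd P(2, 2);\nP << 4.0, 1.0, 1.0, 2.0;\nEigen::MatrixXd A(3, 2);\nA << 1.0, 1.0, 1.0, 0.0, 0.0, 1.0;\nstd::vector<double> q = {1.0, 1.0};\nstd::vector<double> l = {1.0, 0.0, 0.0};\nstd::vector<double> u = {1.0, 0.7, 0.7};\n\nosqp_interface = OSQPInterface();\nosqp_interface.optimize(P, A, q, l, u);\n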
This node publishes the distance from the path point closest to the self-position to the end point of the path. Note that the distance means the arc length along the path, not the Euclidean distance between the two points; a minimal sketch follows below.
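A minimal sketch of what the arc-length distance means (illustrative code, not the node's actual implementation):
struct Point2d { double x; double y; };\n\n// Arc length from the path point at index from_idx to the end of the path,\n// i.e., the sum of segment lengths rather than the straight-line distance.\ndouble calcArcLengthToEnd(const std::vector<Point2d> & path, const size_t from_idx)\n{\n  double length = 0.0;\n  for (size_t i = from_idx + 1; i < path.size(); ++i) {\n    length += std::hypot(path[i].x - path[i - 1].x, path[i].y - path[i - 1].y);\n  }\n  return length;\n}\n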
"},{"location":"common/path_distance_calculator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/path_distance_calculator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/path_distance_calculator/Readme/#input","title":"Input","text":"Name Type Description/planning/scenario_planning/lane_driving/behavior_planning/path
autoware_auto_planning_msgs::msg::Path
Reference path /tf
tf2_msgs/TFMessage
TF (self-pose)"},{"location":"common/path_distance_calculator/Readme/#output","title":"Output","text":"Name Type Description ~/distance
tier4_debug_msgs::msg::Float64Stamped
Distance from the path point closest to the self-position to the end point of the path [m]"},{"location":"common/path_distance_calculator/Readme/#parameters","title":"Parameters","text":""},{"location":"common/path_distance_calculator/Readme/#node-parameters","title":"Node Parameters","text":"None.
"},{"location":"common/path_distance_calculator/Readme/#core-parameters","title":"Core Parameters","text":"None.
"},{"location":"common/path_distance_calculator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/perception_utils/","title":"perception_utils","text":""},{"location":"common/perception_utils/#perception_utils","title":"perception_utils","text":""},{"location":"common/perception_utils/#purpose","title":"Purpose","text":"This package contains a library of common functions that are useful across the perception module.
"},{"location":"common/polar_grid/Readme/","title":"Polar Grid","text":""},{"location":"common/polar_grid/Readme/#polar-grid","title":"Polar Grid","text":""},{"location":"common/polar_grid/Readme/#purpose","title":"Purpose","text":"This plugin displays polar grid around ego vehicle in Rviz.
"},{"location":"common/polar_grid/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value ExplanationMax Range
float 200.0f max range for polar grid. [m] Wave Velocity
float 100.0f wave ring velocity. [m/s] Delta Range
float 10.0f wave ring distance for polar grid. [m]"},{"location":"common/polar_grid/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/qp_interface/design/qp_interface-design/","title":"Interface for QP solvers","text":""},{"location":"common/qp_interface/design/qp_interface-design/#interface-for-qp-solvers","title":"Interface for QP solvers","text":"This is the design document for the qp_interface
package.
This package provides a C++ interface for QP solvers. Currently, the supported QP solvers are
The class QPInterface
takes a problem formulation as Eigen matrices and vectors, converts these objects into C-style Compressed-Column-Sparse matrices and dynamic arrays, loads the data into the QP workspace dataholder, and runs the optimizer.
The interface can be used in several ways:
Initialize the interface, and load the problem formulation at the optimization call.
QPInterface qp_interface;\nqp_interface.optimize(P, A, q, l, u);\n
WARM START OPTIMIZATION by modifying the problem formulation between optimization runs.
QPInterface qp_interface(true);\nqp_interface.optimize(P, A, q, l, u);\nqp_interface.optimize(P_new, A_new, q_new, l_new, u_new);\n
The optimization results are returned as a vector by the optimization function.
const auto solution = qp_interface.optimize();\ndouble x_0 = solution[0];\ndouble x_1 = solution[1];\n
The purpose of this Rviz plugin is
To display each content of RTC status.
To switch each module of RTC auto mode.
To change RTC cooperate commands by button.
/api/external/get/rtc_status
tier4_rtc_msgs::msg::CooperateStatusArray
The statuses of each Cooperate Commands"},{"location":"common/rtc_manager_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/external/set/rtc_commands
tier4_rtc_msgs::srv::CooperateCommands
The Cooperate Commands for each planning /planning/enable_auto_mode/*
tier4_rtc_msgs::srv::AutoMode
The Cooperate Commands mode for each planning module"},{"location":"common/rtc_manager_rviz_plugin/#howtouse","title":"HowToUse","text":"Start rviz and select panels/Add new panel.
tier4_state_rviz_plugin/RTCManagerPanel and press OK.
In this package, we present signal processing related methods for the Autoware applications. The following functionalities are available in the current version.
low-pass filter currently supports only the 1-D low pass filtering.
"},{"location":"common/signal_processing/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/signal_processing/documentation/ButterworthFilter/","title":"ButterworthFilter","text":""},{"location":"common/signal_processing/documentation/ButterworthFilter/#butterworth-low-pass-filter-design-tool-class","title":"Butterworth Low-pass Filter Design Tool Class","text":"This Butterworth low-pass filter design tool can be used to design a Butterworth filter in continuous and discrete-time from the given specifications of the filter performance. The Butterworth filter is a class implementation. A default constructor creates the object without any argument.
The filter can be prepared in three ways. If the filter specifications are known, such as the pass-band and stop-band frequencies (Wp and Ws) together with the pass-band and stop-band ripple magnitudes (Ap and As), one can call the filter's Buttord method with these arguments to obtain the recommended filter order (N) and cut-off frequency (Wc_rad_sec [rad/s]).
Figure 1. Butterworth Low-pass filter specification from [1].
An example call is demonstrated below;
ButterworthFilter bf;\n\ndouble Wp = 2.0; // pass-band frequency [rad/sec]\ndouble Ws = 3.0; // stop-band frequency [rad/sec]\ndouble Ap = 6.0; // pass-band ripple mag or loss [dB]\ndouble As = 20.0; // stop-band ripple attenuation [dB]\n\n// Computing filter coefficients from the specs\nbf.Buttord(Wp, Ws, Ap, As);\n\n// Get the computed order and cut-off frequency\nsOrderCutOff NWc = bf.getOrderCutOff();\n\ncout << \" The computed order is ;\" << NWc.N << endl;\ncout << \" The computed cut-off frequency is ;\" << NWc.Wc_rad_sec << endl;\n
The filter order and cut-off frequency can be obtained in a struct using the bf.getOrderCutOff() method. These specs can be printed on the screen by calling the PrintFilterSpecs() method. If the user would like to define the order and cut-off frequency manually, the setter methods for these variables can be called to set the filter order (N) and the desired cut-off frequency (Wc_rad_sec [rad/sec]) for the filter.
"},{"location":"common/signal_processing/documentation/ButterworthFilter/#obtaining-filter-transfer-functions","title":"Obtaining Filter Transfer Functions","text":"The discrete transfer function of the filter requires the roots and gain of the continuous-time transfer function. Therefore, it is a must to call the first computeContinuousTimeTF() to create the continuous-time transfer function of the filter using;
bf.computeContinuousTimeTF();\n
The computed continuous-time transfer function roots can be printed on the screen using the methods;
bf.PrintFilter_ContinuousTimeRoots();\nbf.PrintContinuousTimeTF();\n
The resulting screen output for a 5th order filter is demonstrated below.
Roots of Continuous Time Filter Transfer Function Denominator are :\n-0.585518 + j 1.80204\n-1.53291 + j 1.11372\n-1.89478 + j 2.32043e-16\n-1.53291 + j -1.11372\n-0.585518 + j -1.80204\n\n\nThe Continuous-Time Transfer Function of the Filter is ;\n\n 24.422\n-------------------------------------------------------------------------------\n1.000 *s[5] + 6.132 *s[4] + 18.798 *s[3] + 35.619 *s[2] + 41.711 *s[1] + 24.422\n
"},{"location":"common/signal_processing/documentation/ButterworthFilter/#discrete-time-transfer-function-difference-equations","title":"Discrete Time Transfer Function (Difference Equations)","text":"The digital filter equivalent of the continuous-time definitions is produced by using the bi-linear transformation. When creating the discrete-time function of the ButterworthFilter object, its Numerator (Bn) and Denominator (An ) coefficients are stored in a vector of filter order size N.
The discrete transfer function method is called using ;
bf.computeDiscreteTimeTF();\nbf.PrintDiscreteTimeTF();\n
The results are printed on the screen like; The Discrete-Time Transfer Function of the Filter is ;
0.191 *z[-5] + 0.956 *z[-4] + 1.913 *z[-3] + 1.913 *z[-2] + 0.956 *z[-1] + 0.191\n--------------------------------------------------------------------------------\n1.000 *z[-5] + 1.885 *z[-4] + 1.888 *z[-3] + 1.014 *z[-2] + 0.298 *z[-1] + 0.037\n
and the associated difference coefficients An and Bn can be obtained within a struct:
sDifferenceAnBn AnBn = bf.getAnBn();\n
The difference coefficients appear in the filtering equation in the form:
An * Y_filtered = Bn * Y_unfiltered\n
To filter a signal given in vector form, the difference equation above is applied sample by sample; a minimal sketch is given below.
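A hedged sketch of that filtering step using the An and Bn coefficients (a hypothetical helper for illustration, not the library's API):
// Direct-form IIR filtering implementing An * Y_filtered = Bn * Y_unfiltered\nstd::vector<double> filterSignal(const std::vector<double> & u, const std::vector<double> & Bn, const std::vector<double> & An)\n{\n  std::vector<double> y(u.size(), 0.0);\n  for (size_t n = 0; n < u.size(); ++n) {\n    double acc = 0.0;\n    for (size_t k = 0; k < Bn.size(); ++k) {\n      if (n >= k) { acc += Bn[k] * u[n - k]; }  // feed-forward terms\n    }\n    for (size_t k = 1; k < An.size(); ++k) {\n      if (n >= k) { acc -= An[k] * y[n - k]; }  // feedback terms\n    }\n    y[n] = acc / An[0];\n  }\n  return y;\n}\n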
"},{"location":"common/signal_processing/documentation/ButterworthFilter/#calling-filter-by-a-specified-cut-off-and-sampling-frequencies-in-hz","title":"Calling Filter by a specified cut-off and sampling frequencies [in Hz]","text":"The Butterworth filter can also be created by defining the desired order (N), a cut-off frequency (fc in [Hz]), and a sampling frequency (fs in [Hz]). In this method, the cut-off frequency is pre-warped with respect to the sampling frequency [1, 2] to match the continuous and digital filter frequencies.
The filter is prepared by the following calling options;
// 3rd METHOD defining a sampling frequency together with the cut-off fc, fs\n bf.setOrder(2);\n bf.setCutOffFrequency(10, 100);\n
At this step, we define a boolean variable that determines whether to use the pre-warping option.
// Compute Continuous Time TF\nbool use_sampling_frequency = true;\nbf.computeContinuousTimeTF(use_sampling_frequency);\nbf.PrintFilter_ContinuousTimeRoots();\nbf.PrintContinuousTimeTF();\n\n// Compute Discrete Time TF\nbf.computeDiscreteTimeTF(use_sampling_frequency);\nbf.PrintDiscreteTimeTF();\n
References:
Manolakis, Dimitris G., and Vinay K. Ingle. Applied digital signal processing: theory and practice. Cambridge University Press, 2011.
https://en.wikibooks.org/wiki/Digital_Signal_Processing/Bilinear_Transform
This package contains a library of common functions related to TensorRT. This package may include functions for handling the TensorRT engine and the calibration algorithm used for quantization.
"},{"location":"common/tier4_adapi_rviz_plugin/","title":"tier4_adapi_rviz_plugin","text":""},{"location":"common/tier4_adapi_rviz_plugin/#tier4_adapi_rviz_plugin","title":"tier4_adapi_rviz_plugin","text":""},{"location":"common/tier4_adapi_rviz_plugin/#routepanel","title":"RoutePanel","text":"To use the panel, set the topic name from 2D Goal Pose Tool to /rviz/routing/pose
. By default, when a tool publishes a pose, the panel immediately sets a route with that pose as the goal. The allow_goal_modification option can be enabled or disabled with the check box.
Push the mode button in the waypoint section to enter waypoint mode. In this mode, poses are added to the waypoints. Press the apply button to set the route using the saved waypoints (the last one is the goal). Reset the saved waypoints with the reset button.
"},{"location":"common/tier4_api_utils/","title":"tier4_api_utils","text":""},{"location":"common/tier4_api_utils/#tier4_api_utils","title":"tier4_api_utils","text":"This is an old implementation of a class that logs when calling a service. Please use component_interface_utils instead.
"},{"location":"common/tier4_automatic_goal_rviz_plugin/","title":"tier4_automatic_goal_rviz_plugin","text":""},{"location":"common/tier4_automatic_goal_rviz_plugin/#tier4_automatic_goal_rviz_plugin","title":"tier4_automatic_goal_rviz_plugin","text":""},{"location":"common/tier4_automatic_goal_rviz_plugin/#purpose","title":"Purpose","text":"Defining a GoalsList
by adding goals using RvizTool
(Pose on the map).
Automatic execution of the created GoalsList
from the selected goal - it can be stopped and restarted.
Looping the current GoalsList
.
Saving achieved goals to a file.
Planning the route to one (single) selected goal and starting that route - it can be stopped and restarted.
Removing any goal from the list or clearing the current route.
Saving the current GoalsList
to a file and loading the list from the file.
The application enables/disables access to options depending on the current state.
The saved GoalsList
can be executed without using a plugin - using a node automatic_goal_sender
.
/api/operation_mode/state
autoware_adapi_v1_msgs::msg::OperationModeState
The topic represents the state of operation mode /api/routing/state
autoware_adapi_v1_msgs::msg::RouteState
The topic represents the state of route /rviz2/automatic_goal/goal
geometry_msgs::msg::PoseStamped
The topic for adding goals to GoalsList"},{"location":"common/tier4_automatic_goal_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/operation_mode/change_to_autonomous
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to change operation mode to autonomous /api/operation_mode/change_to_stop
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to change operation mode to stop /api/routing/set_route_points
autoware_adapi_v1_msgs::srv::SetRoutePoints
The service to set route /api/routing/clear_route
autoware_adapi_v1_msgs::srv::ClearRoute
The service to clear route state /rviz2/automatic_goal/markers
visualization_msgs::msg::MarkerArray
The topic to visualize goals as rviz markers"},{"location":"common/tier4_automatic_goal_rviz_plugin/#howtouse","title":"HowToUse","text":"Start rviz and select panels/Add new panel.
Select tier4_automatic_goal_rviz_plugin/AutowareAutomaticGoalPanel
and press OK.
Select Add a new tool.
Select tier4_automatic_goal_rviz_plugin/AutowareAutomaticGoalTool
and press OK.
Add goals visualization as markers to Displays
.
Append goals to the GoalsList
to be achieved using 2D Append Goal
- in such a way that routes can be planned.
Start sequential planning and goal achievement by clicking Send goals automatically
You can save GoalsList
by clicking Save to file
.
After saving, you can run the GoalsList
without using a plugin also:
ros2 launch tier4_automatic_goal_rviz_plugin automatic_goal_sender.launch.xml goals_list_file_path:=\"/tmp/goals_list.yaml\" goals_achieved_dir_path:=\"/tmp/\"
goals_list_file_path
- is the path to the saved GoalsList
file to be loaded. goals_achieved_dir_path
- is the path to the directory where the file goals_achieved.log
will be created and the achieved goals will be written to it. If the application (Engagement) goes into ERROR
mode (usually returns to EDITING
later), it means that one of the services returned a calling error (code!=0
). In this situation, check the terminal output for more information.
This package contains many common functions used by other packages, so please refer to them as needed.
"},{"location":"common/tier4_autoware_utils/#for-developers","title":"For developers","text":"tier4_autoware_utils.hpp
header file was removed because source files that directly or indirectly included it took a long time to preprocess.
Add the tier4_camera_view_rviz_plugin/ThirdPersonViewTool
tool to RViz. Push the button, and the camera will focus on the vehicle and set the target frame to base_link
. The shortcut key is 'o'.
Add the tier4_camera_view_rviz_plugin/BirdEyeViewTool
tool to RViz. Push the button, and the camera will switch to the BEV view; the target frame is consistent with the latest frame. The shortcut key is 'r'.
This package mimics external control for simulation.
"},{"location":"common/tier4_control_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_control_rviz_plugin/#input","title":"Input","text":"Name Type Description/control/current_gate_mode
tier4_control_msgs::msg::GateMode
Current GATE mode /vehicle/status/velocity_status
autoware_auto_vehicle_msgs::msg::VelocityReport
Current velocity status /api/autoware/get/engage
tier4_external_api_msgs::srv::Engage
Getting Engage /vehicle/status/gear_status
autoware_auto_vehicle_msgs::msg::GearReport
The state of GEAR"},{"location":"common/tier4_control_rviz_plugin/#output","title":"Output","text":"Name Type Description /control/gate_mode_cmd
tier4_control_msgs::msg::GateMode
GATE mode /external/selected/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
AckermannControlCommand /external/selected/gear_cmd
autoware_auto_vehicle_msgs::msg::GearCommand
GEAR"},{"location":"common/tier4_control_rviz_plugin/#usage","title":"Usage","text":"Start rviz and select Panels.
Select tier4_control_rviz_plugin/ManualController and press OK.
Enter velocity in \"Set Cruise Velocity\" and Press the button to confirm. You can notice that GEAR shows D (DRIVE).
Press \"Enable Manual Control\" and you can notice that \"GATE\" and \"Engage\" turn \"Ready\" and the vehicle starts!
This plugin displays the ROS Time and Wall Time in rviz.
"},{"location":"common/tier4_datetime_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/tier4_datetime_rviz_plugin/#usage","title":"Usage","text":"This package is including jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are BSD license.
"},{"location":"common/tier4_debug_rviz_plugin/#plugins","title":"Plugins","text":""},{"location":"common/tier4_debug_rviz_plugin/#float32multiarraystampedpiechart","title":"Float32MultiArrayStampedPieChart","text":"Pie chart from tier4_debug_msgs::msg::Float32MultiArrayStamped
.
This package provides useful features for debugging Autoware.
"},{"location":"common/tier4_debug_tools/#usage","title":"Usage","text":""},{"location":"common/tier4_debug_tools/#tf2pose","title":"tf2pose","text":"This tool converts any tf
to pose
topic. With this tool, for example, you can plot x
values of tf
in rqt_multiplot
.
ros2 run tier4_debug_tools tf2pose {tf_from} {tf_to} {hz}\n
Example:
$ ros2 run tier4_debug_tools tf2pose base_link ndt_base_link 100\n\n$ ros2 topic echo /tf2pose/pose -n1\nheader:\n seq: 13\nstamp:\n secs: 1605168366\nnsecs: 549174070\nframe_id: \"base_link\"\npose:\n position:\n x: 0.0387684271191\n y: -0.00320360406477\n z: 0.000276674520819\n orientation:\n x: 0.000335221893885\n y: 0.000122020672186\n z: -0.00539673212896\n w: 0.999985368502\n---\n
"},{"location":"common/tier4_debug_tools/#pose2tf","title":"pose2tf","text":"This tool converts any pose
topic to tf
.
ros2 run tier4_debug_tools pose2tf {pose_topic_name} {tf_name}\n
Example:
$ ros2 run tier4_debug_tools pose2tf /localization/pose_estimator/pose ndt_pose\n\n$ ros2 run tf tf_echo ndt_pose ndt_base_link 100\nAt time 1605168365.449\n- Translation: [0.000, 0.000, 0.000]\n- Rotation: in Quaternion [0.000, 0.000, 0.000, 1.000]\nin RPY (radian) [0.000, -0.000, 0.000]\nin RPY (degree) [0.000, -0.000, 0.000]\n
"},{"location":"common/tier4_debug_tools/#stop_reason2pose","title":"stop_reason2pose","text":"This tool extracts pose
from stop_reasons
. Topics without numbers such as /stop_reason2pose/pose/detection_area
are the nearest stop_reasons, and topics with numbers are individual stop_reasons that are roughly matched with previous ones.
ros2 run tier4_debug_tools stop_reason2pose {stop_reason_topic_name}\n
Example:
$ ros2 run tier4_debug_tools stop_reason2pose /planning/scenario_planning/status/stop_reasons\n\n$ ros2 topic list | ag stop_reason2pose\n/stop_reason2pose/pose/detection_area\n/stop_reason2pose/pose/detection_area_1\n/stop_reason2pose/pose/obstacle_stop\n/stop_reason2pose/pose/obstacle_stop_1\n\n$ ros2 topic echo /stop_reason2pose/pose/detection_area -n1\nheader:\n seq: 1\nstamp:\n secs: 1605168355\nnsecs: 821713\nframe_id: \"map\"\npose:\n position:\n x: 60608.8433457\n y: 43886.2410876\n z: 44.9078212441\n orientation:\n x: 0.0\n y: 0.0\n z: -0.190261378408\n w: 0.981733470901\n---\n
"},{"location":"common/tier4_debug_tools/#stop_reason2tf","title":"stop_reason2tf","text":"This is an all-in-one script that uses tf2pose
, pose2tf
, and stop_reason2pose
. With this tool, you can view the relative position from base_link to the nearest stop_reason.
ros2 run tier4_debug_tools stop_reason2tf {stop_reason_name}\n
Example:
$ ros2 run tier4_debug_tools stop_reason2tf obstacle_stop\nAt time 1605168359.501\n- Translation: [0.291, -0.095, 0.266]\n- Rotation: in Quaternion [0.007, 0.011, -0.005, 1.000]\nin RPY (radian) [0.014, 0.023, -0.010]\nin RPY (degree) [0.825, 1.305, -0.573]\n
"},{"location":"common/tier4_debug_tools/#lateral_error_publisher","title":"lateral_error_publisher","text":"This node calculate the control error and localization error in the trajectory normal direction as shown in the figure below.
Set the reference trajectory, vehicle pose and ground truth pose in the launch file.
ros2 launch tier4_debug_tools lateral_error_publisher.launch.xml\n
"},{"location":"common/tier4_localization_rviz_plugin/","title":"tier4_localization_rviz_plugin","text":""},{"location":"common/tier4_localization_rviz_plugin/#tier4_localization_rviz_plugin","title":"tier4_localization_rviz_plugin","text":""},{"location":"common/tier4_localization_rviz_plugin/#purpose","title":"Purpose","text":"This plugin can display the history of the localization obtained by ekf_localizer or ndt_scan_matching.
"},{"location":"common/tier4_localization_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_localization_rviz_plugin/#input","title":"Input","text":"Name Type Descriptioninput/pose
geometry_msgs::msg::PoseStamped
In input/pose, put the result of localization calculated by ekf_localizer or ndt_scan_matching"},{"location":"common/tier4_localization_rviz_plugin/#parameters","title":"Parameters","text":""},{"location":"common/tier4_localization_rviz_plugin/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description property_buffer_size_
int 100 Buffer size of topic property_line_view_
bool true Use Line property or not property_line_width_
float 0.1 Width of Line property [m] property_line_alpha_
float 1.0 Alpha of Line property property_line_color_
QColor Qt::white Color of Line property"},{"location":"common/tier4_localization_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/tier4_localization_rviz_plugin/#usage","title":"Usage","text":"This package provides an rviz_plugin that can easily change the logger level of each node
This plugin dispatches services to the \"logger name\" associated with \"nodes\" specified in YAML, adjusting the logger level.
As of November 2023, in ROS 2 Humble, users are required to initiate a service server in the node to use this feature. (This might be integrated into ROS standards in the future.) For easy service server generation, you can use the LoggerLevelConfigure utility.
"},{"location":"common/tier4_perception_rviz_plugin/","title":"tier4_perception_rviz_plugin","text":""},{"location":"common/tier4_perception_rviz_plugin/#tier4_perception_rviz_plugin","title":"tier4_perception_rviz_plugin","text":""},{"location":"common/tier4_perception_rviz_plugin/#purpose","title":"Purpose","text":"This plugin is used to generate dummy pedestrians, cars, and obstacles in planning simulator.
"},{"location":"common/tier4_perception_rviz_plugin/#overview","title":"Overview","text":"The CarInitialPoseTool sends a topic for generating a dummy car. The PedestrianInitialPoseTool sends a topic for generating a dummy pedestrian. The UnknownInitialPoseTool sends a topic for generating a dummy obstacle. The DeleteAllObjectsTool deletes the dummy cars, pedestrians, and obstacles displayed by the above three tools.
"},{"location":"common/tier4_perception_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_perception_rviz_plugin/#output","title":"Output","text":"Name Type Description/simulation/dummy_perception_publisher/object_info
dummy_perception_publisher::msg::Object
The topic on which to publish dummy object info"},{"location":"common/tier4_perception_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"common/tier4_perception_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"common/tier4_perception_rviz_plugin/#carpose","title":"CarPose","text":"Name Type Default Value Description topic_property_
string /simulation/dummy_perception_publisher/object_info
The topic on which to publish dummy object info std_dev_x_
float 0.03 X standard deviation for initial pose [m] std_dev_y_
float 0.03 Y standard deviation for initial pose [m] std_dev_z_
float 0.03 Z standard deviation for initial pose [m] std_dev_theta_
float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] length_
float 4.0 Length of the dummy car [m] width_
float 1.8 Width of the dummy car [m] height_
float 2.0 Height of the dummy car [m] position_z_
float 0.0 Z position for initial pose [m] velocity_
float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#buspose","title":"BusPose","text":"Name Type Default Value Description topic_property_
string /simulation/dummy_perception_publisher/object_info
The topic on which to publish dummy object info std_dev_x_
float 0.03 X standard deviation for initial pose [m] std_dev_y_
float 0.03 Y standard deviation for initial pose [m] std_dev_z_
float 0.03 Z standard deviation for initial pose [m] std_dev_theta_
float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] length_
float 10.5 Length of the dummy bus [m] width_
float 2.5 Width of the dummy bus [m] height_
float 3.5 Height of the dummy bus [m] position_z_
float 0.0 Z position for initial pose [m] velocity_
float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#pedestrianpose","title":"PedestrianPose","text":"Name Type Default Value Description topic_property_
string /simulation/dummy_perception_publisher/object_info
The topic on which to publish dummy object info std_dev_x_
float 0.03 X standard deviation for initial pose [m] std_dev_y_
float 0.03 Y standard deviation for initial pose [m] std_dev_z_
float 0.03 Z standard deviation for initial pose [m] std_dev_theta_
float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] position_z_
float 0.0 Z position for initial pose [m] velocity_
float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#unknownpose","title":"UnknownPose","text":"Name Type Default Value Description topic_property_
string /simulation/dummy_perception_publisher/object_info
The topic on which to publish dummy object info std_dev_x_
float 0.03 X standard deviation for initial pose [m] std_dev_y_
float 0.03 Y standard deviation for initial pose [m] std_dev_z_
float 0.03 Z standard deviation for initial pose [m] std_dev_theta_
float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] position_z_
float 0.0 Z position for initial pose [m] velocity_
float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#deleteallobjects","title":"DeleteAllObjects","text":"Name Type Default Value Description topic_property_
string /simulation/dummy_perception_publisher/object_info
The topic on which to publish dummy object info"},{"location":"common/tier4_perception_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Using a planning simulator
"},{"location":"common/tier4_perception_rviz_plugin/#usage","title":"Usage","text":"You can interactively manipulate the object.
This package is including jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are BSD license.
"},{"location":"common/tier4_planning_rviz_plugin/#purpose","title":"Purpose","text":"This plugin displays the path, trajectory, and maximum speed.
"},{"location":"common/tier4_planning_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_planning_rviz_plugin/#input","title":"Input","text":"Name Type Description/input/path
autoware_auto_planning_msgs::msg::Path
The topic on which to subscribe path /input/trajectory
autoware_auto_planning_msgs::msg::Trajectory
The topic on which to subscribe trajectory /planning/scenario_planning/current_max_velocity
tier4_planning_msgs/msg/VelocityLimit
The topic on which to publish max velocity"},{"location":"common/tier4_planning_rviz_plugin/#output","title":"Output","text":"Name Type Description /planning/mission_planning/checkpoint
geometry_msgs/msg/PoseStamped
The topic on which to publish checkpoint"},{"location":"common/tier4_planning_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"common/tier4_planning_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"common/tier4_planning_rviz_plugin/#missioncheckpoint","title":"MissionCheckpoint","text":"Name Type Default Value Description pose_topic_property_
string mission_checkpoint
The topic on which to publish checkpoint std_dev_x_
float 0.5 X standard deviation for checkpoint pose [m] std_dev_y_
float 0.5 Y standard deviation for checkpoint pose [m] std_dev_theta_
float M_PI / 12.0 Theta standard deviation for checkpoint pose [rad] position_z_
float 0.0 Z position for checkpoint pose [m]"},{"location":"common/tier4_planning_rviz_plugin/#path","title":"Path","text":"Name Type Default Value Description property_path_view_
bool true Use Path property or not property_path_width_view_
bool false Use Constant Width or not property_path_width_
float 2.0 Width of Path property [m] property_path_alpha_
float 1.0 Alpha of Path property property_path_color_view_
bool false Use Constant Color or not property_path_color_
QColor Qt::black Color of Path property property_velocity_view_
bool true Use Velocity property or not property_velocity_alpha_
float 1.0 Alpha of Velocity property property_velocity_scale_
float 0.3 Scale of Velocity property property_velocity_color_view_
bool false Use Constant Color or not property_velocity_color_
QColor Qt::black Color of Velocity property property_vel_max_
float 3.0 Max velocity [m/s]"},{"location":"common/tier4_planning_rviz_plugin/#drivablearea","title":"DrivableArea","text":"Name Type Default Value Description color_scheme_property_
int 0 Color scheme of DrivableArea property alpha_property_
float 0.2 Alpha of DrivableArea property draw_under_property_
bool false Draw as background or not"},{"location":"common/tier4_planning_rviz_plugin/#pathfootprint","title":"PathFootprint","text":"Name Type Default Value Description property_path_footprint_view_
bool true Use Path Footprint property or not property_path_footprint_alpha_
float 1.0 Alpha of Path Footprint property property_path_footprint_color_
QColor Qt::black Color of Path Footprint property property_vehicle_length_
float 4.77 Vehicle length [m] property_vehicle_width_
float 1.83 Vehicle width [m] property_rear_overhang_
float 1.03 Rear overhang [m]"},{"location":"common/tier4_planning_rviz_plugin/#trajectory","title":"Trajectory","text":"Name Type Default Value Description property_path_view_
bool true Use Path property or not property_path_width_
float 2.0 Width of Path property [m] property_path_alpha_
float 1.0 Alpha of Path property property_path_color_view_
bool false Use Constant Color or not property_path_color_
QColor Qt::black Color of Path property property_velocity_view_
bool true Use Velocity property or not property_velocity_alpha_
float 1.0 Alpha of Velocity property property_velocity_scale_
float 0.3 Scale of Velocity property property_velocity_color_view_
bool false Use Constant Color or not property_velocity_color_
QColor Qt::black Color of Velocity property property_velocity_text_view_
bool false View text Velocity property_velocity_text_scale_
float 0.3 Scale of Velocity property property_vel_max_
float 3.0 Max velocity [m/s]"},{"location":"common/tier4_planning_rviz_plugin/#trajectoryfootprint","title":"TrajectoryFootprint","text":"Name Type Default Value Description property_trajectory_footprint_view_
bool true Use Trajectory Footprint property or not property_trajectory_footprint_alpha_
float 1.0 Alpha of Trajectory Footprint property property_trajectory_footprint_color_
QColor QColor(230, 230, 50) Color of Trajectory Footprint property property_vehicle_length_
float 4.77 Vehicle length [m] property_vehicle_width_
float 1.83 Vehicle width [m] property_rear_overhang_
float 1.03 Rear overhang [m] property_trajectory_point_view_
bool false Use Trajectory Point property or not property_trajectory_point_alpha_
float 1.0 Alpha of Trajectory Point property property_trajectory_point_color_
QColor QColor(0, 60, 255) Color of Trajectory Point property property_trajectory_point_radius_
float 0.1 Radius of Trajectory Point property"},{"location":"common/tier4_planning_rviz_plugin/#maxvelocity","title":"MaxVelocity","text":"Name Type Default Value Description property_topic_name_
string /planning/scenario_planning/current_max_velocity
The topic on which to subscribe max velocity property_text_color_
QColor QColor(255, 255, 255) Text color property_left_
int 128 Left of the plotter window [px] property_top_
int 128 Top of the plotter window [px] property_length_
int 96 Length of the plotter window [px] property_value_scale_
float 1.0 / 4.0 Value scale"},{"location":"common/tier4_planning_rviz_plugin/#usage","title":"Usage","text":"This plugin captures the screen of rviz.
"},{"location":"common/tier4_screen_capture_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"This is only for debug or analyze. The capture screen
button is still beta version which can slow frame rate. set lower frame rate according to PC spec.
This plugin allows publishing and controlling the simulated ROS time.
"},{"location":"common/tier4_simulated_clock_rviz_plugin/#output","title":"Output","text":"Name Type Description/clock
rosgraph_msgs::msg::Clock
the current simulated time"},{"location":"common/tier4_simulated_clock_rviz_plugin/#howtouse","title":"HowToUse","text":"Use the added panel to control how the simulated clock is published.
This plugin displays the current status of autoware. This plugin also can engage from the panel.
"},{"location":"common/tier4_state_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_state_rviz_plugin/#input","title":"Input","text":"Name Type Description/api/operation_mode/state
autoware_adapi_v1_msgs::msg::OperationModeState
The topic represents the state of operation mode /api/routing/state
autoware_adapi_v1_msgs::msg::RouteState
The topic represents the state of route /api/localization/initialization_state
autoware_adapi_v1_msgs::msg::LocalizationInitializationState
The topic represents the state of localization initialization /api/motion/state
autoware_adapi_v1_msgs::msg::MotionState
The topic represents the state of motion /api/autoware/get/emergency
tier4_external_api_msgs::msg::Emergency
The topic represents the state of external emergency /vehicle/status/gear_status
autoware_auto_vehicle_msgs::msg::GearReport
The topic represents the state of gear"},{"location":"common/tier4_state_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/operation_mode/change_to_autonomous
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to change operation mode to autonomous /api/operation_mode/change_to_stop
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to change operation mode to stop /api/operation_mode/change_to_local
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to change operation mode to local /api/operation_mode/change_to_remote
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to change operation mode to remote /api/operation_mode/enable_autoware_control
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to enable vehicle control by Autoware /api/operation_mode/disable_autoware_control
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to disable vehicle control by Autoware /api/routing/clear_route
autoware_adapi_v1_msgs::srv::ClearRoute
The service to clear route state /api/motion/accept_start
autoware_adapi_v1_msgs::srv::AcceptStart
The service to accept the vehicle to start /api/autoware/set/emergency
tier4_external_api_msgs::srv::SetEmergency
The service to set external emergency /planning/scenario_planning/max_velocity_default
tier4_planning_msgs::msg::VelocityLimit
The topic to set maximum speed of the vehicle"},{"location":"common/tier4_state_rviz_plugin/#howtouse","title":"HowToUse","text":"Start rviz and select panels/Add new panel.
Select tier4_state_rviz_plugin/AutowareStatePanel and press OK.
If the auto button is activated, can engage by clicking it.
This plugin display the Hazard information from Autoware; and output notices when emergencies are from initial localization and route setting.
"},{"location":"common/tier4_system_rviz_plugin/#input","title":"Input","text":"Name Type Description/system/emergency/hazard_status
autoware_auto_system_msgs::msg::HazardStatusStamped
The topic represents the emergency information from Autoware"},{"location":"common/tier4_target_object_type_rviz_plugin/","title":"tier4_target_object_type_rviz_plugin","text":""},{"location":"common/tier4_target_object_type_rviz_plugin/#tier4_target_object_type_rviz_plugin","title":"tier4_target_object_type_rviz_plugin","text":"This plugin allows you to check which types of the dynamic object is being used by each planner.
"},{"location":"common/tier4_target_object_type_rviz_plugin/#limitations","title":"Limitations","text":"Currently, which parameters of which module to check are hardcoded. In the future, this will be parameterized using YAML.
"},{"location":"common/tier4_traffic_light_rviz_plugin/","title":"tier4_traffic_light_rviz_plugin","text":""},{"location":"common/tier4_traffic_light_rviz_plugin/#tier4_traffic_light_rviz_plugin","title":"tier4_traffic_light_rviz_plugin","text":""},{"location":"common/tier4_traffic_light_rviz_plugin/#purpose","title":"Purpose","text":"This plugin panel publishes dummy traffic light signals.
"},{"location":"common/tier4_traffic_light_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_traffic_light_rviz_plugin/#output","title":"Output","text":"Name Type Description/perception/traffic_light_recognition/traffic_signals
autoware_perception_msgs::msg::TrafficSignalArray
Publish traffic light signals"},{"location":"common/tier4_traffic_light_rviz_plugin/#howtouse","title":"HowToUse","text":"Traffic Light ID
& Traffic Light Status
and press SET
button.PUBLISH
button is pushed.This package is including jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are BSD license.
"},{"location":"common/tier4_vehicle_rviz_plugin/#purpose","title":"Purpose","text":"This plugin provides a visual and easy-to-understand display of vehicle speed, turn signal, steering status and acceleration.
"},{"location":"common/tier4_vehicle_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_vehicle_rviz_plugin/#input","title":"Input","text":"Name Type Description/vehicle/status/velocity_status
autoware_auto_vehicle_msgs::msg::VelocityReport
The topic is vehicle twist /control/turn_signal_cmd
autoware_auto_vehicle_msgs::msg::TurnIndicatorsReport
The topic is status of turn signal /vehicle/status/steering_status
autoware_auto_vehicle_msgs::msg::SteeringReport
The topic is status of steering /localization/acceleration
geometry_msgs::msg::AccelWithCovarianceStamped
The topic is the acceleration"},{"location":"common/tier4_vehicle_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"common/tier4_vehicle_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"common/tier4_vehicle_rviz_plugin/#consolemeter","title":"ConsoleMeter","text":"Name Type Default Value Description property_text_color_
QColor QColor(25, 255, 240) Text color property_left_
int 128 Left of the plotter window [px] property_top_
int 128 Top of the plotter window [px] property_length_
int 256 Height of the plotter window [px] property_value_height_offset_
int 0 Height offset of the plotter window [px] property_value_scale_
float 1.0 / 6.667 Value scale"},{"location":"common/tier4_vehicle_rviz_plugin/#steeringangle","title":"SteeringAngle","text":"Name Type Default Value Description property_text_color_
QColor QColor(25, 255, 240) Text color property_left_
int 128 Left of the plotter window [px] property_top_
int 128 Top of the plotter window [px] property_length_
int 256 Height of the plotter window [px] property_value_height_offset_
int 0 Height offset of the plotter window [px] property_value_scale_
float 1.0 / 6.667 Value scale property_handle_angle_scale_
float 3.0 Scale is steering angle to handle angle"},{"location":"common/tier4_vehicle_rviz_plugin/#turnsignal","title":"TurnSignal","text":"Name Type Default Value Description property_left_
int 128 Left of the plotter window [px] property_top_
int 128 Top of the plotter window [px] property_width_
int 256 Left of the plotter window [px] property_height_
int 256 Width of the plotter window [px]"},{"location":"common/tier4_vehicle_rviz_plugin/#velocityhistory","title":"VelocityHistory","text":"Name Type Default Value Description property_velocity_timeout_
float 10.0 Timeout of velocity [s] property_velocity_alpha_
float 1.0 Alpha of velocity property_velocity_scale_
float 0.3 Scale of velocity property_velocity_color_view_
bool false Use Constant Color or not property_velocity_color_
QColor Qt::black Color of velocity history property_vel_max_
float 3.0 Color Border Vel Max [m/s]"},{"location":"common/tier4_vehicle_rviz_plugin/#accelerationmeter","title":"AccelerationMeter","text":"Name Type Default Value Description property_normal_text_color_
QColor QColor(25, 255, 240) Normal text color property_emergency_text_color_
QColor QColor(255, 80, 80) Emergency acceleration color property_left_
int 896 Left of the plotter window [px] property_top_
int 128 Top of the plotter window [px] property_length_
int 256 Height of the plotter window [px] property_value_height_offset_
int 0 Height offset of the plotter window [px] property_value_scale_
float 1 / 6.667 Value text scale property_emergency_threshold_max_
float 1.0 Max acceleration threshold for emergency [m/s^2] property_emergency_threshold_min_
float -2.5 Min acceleration threshold for emergency [m/s^2]"},{"location":"common/tier4_vehicle_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/tier4_vehicle_rviz_plugin/#usage","title":"Usage","text":"This node publishes a marker array for visualizing traffic signal recognition results on Rviz.
"},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#input","title":"Input","text":"Name Type Description/map/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
Vector map for getting traffic signal information /perception/traffic_light_recognition/traffic_signals
autoware_auto_perception_msgs::msg::TrafficSignalArray
The result of traffic signal recognition"},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#output","title":"Output","text":"Name Type Description /perception/traffic_light_recognition/traffic_signals_marker
visualization_msgs::msg::MarkerArray
Publish a marker array for visualization of traffic signal recognition results"},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#parameters","title":"Parameters","text":"None.
"},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#node-parameters","title":"Node Parameters","text":"None.
"},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#core-parameters","title":"Core Parameters","text":"None.
"},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/traffic_light_utils/","title":"traffic_light_utils","text":""},{"location":"common/traffic_light_utils/#traffic_light_utils","title":"traffic_light_utils","text":""},{"location":"common/traffic_light_utils/#purpose","title":"Purpose","text":"This package contains a library of common functions that are useful across the traffic light recognition module. This package may include functions for handling ROI types, converting between different data types and message types, as well as common functions related to them.
"},{"location":"common/tvm_utility/","title":"TVM Utility","text":""},{"location":"common/tvm_utility/#tvm-utility","title":"TVM Utility","text":"This is the design document for the tvm_utility
package. For instructions on how to build the tests for YOLOv2 Tiny, see the YOLOv2 Tiny Example Pipeline. For information about where to store test artifacts see the TVM Utility Artifacts.
A set of c++ utilities to help build a TVM based machine learning inference pipeline. The library contains a pipeline class which helps building the pipeline and a number of utility functions that are common in machine learning.
"},{"location":"common/tvm_utility/#design","title":"Design","text":"The Pipeline Class is a standardized way to write an inference pipeline. The pipeline class contains 3 different stages: the pre-processor, the inference engine and the post-processor. The TVM implementation of an inference engine stage is provided.
"},{"location":"common/tvm_utility/#api","title":"API","text":"The pre-processor and post-processor need to be implemented by the user before instantiating the pipeline. You can see example usage in the example pipeline at test/yolo_v2_tiny
.
Each stage in the pipeline has a schedule
function which takes input data as a parameter and return the output data. Once the pipeline object is created, pipeline.schedule
is called to run the pipeline.
int main() {\n create_subscription<sensor_msgs::msg::PointCloud2>(\"points_raw\",\n rclcpp::QoS{1}, [this](const sensor_msgs::msg::PointCloud2::SharedPtr msg)\n {pipeline.schedule(msg);});\n}\n
"},{"location":"common/tvm_utility/#version-checking","title":"Version checking","text":"The InferenceEngineTVM::version_check
function can be used to check the version of the neural network in use against the range of earliest to latest supported versions.
The InferenceEngineTVM
class holds the latest supported version, which needs to be updated when the targeted version changes; after having tested the effect of the version change on the packages dependent on this one.
The earliest supported version depends on each package making use of the inference, and so should be defined (and maintained) in those packages.
"},{"location":"common/tvm_utility/#models","title":"Models","text":"Dependent packages are expected to use the get_neural_network
cmake function from this package in order to build proper external dependency.
std::runtime_error
should be thrown whenever an error is encountered. It should be populated with an appropriate text error description.
The neural networks are compiled as part of the Model Zoo CI pipeline and saved to an S3 bucket.
The get_neural_network
function creates an abstraction for the artifact management. Users should check if model configuration header file is under \"data/user/${MODEL_NAME}/\". Otherwise, nothing happens and compilation of the package will be skipped.
The structure inside of the source directory of the package making use of the function is as follow:
.\n\u251c\u2500\u2500 data\n\u2502 \u2514\u2500\u2500 models\n\u2502 \u251c\u2500\u2500 ${MODEL 1}\n\u2502 \u2502 \u2514\u2500\u2500 inference_engine_tvm_config.hpp\n\u2502 \u251c\u2500\u2500 ...\n\u2502 \u2514\u2500\u2500 ${MODEL ...}\n\u2502 \u2514\u2500\u2500 ...\n
The inference_engine_tvm_config.hpp
file needed for compilation by dependent packages should be available under \"data/models/${MODEL_NAME}/inference_engine_tvm_config.hpp\". Dependent packages can use the cmake add_dependencies
function with the name provided in the DEPENDENCY
output parameter of get_neural_network
to ensure this file is created before it gets used.
The other deploy_*
files are installed to \"models/${MODEL_NAME}/\" under the share
directory of the package.
The other model files should be stored in autoware_data folder under package folder with the structure:
$HOME/autoware_data\n| \u2514\u2500\u2500${package}\n| \u2514\u2500\u2500models\n| \u251c\u2500\u2500 ${MODEL 1}\n| | \u251c\u2500\u2500 deploy_graph.json\n| | \u251c\u2500\u2500 deploy_lib.so\n| | \u2514\u2500\u2500 deploy_param.params\n| \u251c\u2500\u2500 ...\n| \u2514\u2500\u2500 ${MODEL ...}\n| \u2514\u2500\u2500 ...\n
"},{"location":"common/tvm_utility/#inputs-outputs","title":"Inputs / Outputs","text":"Outputs:
get_neural_network
cmake function; create proper external dependency for a package with use of the model provided by the user.In/Out:
DEPENDENCY
argument of get_neural_network
can be checked for the outcome of the function. It is an empty string when the neural network wasn't provided by the user.Both the input and output are controlled by the same actor, so the following security concerns are out-of-scope:
Leaking data to another actor would require a flaw in TVM or the host operating system that allows arbitrary memory to be read, a significant security flaw in itself. This is also true for an external actor operating the pipeline early: only the object that initiated the pipeline can run the methods to receive its output.
A Denial-of-Service attack could make the target hardware unusable for other pipelines but would require being able to run code on the CPU, which would already allow a more severe Denial-of-Service attack.
No elevation of privilege is required for this package.
"},{"location":"common/tvm_utility/#network-provider","title":"Network provider","text":"The pre-compiled networks are downloaded from an S3 bucket and are under threat of spoofing, tampering and denial of service. Spoofing is mitigated by using an https connection. Mitigations for tampering and denial of service are left to AWS.
The user-provided networks are installed as they are on the host system. The user is in charge of securing the files they provide with regard to information disclosure.
"},{"location":"common/tvm_utility/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"Future packages will use tvm_utility as part of the perception stack to run machine learning workloads.
"},{"location":"common/tvm_utility/#related-issues","title":"Related issues","text":"https://github.com/autowarefoundation/autoware/discussions/2557
"},{"location":"common/tvm_utility/tvm-utility-yolo-v2-tiny-tests/","title":"YOLOv2 Tiny Example Pipeline","text":""},{"location":"common/tvm_utility/tvm-utility-yolo-v2-tiny-tests/#yolov2-tiny-example-pipeline","title":"YOLOv2 Tiny Example Pipeline","text":"This is an example implementation of an inference pipeline using the pipeline framework. This example pipeline executes the YOLO V2 Tiny model and decodes its output.
"},{"location":"common/tvm_utility/tvm-utility-yolo-v2-tiny-tests/#compiling-the-example","title":"Compiling the Example","text":"Check if model was downloaded during the env preparation step by ansible and models files exist in the folder $HOME/autoware_data/tvm_utility/models/yolo_v2_tiny.
If not you can download them manually, see Manual Artifacts Downloading.
Download an example image to be used as test input. This image needs to be saved in the artifacts/yolo_v2_tiny/
folder.
curl https://raw.githubusercontent.com/pjreddie/darknet/master/data/dog.jpg \\\n> artifacts/yolo_v2_tiny/test_image_0.jpg\n
Build.
colcon build --packages-up-to tvm_utility --cmake-args -DBUILD_EXAMPLE=ON\n
Run.
ros2 launch tvm_utility yolo_v2_tiny_example.launch.xml\n
image_filename
string $(find-pkg-share tvm_utility)/artifacts/yolo_v2_tiny/test_image_0.jpg
Filename of the image on which to run the inference. label_filename
string $(find-pkg-share tvm_utility)/artifacts/yolo_v2_tiny/labels.txt
Name of file containing the human readable names of the classes. One class on each line. anchor_filename
string $(find-pkg-share tvm_utility)/artifacts/yolo_v2_tiny/anchors.csv
Name of file containing the anchor values for the network. Each line is one anchor; each anchor has two comma-separated floating point values. data_path
string $(env HOME)/autoware_data
Packages data and artifacts directory path."},{"location":"common/tvm_utility/tvm-utility-yolo-v2-tiny-tests/#gpu-backend","title":"GPU backend","text":"Vulkan is supported by default by the tvm_vendor package. It can be selected by setting the tvm_utility_BACKEND
variable:
colcon build --packages-up-to tvm_utility --cmake-args -Dtvm_utility_BACKEND=vulkan\n
"},{"location":"common/tvm_utility/artifacts/","title":"TVM Utility Artifacts","text":""},{"location":"common/tvm_utility/artifacts/#tvm-utility-artifacts","title":"TVM Utility Artifacts","text":"Place any test artifacts in subdirectories within this directory.
e.g.: ./artifacts/yolo_v2_tiny
"},{"location":"control/autonomous_emergency_braking/","title":"Autonomous Emergency Braking (AEB)","text":""},{"location":"control/autonomous_emergency_braking/#autonomous-emergency-braking-aeb","title":"Autonomous Emergency Braking (AEB)","text":""},{"location":"control/autonomous_emergency_braking/#purpose-role","title":"Purpose / Role","text":"autonomous_emergency_braking
is a module that prevents collisions with obstacles on the predicted path, which is created by a control module or estimated from sensor values in the control module.
This module has the following assumptions.
AEB has the following steps before it outputs the emergency stop signal.
Activate AEB if necessary.
Generate a predicted path of the ego vehicle.
Get target obstacles from the input point cloud.
Collision check with target obstacles.
Send emergency stop signals to /diagnostics
.
We give more details of each section below.
"},{"location":"control/autonomous_emergency_braking/#1-activate-aeb-if-necessary","title":"1. Activate AEB if necessary","text":"We do not activate AEB module if it satisfies the following conditions.
AEB generates a predicted path based on current velocity and current angular velocity obtained from attached sensors. Note that if use_imu_path
is false
, it skips this step. This predicted path is generated as:
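A minimal sketch, assuming simple constant-velocity, constant-yaw-rate integration (the exact formulation follows the module's implementation):
\\[ \\begin{align} x_{k+1} &= x_{k} + v\\cos\\theta_{k} \\text{d}t \\\\ y_{k+1} &= y_{k} + v\\sin\\theta_{k} \\text{d}t \\\\ \\theta_{k+1} &= \\theta_{k} + \\omega \\text{d}t \\end{align} \\]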
where \\(v\\) and \\(\\omega\\) are the current longitudinal velocity and angular velocity respectively, and \\(dt\\) is the time interval that users can define in advance.
"},{"location":"control/autonomous_emergency_braking/#3-get-target-obstacles-from-the-input-point-cloud","title":"3. Get target obstacles from the input point cloud","text":"After generating the ego predicted path, we select target obstacles from the input point cloud. This obstacle filtering has two major steps, which are rough filtering and rigorous filtering.
"},{"location":"control/autonomous_emergency_braking/#rough-filtering","title":"Rough filtering","text":"In rough filtering step, we select target obstacle with simple filter. Create a search area up to a certain distance (default 5[m]) away from the predicted path of the ego vehicle and ignore the point cloud (obstacles) that are not within it. The image of the rough filtering is illustrated below.
"},{"location":"control/autonomous_emergency_braking/#rigorous-filtering","title":"Rigorous filtering","text":"After rough filtering, it performs a geometric collision check to determine whether the filtered obstacles actually have possibility to collide with the ego vehicle. In this check, the ego vehicle is represented as a rectangle, and the point cloud obstacles are represented as points.
"},{"location":"control/autonomous_emergency_braking/#4-collision-check-with-target-obstacles","title":"4. Collision check with target obstacles","text":"In the fourth step, it checks the collision with filtered obstacles using RSS distance. RSS is formulated as:
\\[ d = v_{ego}*t_{response} + v_{ego}^2/(2*a_{min}) - v_{obj}^2/(2*a_{obj_{min}}) + offset \\]where \\(v_{ego}\\) and \\(v_{obj}\\) are the current ego and obstacle velocities, \\(a_{min}\\) and \\(a_{obj_{min}}\\) are the ego and object minimum accelerations (maximum decelerations), and \\(t_{response}\\) is the response time of the ego vehicle to start deceleration. Therefore, if the distance from the ego vehicle to the obstacle is smaller than this RSS distance \\(d\\), the ego vehicle sends emergency stop signals. This is illustrated in the following picture.
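A minimal C++ sketch of this check, under the definitions above (the function names are illustrative, not the module's actual API):

```cpp
#include <cmath>

// RSS distance as formulated above; a_min and a_obj_min are the maximum
// decelerations (negative accelerations), so their magnitudes are used.
double calcRssDistance(
  double v_ego, double v_obj, double t_response,
  double a_min, double a_obj_min, double offset)
{
  return v_ego * t_response + (v_ego * v_ego) / (2.0 * std::fabs(a_min)) -
         (v_obj * v_obj) / (2.0 * std::fabs(a_obj_min)) + offset;
}

// An emergency stop is requested when the actual distance to the obstacle
// is smaller than the RSS distance d.
bool isEmergency(double dist_to_obstacle, double rss_distance)
{
  return dist_to_obstacle < rss_distance;
}
```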
"},{"location":"control/autonomous_emergency_braking/#5-send-emergency-stop-signals-to-diagnostics","title":"5. Send emergency stop signals to/diagnostics
","text":"If AEB detects collision with point cloud obstacles in the previous step, it sends emergency signal to /diagnostics
in this step. Note that in order to enable emergency stop, it has to send ERROR level emergency. Moreover, AEB user should modify the setting file to keep the emergency level, otherwise Autoware does not hold the emergency state.
control_performance_analysis
is the package to analyze the tracking performance of a control module and monitor the driving status of the vehicle.
This package is used as a tool to quantify the results of the control module. That's why it doesn't interfere with the core logic of autonomous driving.
Based on the various input from planning, control, and vehicle, it publishes the result of analysis as control_performance_analysis::msg::ErrorStamped
defined in this package.
All results in ErrorStamped
message are calculated in Frenet Frame of curve. Errors and velocity errors are calculated by using paper below.
Werling, Moritz & Groell, Lutz & Bretthauer, Georg. (2010). Invariant Trajectory Tracking With a Full-Size Autonomous Road Vehicle. IEEE Transactions on Robotics. 26. 758 - 765. 10.1109/TRO.2010.2052325.
If you are interested in calculations, you can see the error and error velocity calculations in section C. Asymptotical Trajectory Tracking With Orientation Control
.
Error acceleration calculations are made based on the velocity calculations above. You can see below the calculation of error acceleration.
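A minimal sketch, assuming the error acceleration is obtained by discrete differentiation of the error velocity \\(\\dot{e}\\) (the exact expression follows the cited paper):
\\[ \\begin{align} \\ddot{e}_{k} \\approx \\frac{\\dot{e}_{k} - \\dot{e}_{k-1}}{\\text{d}t} \\end{align} \\]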
"},{"location":"control/control_performance_analysis/#input-output","title":"Input / Output","text":""},{"location":"control/control_performance_analysis/#input-topics","title":"Input topics","text":"Name Type Description/planning/scenario_planning/trajectory
autoware_auto_planning_msgs::msg::Trajectory Output trajectory from planning module. /control/command/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand Output control command from control module. /vehicle/status/steering_status
autoware_auto_vehicle_msgs::msg::SteeringReport Steering information from vehicle. /localization/kinematic_state
nav_msgs::msg::Odometry Use twist from odometry. /tf
tf2_msgs::msg::TFMessage Extract ego pose from tf."},{"location":"control/control_performance_analysis/#output-topics","title":"Output topics","text":"Name Type Description /control_performance/performance_vars
control_performance_analysis::msg::ErrorStamped The result of the performance analysis. /control_performance/driving_status
control_performance_analysis::msg::DrivingMonitorStamped Driving status (acceleration, jerk etc.) monitoring"},{"location":"control/control_performance_analysis/#outputs","title":"Outputs","text":""},{"location":"control/control_performance_analysis/#control_performance_analysismsgdrivingmonitorstamped","title":"control_performance_analysis::msg::DrivingMonitorStamped","text":"Name Type Description longitudinal_acceleration
float [m / s^2] longitudinal_jerk
float [m / s^3] lateral_acceleration
float [m / s^2] lateral_jerk
float [m / s^3] desired_steering_angle
float [rad] controller_processing_time
float Timestamp between last two control command messages [ms]"},{"location":"control/control_performance_analysis/#control_performance_analysismsgerrorstamped","title":"control_performance_analysis::msg::ErrorStamped","text":"Name Type Description lateral_error
float [m] lateral_error_velocity
float [m / s] lateral_error_acceleration
float [m / s^2] longitudinal_error
float [m] longitudinal_error_velocity
float [m / s] longitudinal_error_acceleration
float [m / s^2] heading_error
float [rad] heading_error_velocity
float [rad / s] control_effort_energy
float [u * R * u^T] error_energy
float lateral_error^2 + heading_error^2 value_approximation
float V = xPx' ; Value function from DARE Lyap matrix P curvature_estimate
float [1 / m] curvature_estimate_pp
float [1 / m] vehicle_velocity_error
float [m / s] tracking_curvature_discontinuity_ability
float Measures the ability to tracking the curvature changes [abs(delta(curvature)) / (1 + abs(delta(lateral_error))
]"},{"location":"control/control_performance_analysis/#parameters","title":"Parameters","text":"Name Type Description curvature_interval_length
double Used for estimating current curvature prevent_zero_division_value
double Value to avoid zero division. Default is 0.001
odom_interval
unsigned integer Interval between odom messages, increase it for smoother curve. acceptable_max_distance_to_waypoint
double Maximum distance between trajectory point and vehicle [m] acceptable_max_yaw_difference_rad
double Maximum yaw difference between trajectory point and vehicle [rad] low_pass_filter_gain
double Low pass filter gain"},{"location":"control/control_performance_analysis/#usage","title":"Usage","text":"control_performance_analysis.launch.xml
.Plotjuggler
and use config/controller_monitor.xml
as layout.Plotjuggler
you can export the statistic (max, min, average) values as csv file. Use that statistics to compare the control modules.The control_validator
is a module that checks the validity of the output of the control component. The status of the validation can be viewed in the /diagnostics
topic.
The following features are supported for the validation and can have thresholds set by parameters:
Other features are to be implemented.
"},{"location":"control/control_validator/#inputsoutputs","title":"Inputs/Outputs","text":""},{"location":"control/control_validator/#inputs","title":"Inputs","text":"The control_validator
takes in the following inputs:
~/input/kinematics
nav_msgs/Odometry ego pose and twist ~/input/reference_trajectory
autoware_auto_control_msgs/Trajectory reference trajectory which is outputted from planning module to to be followed ~/input/predicted_trajectory
autoware_auto_control_msgs/Trajectory predicted trajectory which is outputted from control module"},{"location":"control/control_validator/#outputs","title":"Outputs","text":"It outputs the following:
Name Type Description~/output/validation_status
control_validator/ControlValidatorStatus validator status to inform the reason why the trajectory is valid/invalid /diagnostics
diagnostic_msgs/DiagnosticStatus diagnostics to report errors"},{"location":"control/control_validator/#parameters","title":"Parameters","text":"The following parameters can be set for the control_validator
:
publish_diag
bool if true, diagnostics msg is published. true diag_error_count_threshold
int the Diag will be set to ERROR when the number of consecutive invalid trajectories exceeds this threshold. (For example, threshold = 1 means that even if the trajectory is invalid, the Diag will not be ERROR if the next trajectory is valid.) true display_on_terminal
bool show error msg on terminal true"},{"location":"control/control_validator/#algorithm-parameters","title":"Algorithm parameters","text":""},{"location":"control/control_validator/#thresholds","title":"Thresholds","text":"The input trajectory is detected as invalid if the index exceeds the following thresholds.
Name Type Description Default valuethresholds.max_distance_deviation
double invalid threshold of the max distance deviation between the predicted path and the reference trajectory [m] 1.0"},{"location":"control/external_cmd_selector/","title":"external_cmd_selector","text":""},{"location":"control/external_cmd_selector/#external_cmd_selector","title":"external_cmd_selector","text":""},{"location":"control/external_cmd_selector/#purpose","title":"Purpose","text":"external_cmd_selector
is the package to publish external_control_cmd
, gear_cmd
, hazard_lights_cmd
, heartbeat
and turn_indicators_cmd
, according to the current mode, which is remote
or local
.
The current mode is set via service, remote
is remotely operated, local
is to use the values calculated by Autoware.
/api/external/set/command/local/control
TBD Local. Calculated control value. /api/external/set/command/local/heartbeat
TBD Local. Heartbeat. /api/external/set/command/local/shift
TBD Local. Gear shift like drive, rear and etc. /api/external/set/command/local/turn_signal
TBD Local. Turn signal like left turn, right turn and etc. /api/external/set/command/remote/control
TBD Remote. Calculated control value. /api/external/set/command/remote/heartbeat
TBD Remote. Heartbeat. /api/external/set/command/remote/shift
TBD Remote. Gear shift like drive, rear and etc. /api/external/set/command/remote/turn_signal
TBD Remote. Turn signal like left turn, right turn and etc."},{"location":"control/external_cmd_selector/#output-topics","title":"Output topics","text":"Name Type Description /control/external_cmd_selector/current_selector_mode
TBD Current selected mode, remote or local. /diagnostics
diagnostic_msgs::msg::DiagnosticArray Check if node is active or not. /external/selected/external_control_cmd
TBD Pass through control command with current mode. /external/selected/gear_cmd
autoware_auto_vehicle_msgs::msg::GearCommand Pass through gear command with current mode. /external/selected/hazard_lights_cmd
autoware_auto_vehicle_msgs::msg::HazardLightsCommand Pass through hazard light with current mode. /external/selected/heartbeat
TBD Pass through heartbeat with current mode. /external/selected/turn_indicators_cmd
autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand Pass through turn indicator with current mode."},{"location":"control/joy_controller/","title":"joy_controller","text":""},{"location":"control/joy_controller/#joy_controller","title":"joy_controller","text":""},{"location":"control/joy_controller/#role","title":"Role","text":"joy_controller
is the package to convert a joy msg to autoware commands (e.g. steering wheel, shift, turn signal, engage) for a vehicle.
~/input/joy
sensor_msgs::msg::Joy joy controller command ~/input/odometry
nav_msgs::msg::Odometry ego vehicle odometry to get twist"},{"location":"control/joy_controller/#output-topics","title":"Output topics","text":"Name Type Description ~/output/control_command
autoware_auto_control_msgs::msg::AckermannControlCommand lateral and longitudinal control command ~/output/external_control_command
tier4_external_api_msgs::msg::ControlCommandStamped lateral and longitudinal control command ~/output/shift
tier4_external_api_msgs::msg::GearShiftStamped gear command ~/output/turn_signal
tier4_external_api_msgs::msg::TurnSignalStamped turn signal command ~/output/gate_mode
tier4_control_msgs::msg::GateMode gate mode (Auto or External) ~/output/heartbeat
tier4_external_api_msgs::msg::Heartbeat heartbeat ~/output/vehicle_engage
autoware_auto_vehicle_msgs::msg::Engage vehicle engage"},{"location":"control/joy_controller/#parameters","title":"Parameters","text":"Parameter Type Description joy_type
string joy controller type (default: DS4) update_rate
double update rate to publish control commands accel_ratio
double ratio to calculate acceleration (commanded acceleration is ratio * operation) brake_ratio
double ratio to calculate deceleration (commanded acceleration is -ratio * operation) steer_ratio
double ratio to calculate steering angle (commanded steer is ratio * operation) steering_angle_velocity
double steering angle velocity for operation accel_sensitivity
double sensitivity to calculate acceleration for external API (commanded acceleration is pow(operation, 1 / sensitivity)) brake_sensitivity
double sensitivity to calculate deceleration for external API (commanded acceleration is pow(operation, 1 / sensitivity)) raw_control
bool skip input odometry if true velocity_gain
double ratio to calculate velocity by acceleration max_forward_velocity
double absolute max velocity to go forward max_backward_velocity
double absolute max velocity to go backward backward_accel_ratio
double ratio to calculate deceleration (commanded acceleration is -ratio * operation)"},{"location":"control/joy_controller/#p65-joystick-key-map","title":"P65 Joystick Key Map","text":"Action Button Acceleration R2 Brake L2 Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left L1 Turn Signal Right R1 Clear Turn Signal A Gate Mode B Emergency Stop Select Clear Emergency Stop Start Autoware Engage X Autoware Disengage Y Vehicle Engage PS Vehicle Disengage Right Trigger"},{"location":"control/joy_controller/#ds4-joystick-key-map","title":"DS4 Joystick Key Map","text":"Action Button Acceleration R2, \u00d7, or Right Stick Up Brake L2, \u25a1, or Right Stick Down Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left L1 Turn Signal Right R1 Clear Turn Signal SHARE Gate Mode OPTIONS Emergency Stop PS Clear Emergency Stop PS Autoware Engage \u25cb Autoware Disengage \u25cb Vehicle Engage \u25b3 Vehicle Disengage \u25b3"},{"location":"control/joy_controller/#xbox-joystick-key-map","title":"XBOX Joystick Key Map","text":"Action Button Acceleration RT Brake LT Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left LB Turn Signal Right RB Clear Turn Signal A Gate Mode B Emergency Stop View Clear Emergency Stop Menu Autoware Engage X Autoware Disengage Y Vehicle Engage Left Stick Button Vehicle Disengage Right Stick Button"},{"location":"control/lane_departure_checker/","title":"Lane Departure Checker","text":""},{"location":"control/lane_departure_checker/#lane-departure-checker","title":"Lane Departure Checker","text":"The Lane Departure Checker checks if vehicle follows a trajectory. If it does not follow the trajectory, it reports its status via diagnostic_updater
.
This package includes the following features:
Calculate the standard deviation of error ellipse(covariance) in vehicle coordinate.
1.Transform covariance into vehicle coordinate.
Calculate covariance in vehicle coordinate.
2.The longitudinal length we want to expand is correspond to marginal distribution of \\(x_{vehicle}\\), which is represented in \\(Cov_{vehicle}(0,0)\\). In the same way, the lateral length is represented in \\(Cov_{vehicle}(1,1)\\). Wikipedia reference here.
Expand footprint based on the standard deviation multiplied with footprint_margin_scale
.
nav_msgs::msg::Odometry
]autoware_auto_mapping_msgs::msg::HADMapBin
]autoware_planning_msgs::msg::LaneletRoute
]autoware_auto_planning_msgs::msg::Trajectory
]autoware_auto_planning_msgs::msg::Trajectory
]diagnostic_updater
] lane_departure : Update diagnostic level when ego vehicle is out of lane.diagnostic_updater
] trajectory_deviation : Update diagnostic level when ego vehicle deviates from trajectory.This is the design document for the lateral controller node in the trajectory_follower_node
package.
This node is used to general lateral control commands (steering angle and steering rate) when following a path.
"},{"location":"control/mpc_lateral_controller/#design","title":"Design","text":"The node uses an implementation of linear model predictive control (MPC) for accurate path tracking. The MPC uses a model of the vehicle to simulate the trajectory resulting from the control command. The optimization of the control command is formulated as a Quadratic Program (QP).
Different vehicle models are implemented:
For the optimization, a Quadratic Programming (QP) solver is used and two options are currently implemented:
Filtering is required for good noise reduction. A Butterworth filter is employed for processing the yaw and lateral errors, which are used as inputs for the MPC, as well as for refining the output steering angle. Other filtering methods can be considered as long as the noise reduction performances are good enough. The moving average filter for example is not suited and can yield worse results than without any filtering.
"},{"location":"control/mpc_lateral_controller/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The tracking is not accurate if the first point of the reference trajectory is at or in front of the current ego pose.
"},{"location":"control/mpc_lateral_controller/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/mpc_lateral_controller/#inputs","title":"Inputs","text":"Set the following from the controller_node
autoware_auto_planning_msgs/Trajectory
: reference trajectory to follow.nav_msgs/Odometry
: current odometryautoware_auto_vehicle_msgs/SteeringReport
: current steeringReturn LateralOutput which contains the following to the controller node
autoware_auto_control_msgs/AckermannLateralCommand
The MPC
class (defined in mpc.hpp
) provides the interface with the MPC algorithm. Once a vehicle model, a QP solver, and the reference trajectory to follow have been set (using setVehicleModel()
, setQPSolver()
, setReferenceTrajectory()
), a lateral control command can be calculated by providing the current steer, velocity, and pose to function calculateMPC()
.
The default parameters defined in param/lateral_controller_defaults.param.yaml
are adjusted to the AutonomouStuff Lexus RX 450h for under 40 km/h driving.
(*1) To prevent unnecessary steering movement, the steering command is fixed to the previous value in the stop state.
"},{"location":"control/mpc_lateral_controller/#steer-offset","title":"Steer Offset","text":"Defined in the steering_offset
namespace. This logic is designed as simple as possible, with minimum design parameters.
First, it's important to set the appropriate parameters for vehicle kinematics. This includes parameters like wheelbase
, which represents the distance between the front and rear wheels, and max_steering_angle
, which indicates the maximum tire steering angle. These parameters should be set in the vehicle_info.param.yaml
.
Next, you need to set the proper parameters for the dynamics model. These include the time constant steering_tau
and time delay steering_delay
for steering dynamics, and the maximum acceleration mpc_acceleration_limit
and the time constant mpc_velocity_time_constant
for velocity dynamics.
It's also important to make sure the input information is accurate. Information such as the velocity of the center of the rear wheel [m/s] and the steering angle of the tire [rad] is required. Please note that there have been frequent reports of performance degradation due to errors in input information. For instance, there are cases where the velocity of the vehicle is offset due to an unexpected difference in tire radius, or the tire angle cannot be accurately measured due to a deviation in the steering gear ratio or midpoint. It is suggested to compare information from multiple sensors (e.g., integrated vehicle speed and GNSS position, steering angle and IMU angular velocity), and ensure the input information for MPC is appropriate.
"},{"location":"control/mpc_lateral_controller/#mpc-weight-tuning","title":"MPC weight tuning","text":"Then, tune the weights of the MPC. One simple approach of tuning is to keep the weight for the lateral deviation (weight_lat_error
) constant, and vary the input weight (weight_steering_input
) while observing the trade-off between steering oscillation and control accuracy.
Here, weight_lat_error
acts to suppress the lateral error in path following, while weight_steering_input
works to adjust the steering angle to a standard value determined by the path's curvature. When weight_lat_error
is large, the steering moves significantly to improve accuracy, which can cause oscillations. On the other hand, when weight_steering_input
is large, the steering doesn't respond much to tracking errors, providing stable driving but potentially reducing tracking accuracy.
The steps are as follows:
weight_lat_error
= 0.1, weight_steering_input
= 1.0 and other weights to 0.weight_steering_input
larger.weight_steering_input
smaller.If you want to adjust the effect only in the high-speed range, you can use weight_steering_input_squared_vel
. This parameter corresponds to the steering weight in the high-speed range.
weight_lat_error
: Reduce lateral tracking error. This acts like P gain in PID.weight_heading_error
: Make a drive straight. This acts like D gain in PID.weight_heading_error_squared_vel_coeff
: Make a drive straight in high speed range.weight_steering_input
: Reduce oscillation of tracking.weight_steering_input_squared_vel_coeff
: Reduce oscillation of tracking in high speed range.weight_lat_jerk
: Reduce lateral jerk.weight_terminal_lat_error
: Preferable to set a higher value than normal lateral weight weight_lat_error
for stability.weight_terminal_heading_error
: Preferable to set a higher value than normal heading weight weight_heading_error
for stability.Here are some tips for adjusting other parameters:
weight_terminal_lat_error
and weight_terminal_heading_error
, can enhance the tracking stability. This method sometimes proves effective.prediction_horizon
and a smaller prediction_sampling_time
are efficient for tracking performance. However, these come at the cost of higher computational costs.mpc_low_curvature_thresh_curvature
and adjust mpc_low_curvature_weight_**
weights.steer_rate_lim_dps_list_by_curvature
, curvature_list_for_steer_rate_lim
, steer_rate_lim_dps_list_by_velocity
, velocity_list_for_steer_rate_lim
. By doing this, you can enforce the steering rate limit during high-speed driving or relax it while curving.curvature_smoothing
becomes critically important for accurate curvature calculations. A larger value yields a smooth curvature calculation which reduces noise but can cause delay in feedforward computation and potentially degrade performance.steering_lpf_cutoff_hz
value can also be effective to forcefully reduce computational noise. This refers to the cutoff frequency in the second order Butterworth filter installed in the final layer. The smaller the cutoff frequency, the stronger the noise reduction, but it also induce operation delay.enable_auto_steering_offset_removal
to true and activate the steering offset remover. The steering offset estimation logic works when driving at high speeds with the steering close to the center, applying offset removal.input_delay
and vehicle_model_steer_tau
. Additionally, as a part of its debug information, MPC outputs the current steering angle assumed by the MPC model, so please check if that steering angle matches the actual one.Model Predictive Control (MPC) is a control method that solves an optimization problem during each control cycle to determine an optimal control sequence based on a given vehicle model. The calculated sequence of control inputs is used to control the system.
In simpler terms, an MPC controller calculates a series of control inputs that optimize the state and output trajectories to achieve the desired behavior. The key characteristics of an MPC control system can be summarized as follows:
The choice between a linear or nonlinear model or constraint equation depends on the specific formulation of the MPC problem. If any nonlinear expressions are present in the motion equation or constraints, the optimization problem becomes nonlinear. In the following sections, we provide a step-by-step explanation of how linear and nonlinear optimization problems are solved within the MPC framework. Note that in this documentation, we utilize the linearization method to accommodate the nonlinear model.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#linear-mpc-formulation","title":"Linear MPC formulation","text":""},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#formulate-as-an-optimization-problem","title":"Formulate as an optimization problem","text":"This section provides an explanation of MPC specifically for linear systems. In the following section, it also demonstrates the formulation of a vehicle path following problem as an application.
In the linear MPC formulation, all motion and constraint expressions are linear. For the path following problem, let's assume that the system's motion can be described by a set of equations, denoted as (1). The state evolution and measurements are presented in a discrete state space format, where matrices \\(A\\), \\(B\\), and \\(C\\) represent the state transition, control, and measurement matrices, respectively.
\\[ \\begin{gather} x_{k+1}=Ax_{k}+Bu_{k}+w_{k}, y_{k}=Cx_{k} \\tag{1} \\\\ x_{k}\\in R^{n},u_{k}\\in R^{m},w_{k}\\in R^{n}, y_{k}\\in R^{l}, A\\in R^{n\\times n}, B\\in R^{n\\times m}, C\\in R^{l \\times n} \\end{gather} \\]Equation (1) represents the state-space equation, where \\(x_k\\) represents the internal states, \\(u_k\\) denotes the input, and \\(w_k\\) represents a known disturbance caused by linearization or problem structure. The measurements are indicated by the variable \\(y_k\\).
It's worth noting that another advantage of MPC is its ability to effectively handle the disturbance term \\(w\\). While it is referred to as a disturbance here, it can take various forms as long as it adheres to the equation's structure.
The state transition and measurement equations in (1) are iterative, moving from time \\(k\\) to time \\(k+1\\). By propagating the equation starting from an initial state and control pair \\((x_0, u_0)\\) along with a specified horizon of \\(N\\) steps, one can predict the trajectories of states and measurements.
For simplicity, let's assume the initial state is \\(x_0\\) with \\(k=0\\).
To begin, we can compute the state \\(x_1\\) at \\(k=1\\) using equation (1) by substituting the initial state into the equation. Since we are seeking a solution for the input sequence, we represent the inputs as decision variables in the symbolic expressions.
\\[ \\begin{align} x_{1} = Ax_{0} + Bu_{0} + w_{0} \\tag{2} \\end{align} \\]Then, when \\(k=2\\), using also equation (2), we get
\\[ \\begin{align} x_{2} & = Ax_{1} + Bu_{1} + w_{1} \\\\ & = A(Ax_{0} + Bu_{0} + w_{0}) + Bu_{1} + w_{1} \\\\ & = A^{2}x_{0} + ABu_{0} + Aw_{0} + Bu_{1} + w_{1} \\\\ & = A^{2}x_{0} + \\begin{bmatrix}AB & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\ u_{1} \\end{bmatrix} + \\begin{bmatrix}A & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\ w_{1} \\end{bmatrix} \\tag{3} \\end{align} \\]When \\(k=3\\) , from equation (3)
\\[ \\begin{align} x_{3} & = Ax_{2} + Bu_{2} + w_{2} \\\\ & = A(A^{2}x_{0} + ABu_{0} + Bu_{1} + Aw_{0} + w_{1} ) + Bu_{2} + w_{2} \\\\ & = A^{3}x_{0} + A^{2}Bu_{0} + ABu_{1} + A^{2}w_{0} + Aw_{1} + Bu_{2} + w_{2} \\\\ & = A^{3}x_{0} + \\begin{bmatrix}A^{2}B & AB & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\ u_{1} \\\\ u_{2} \\end{bmatrix} + \\begin{bmatrix} A^{2} & A & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\ w_{1} \\\\ w_{2} \\end{bmatrix} \\tag{4} \\end{align} \\]If \\(k=n\\) , then
\\[ \\begin{align} x_{n} = A^{n}x_{0} + \\begin{bmatrix}A^{n-1}B & A^{n-2}B & \\dots & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\ u_{1} \\\\ \\vdots \\\\ u_{n-1} \\end{bmatrix} + \\begin{bmatrix} A^{n-1} & A^{n-2} & \\dots & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\ w_{1} \\\\ \\vdots \\\\ w_{n-1} \\end{bmatrix} \\tag{5} \\end{align} \\]Putting all of them together with (2) to (5) yields the following matrix equation;
\\[ \\begin{align} \\begin{bmatrix}x_{1}\\\\ x_{2} \\\\ x_{3} \\\\ \\vdots \\\\ x_{n} \\end{bmatrix} = \\begin{bmatrix}A^{1}\\\\ A^{2} \\\\ A^{3} \\\\ \\vdots \\\\ A^{n} \\end{bmatrix}x_{0} + \\begin{bmatrix}B & 0 & \\dots & & 0 \\\\ AB & B & 0 & \\dots & 0 \\\\ A^{2}B & AB & B & \\dots & 0 \\\\ \\vdots & \\vdots & & & 0 \\\\ A^{n-1}B & A^{n-2}B & \\dots & AB & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\ u_{1} \\\\ u_{2} \\\\ \\vdots \\\\ u_{n-1} \\end{bmatrix} \\\\ + \\begin{bmatrix}I & 0 & \\dots & & 0 \\\\ A & I & 0 & \\dots & 0 \\\\ A^{2} & A & I & \\dots & 0 \\\\ \\vdots & \\vdots & & & 0 \\\\ A^{n-1} & A^{n-2} & \\dots & A & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\ w_{1} \\\\ w_{2} \\\\ \\vdots \\\\ w_{n-1} \\end{bmatrix} \\tag{6} \\end{align} \\]In this case, the measurements (outputs) become; \\(y_{k}=Cx_{k}\\), so
\\[ \\begin{align} \\begin{bmatrix}y_{1}\\\\ y_{2} \\\\ y_{3} \\\\ \\vdots \\\\ y_{n} \\end{bmatrix} = \\begin{bmatrix}C & 0 & \\dots & & 0 \\\\ 0 & C & 0 & \\dots & 0 \\\\ 0 & 0 & C & \\dots & 0 \\\\ \\vdots & & & \\ddots & 0 \\\\ 0 & \\dots & 0 & 0 & C \\end{bmatrix}\\begin{bmatrix}x_{1}\\\\ x_{2} \\\\ x_{3} \\\\ \\vdots \\\\ x_{n} \\end{bmatrix} \\tag{7} \\end{align} \\]We can combine equations (6) and (7) into the following form:
\\[ \\begin{align} X = Fx_{0} + GU +SW, Y=HX \\tag{8} \\end{align} \\]This form is similar to the original state-space equations (1), but it introduces new matrices: the state transition matrix \\(F\\), control matrix \\(G\\), disturbance matrix \\(W\\), and measurement matrix \\(H\\). In these equations, \\(X\\) represents the predicted states, given by \\(\\begin{bmatrix}x_{1} & x_{2} & \\dots & x_{n} \\end{bmatrix}^{T}\\).
Now that \\(G\\), \\(S\\), \\(W\\), and \\(H\\) are known, we can express the output behavior \\(Y\\) for the next \\(n\\) steps as a function of the input \\(U\\). This allows us to calculate the control input \\(U\\) so that \\(Y(U)\\) follows the target trajectory \\(Y_{ref}\\).
The next step is to define a cost function. The cost function generally uses the following quadratic form;
\\[ \\begin{align} J = (Y - Y_{ref})^{T}Q(Y - Y_{ref}) + (U - U_{ref})^{T}R(U - U_{ref}) \\tag{9} \\end{align} \\]where \\(U_{ref}\\) is the target or steady-state input around which the system is linearized for \\(U\\).
This cost function is the same as that of the LQR controller. The first term of \\(J\\) penalizes the deviation from the reference trajectory. The second term penalizes the deviation from the reference (or steady-state) control trajectory. The \\(Q\\) and \\(R\\) are the cost weights Positive and Positive semi-semidefinite matrices.
Note: in some cases, \\(U_{ref}=0\\) is used, but this can mean the steering angle should be set to \\(0\\) even if the vehicle is turning a curve. Thus \\(U_{ref}\\) is used for the explanation here. This \\(U_{ref}\\) can be pre-calculated from the curvature of the target trajectory or the steady-state analyses.
As the resulting trajectory output is now \\(Y=Y(x_{0}, U)\\), the cost function depends only on U and the initial state conditions which yields the cost \\(J=J(x_{0}, U)\\). Let\u2019s find the \\(U\\) that minimizes this.
Substituting equation (8) into equation (9) and tidying up the equation for \\(U\\).
\\[ \\begin{align} J(U) &= (H(Fx_{0}+GU+SW)-Y_{ref})^{T}Q(H(Fx_{0}+GU+SW)-Y_{ref})+(U-U_{ref})^{T}R(U-U_{ref}) \\\\ & =U^{T}(G^{T}H^{T}QHG+R)U+2\\left\\{(H(Fx_{0}+SW)-Y_{ref})^{T}QHG-U_{ref}^{T}R\\right\\}U +(\\rm{constant}) \\tag{10} \\end{align} \\]This equation is a quadratic form of \\(U\\) (i.e. \\(U^{T}AU+B^{T}U\\))
The coefficient matrix of the quadratic term of \\(U\\), \\(G^{T}C^{T}QCG+R\\) , is positive definite due to the positive and semi-positive definiteness requirement for \\(Q\\) and \\(R\\). Therefore, the cost function is a convex quadratic function in U, which can efficiently be solved by convex optimization.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#apply-to-vehicle-path-following-problem-nonlinear-problem","title":"Apply to vehicle path-following problem (nonlinear problem)","text":"Because the path-following problem with a kinematic vehicle model is nonlinear, we cannot directly use the linear MPC methods described in the preceding section. There are several ways to deal with a nonlinearity such as using the nonlinear optimization solver. Here, the linearization is applied to the nonlinear vehicle model along the reference trajectory, and consequently, the nonlinear model is converted into a linear time-varying model.
For a nonlinear kinematic vehicle model, the discrete-time update equations are as follows:
\\[ \\begin{align} x_{k+1} &= x_{k} + v\\cos\\theta_{k} \\text{d}t \\\\ y_{k+1} &= y_{k} + v\\sin\\theta_{k} \\text{d}t \\\\ \\theta_{k+1} &= \\theta_{k} + \\frac{v\\tan\\delta_{k}}{L} \\text{d}t \\tag{11} \\\\ \\delta_{k+1} &= \\delta_{k} - \\tau^{-1}\\left(\\delta_{k}-\\delta_{des}\\right)\\text{d}t \\end{align} \\]The vehicle reference is the center of the rear axle and all states are measured at this point. The states, parameters, and control variables are shown in the following table.
Symbol Represent \\(v\\) Vehicle speed measured at the center of rear axle \\(\\theta\\) Yaw (heading angle) in global coordinate system \\(\\delta\\) Vehicle steering angle \\(\\delta_{des}\\) Vehicle target steering angle \\(L\\) Vehicle wheelbase (distance between the rear and front axles) \\(\\tau\\) Time constant for the first order steering dynamicsWe assume in this example that the MPC only generates the steering control, and the trajectory generator gives the vehicle speed along the trajectory.
The kinematic vehicle model discrete update equations contain trigonometric functions; sin and cos, and the vehicle coordinates \\(x\\), \\(y\\), and yaw angles are global coordinates. In path tracking applications, it is common to reformulate the model in error dynamics to convert the control into a regulator problem in which the targets become zero (zero error).
We make small angle assumptions for the following derivations of linear equations. Given the nonlinear dynamics and omitting the longitudinal coordinate \\(x\\), the resulting set of equations become;
\\[ \\begin{align} y_{k+1} &= y_{k} + v\\sin\\theta_{k} \\text{d}t \\\\ \\theta_{k+1} &= \\theta_{k} + \\frac{v\\tan\\delta_{k}}{L} \\text{d}t - \\kappa_{r}v\\cos\\theta_{k}\\text{d}t \\tag{12} \\\\ \\delta_{k+1} &= \\delta_{k} - \\tau^{-1}\\left(\\delta_{k}-\\delta_{des}\\right)\\text{d}t \\end{align} \\]Where \\(\\kappa_{r}\\left(s\\right)\\) is the curvature along the trajectory parametrized by the arc length.
There are three expressions in the update equations that are subject to linear approximation: the lateral deviation (or lateral coordinate) \\(y\\), the heading angle (or the heading angle error) \\(\\theta\\), and the steering \\(\\delta\\). We can make a small angle assumption on the heading angle \\(\\theta\\).
In the path tracking problem, the curvature of the trajectory \\(\\kappa_{r}\\) is known in advance. At the lower speeds, the Ackermann formula approximates the reference steering angle \\(\\theta_{r}\\)(this value corresponds to the \\(U_{ref}\\) mentioned above). The Ackermann steering expression can be written as;
\\[ \\begin{align} \\delta_{r} = \\arctan\\left(L\\kappa_{r}\\right) \\end{align} \\]When the vehicle is turning a path, its steer angle \\(\\delta\\) should be close to the value \\(\\delta_{r}\\). Therefore, \\(\\delta\\) can be expressed,
\\[ \\begin{align} \\delta = \\delta_{r} + \\Delta \\delta, \\Delta\\delta \\ll 1 \\end{align} \\]Substituting this equation into equation (12), and approximate \\(\\Delta\\delta\\) to be small.
\\[ \\begin{align} \\tan\\delta &\\simeq \\tan\\delta_{r} + \\frac{\\text{d}\\tan\\delta}{\\text{d}\\delta} \\Biggm|_{\\delta=\\delta_{r}}\\Delta\\delta \\\\ &= \\tan \\delta_{r} + \\frac{1}{\\cos^{2}\\delta_{r}}\\Delta\\delta \\\\ &= \\tan \\delta_{r} + \\frac{1}{\\cos^{2}\\delta_{r}}\\left(\\delta-\\delta_{r}\\right) \\\\ &= \\tan \\delta_{r} - \\frac{\\delta_{r}}{\\cos^{2}\\delta_{r}} + \\frac{1}{\\cos^{2}\\delta_{r}}\\delta \\end{align} \\]Using this, \\(\\theta_{k+1}\\) can be expressed
\\[ \\begin{align} \\theta_{k+1} &= \\theta_{k} + \\frac{v\\tan\\delta_{k}}{L}\\text{d}t - \\kappa_{r}v\\cos\\delta_{k}\\text{d}t \\\\ &\\simeq \\theta_{k} + \\frac{v}{L}\\text{d}t\\left(\\tan\\delta_{r} - \\frac{\\delta_{r}}{\\cos^{2}\\delta_{r}} + \\frac{1}{\\cos^{2}\\delta_{r}}\\delta_{k} \\right) - \\kappa_{r}v\\text{d}t \\\\ &= \\theta_{k} + \\frac{v}{L}\\text{d}t\\left(L\\kappa_{r} - \\frac{\\delta_{r}}{\\cos^{2}\\delta_{r}} + \\frac{1}{\\cos^{2}\\delta_{r}}\\delta_{k} \\right) - \\kappa_{r}v\\text{d}t \\\\ &= \\theta_{k} + \\frac{v}{L}\\frac{\\text{d}t}{\\cos^{2}\\delta_{r}}\\delta_{k} - \\frac{v}{L}\\frac{\\delta_{r}\\text{d}t}{\\cos^{2}\\delta_{r}} \\end{align} \\]Finally, the linearized time-varying model equation becomes;
\\[ \\begin{align} \\begin{bmatrix} y_{k+1} \\\\ \\theta_{k+1} \\\\ \\delta_{k+1} \\end{bmatrix} = \\begin{bmatrix} 1 & v\\text{d}t & 0 \\\\ 0 & 1 & \\frac{v}{L}\\frac{\\text{d}t}{\\cos^{2}\\delta_{r}} \\\\ 0 & 0 & 1 - \\tau^{-1}\\text{d}t \\end{bmatrix} \\begin{bmatrix} y_{k} \\\\ \\theta_{k} \\\\ \\delta_{k} \\end{bmatrix} + \\begin{bmatrix} 0 \\\\ 0 \\\\ \\tau^{-1}\\text{d}t \\end{bmatrix}\\delta_{des} + \\begin{bmatrix} 0 \\\\ -\\frac{v}{L}\\frac{\\delta_{r}\\text{d}t}{\\cos^{2}\\delta_{r}} \\\\ 0 \\end{bmatrix} \\end{align} \\]This equation has the same form as equation (1) of the linear MPC assumption, but the matrices \\(A\\), \\(B\\), and \\(w\\) change depending on the coordinate transformation. To make this explicit, the entire equation is written as follows
\\[ \\begin{align} x_{k+1} = A_{k}x_{k} + B_{k}u_{k}+w_{k} \\end{align} \\]Comparing equation (1), \\(A \\rightarrow A_{k}\\). This means that the \\(A\\) matrix is a linear approximation in the vicinity of the trajectory after \\(k\\) steps (i.e., \\(k* \\text{d}t\\) seconds), and it can be obtained if the trajectory is known in advance.
Using this equation, write down the update equation likewise (2) ~ (6)
\\[ \\begin{align} \\begin{bmatrix} x_{1} \\\\ x_{2} \\\\ x_{3} \\\\ \\vdots \\\\ x_{n} \\end{bmatrix} = \\begin{bmatrix} A_{1} \\\\ A_{1}A_{0} \\\\ A_{2}A_{1}A_{0} \\\\ \\vdots \\\\ \\prod_{i=0}^{n-1} A_{k} \\end{bmatrix} x_{0} + \\begin{bmatrix} B_{0} & 0 & \\dots & & 0 \\\\ A_{1}B_{0} & B_{1} & 0 & \\dots & 0 \\\\ A_{2}A_{1}B_{0} & A_{2}B_{1} & B_{2} & \\dots & 0 \\\\ \\vdots & \\vdots & &\\ddots & 0 \\\\ \\prod_{i=1}^{n-1} A_{k}B_{0} & \\prod_{i=2}^{n-1} A_{k}B_{1} & \\dots & A_{n-1}B_{n-1} & B_{n-1} \\end{bmatrix} \\begin{bmatrix} u_{0} \\\\ u_{1} \\\\ u_{2} \\\\ \\vdots \\\\ u_{n-1} \\end{bmatrix} + \\begin{bmatrix} I & 0 & \\dots & & 0 \\\\ A_{1} & I & 0 & \\dots & 0 \\\\ A_{2}A_{1} & A_{2} & I & \\dots & 0 \\\\ \\vdots & \\vdots & &\\ddots & 0 \\\\ \\prod_{i=1}^{n-1} A_{k} & \\prod_{i=2}^{n-1} A_{k} & \\dots & A_{n-1} & I \\end{bmatrix} \\begin{bmatrix} w_{0} \\\\ w_{1} \\\\ w_{2} \\\\ \\vdots \\\\ w_{n-1} \\end{bmatrix} \\end{align} \\]As it has the same form as equation (6), convex optimization is applicable for as much as the model in the former section.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#the-cost-functions-and-constraints","title":"The cost functions and constraints","text":"In this section, we give the details on how to set up the cost function and constraint conditions.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#the-cost-function","title":"The cost function","text":""},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#weight-for-error-and-input","title":"Weight for error and input","text":"MPC states and control weights appear in the cost function in a similar way as LQR (9). In the vehicle path following the problem described above, if C is the unit matrix, the output \\(y = x = \\left[y, \\theta, \\delta\\right]\\). (To avoid confusion with the y-directional deviation, here \\(e\\) is used for the lateral deviation.)
As an example, let's determine the weight matrix \\(Q_{1}\\) of the evaluation function for the number of prediction steps \\(n=2\\) system as follows.
\\[ \\begin{align} Q_{1} = \\begin{bmatrix} q_{e} & 0 & 0 & 0 & 0& 0 \\\\ 0 & q_{\\theta} & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & q_{e} & 0 & 0 \\\\ 0 & 0 & 0 & 0 & q_{\\theta} & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 \\end{bmatrix} \\end{align} \\]The first term in the cost function (9) with \\(n=2\\), is shown as follow (\\(Y_{ref}\\) is set to \\(0\\))
\\[ \\begin{align} q_{e}\\left(e_{0}^{2} + e_{1}^{2} \\right) + q_{\\theta}\\left(\\theta_{0}^{2} + \\theta_{1}^{2} \\right) \\end{align} \\]This shows that \\(q_{e}\\) is the weight for the lateral error and \\(q\\) is for the angular error. In this example, \\(q_{e}\\) acts as the proportional - P gain and \\(q_{\\theta}\\) as the derivative - D gain for the lateral tracking error. The balance of these factors (including R) will be determined through actual experiments.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#weight-for-non-diagonal-term","title":"Weight for non-diagonal term","text":"MPC can handle the non-diagonal term in its calculation (as long as the resulting matrix is positive definite).
For instance, write \\(Q_{2}\\) as follows for the \\(n=2\\) system.
\\[ \\begin{align} Q_{2} = \\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & q_{d} & 0 & 0 & -q_{d} \\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & -q_{d} & 0 & 0 & q_{d} \\end{bmatrix} \\end{align} \\]Expanding the first term of the evaluation function using \\(Q_{2}\\)
\\[ \\begin{align} q_{d}\\left(\\delta_{0}^{2} -2\\delta_{0}\\delta_{1} + \\delta_{1}^{2} \\right) = q_{d}\\left( \\delta_{0} - \\delta_{1}\\right)^{2} \\end{align} \\]The value of \\(q_{d}\\) is weighted by the amount of change in \\(\\delta\\), which will prevent the tire from moving quickly. By adding this section, the system can evaluate the balance between tracking accuracy and change of steering wheel angle.
Since the weight matrix can be added linearly, the final weight can be set as \\(Q = Q_{1} + Q_{2}\\).
Furthermore, MPC optimizes over a period of time, the time-varying weight can be considered in the optimization.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#constraints","title":"Constraints","text":""},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#input-constraint","title":"Input constraint","text":"The main advantage of MPC controllers is the capability to deal with any state or input constraints. The constraints can be expressed as box constraints, such as \"the tire angle must be within \u00b130 degrees\", and can be put in the following form;
\\[ \\begin{align} u_{min} < u < u_{max} \\end{align} \\]The constraints must be linear and convex in the linear MPC applications.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#constraints-on-the-derivative-of-the-input","title":"Constraints on the derivative of the input","text":"We can also put constraints on the input deviations. As the derivative of steering angle is \\(\\dot{u}\\), its box constraint is
\\[ \\begin{align} \\dot{u}_{min} < \\dot{u} < \\dot{u}_{max} \\end{align} \\]We discretize \\(\\dot{u}\\) as \\(\\left(u_{k} - u_{k-1}\\right)/\\text{d}t\\) and multiply both sides by dt, and the resulting constraint become linear and convex
\\[ \\begin{align} \\dot{u}_{min}\\text{d}t < u_{k} - u_{k-1} < \\dot{u}_{max}\\text{d}t \\end{align} \\]Along the prediction or control horizon, i.e for setting \\(n=3\\)
\\[ \\begin{align} \\dot{u}_{min}\\text{d}t < u_{1} - u_{0} < \\dot{u}_{max}\\text{d}t \\\\ \\dot{u}_{min}\\text{d}t < u_{2} - u_{1} < \\dot{u}_{max}\\text{d}t \\end{align} \\]and aligning the inequality signs
\\[ \\begin{align} u_{1} - u_{0} &< \\dot{u}_{max}\\text{d}t \\\\ + u_{1} + u_{0} &< -\\dot{u}_{min}\\text{d}t \\\\ u_{2} - u_{1} &< \\dot{u}_{max}\\text{d}t \\\\ + u_{2} + u_{1} &< - \\dot{u}_{min}\\text{d}t \\end{align} \\]We can obtain a matrix expression for the resulting constraint equation in the form of
\\[ \\begin{align} Ax \\leq b \\end{align} \\]Thus, putting this inequality to fit the form above, the constraints against \\(\\dot{u}\\) can be included at the first-order approximation level.
\\[ \\begin{align} \\begin{bmatrix} -1 & 1 & 0 \\\\ 1 & -1 & 0 \\\\ 0 & -1 & 1 \\\\ 0 & 1 & -1 \\end{bmatrix}\\begin{bmatrix} u_{0} \\\\ u_{1} \\\\ u_{2} \\end{bmatrix} \\leq \\begin{bmatrix} \\dot{u}_{max}\\text{d}t \\\\ -\\dot{u}_{min}\\text{d}t \\\\ \\dot{u}_{max}\\text{d}t \\\\ -\\dot{u}_{min}\\text{d}t \\end{bmatrix} \\end{align} \\]"},{"location":"control/obstacle_collision_checker/","title":"obstacle_collision_checker","text":""},{"location":"control/obstacle_collision_checker/#obstacle_collision_checker","title":"obstacle_collision_checker","text":""},{"location":"control/obstacle_collision_checker/#purpose","title":"Purpose","text":"obstacle_collision_checker
is a module to check obstacle collision for predicted trajectory and publish diagnostic errors if collision is found.
Check that obstacle_collision_checker
receives no ground pointcloud, predicted_trajectory, reference trajectory, and current velocity data.
If any collision is found on predicted path, this module sets ERROR
level as diagnostic status else sets OK
.
~/input/trajectory
autoware_auto_planning_msgs::msg::Trajectory
Reference trajectory ~/input/trajectory
autoware_auto_planning_msgs::msg::Trajectory
Predicted trajectory /perception/obstacle_segmentation/pointcloud
sensor_msgs::msg::PointCloud2
Pointcloud of obstacles which the ego-vehicle should stop or avoid /tf
tf2_msgs::msg::TFMessage
TF /tf_static
tf2_msgs::msg::TFMessage
TF static"},{"location":"control/obstacle_collision_checker/#output","title":"Output","text":"Name Type Description ~/debug/marker
visualization_msgs::msg::MarkerArray
Marker for visualization"},{"location":"control/obstacle_collision_checker/#parameters","title":"Parameters","text":"Name Type Description Default value delay_time
double
Delay time of vehicle [s] 0.3 footprint_margin
double
Foot print margin [m] 0.0 max_deceleration
double
Max deceleration for ego vehicle to stop [m/s^2] 2.0 resample_interval
double
Interval for resampling trajectory [m] 0.3 search_radius
double
Search distance from trajectory to point cloud [m] 5.0"},{"location":"control/obstacle_collision_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"To perform proper collision check, it is necessary to get probably predicted trajectory and obstacle pointclouds without noise.
"},{"location":"control/operation_mode_transition_manager/","title":"operation_mode_transition_manager","text":""},{"location":"control/operation_mode_transition_manager/#operation_mode_transition_manager","title":"operation_mode_transition_manager","text":""},{"location":"control/operation_mode_transition_manager/#purpose-use-cases","title":"Purpose / Use cases","text":"This module is responsible for managing the different modes of operation for the Autoware system. The possible modes are:
Autonomous
: the vehicle is fully controlled by the autonomous driving systemLocal
: the vehicle is controlled by a physically connected control system such as a joystick
: the vehicle is controlled by a remote controllerStop
: the vehicle is stopped and there is no active control system.There is also an In Transition
state that occurs during each mode transitions. During this state, the transition to the new operator is not yet complete, and the previous operator is still responsible for controlling the system until the transition is complete. Some actions may be restricted during the In Transition
state, such as sudden braking or steering. (This is restricted by the vehicle_cmd_gate
).
Autonomous
, Local
, Remote
and Stop
based on the indication command.In Transition
mode (this is done with vehicle_cmd_gate
feature).Autonomous
, Local
, Remote
, and Stop
modes based on the indicated command.In Transition
mode (using the vehicle_cmd_gate
feature).A rough design of the relationship between `operation_mode_transition_manager`` and the other nodes is shown below.
A more detailed structure is below.
Here we see that operation_mode_transition_manager
has multiple state transitions as follows
For the mode transition:
tier4_system_msgs/srv/ChangeAutowareControl
]: change operation mode to Autonomoustier4_system_msgs/srv/ChangeOperationMode
]: change operation modeFor the transition availability/completion check:
autoware_auto_control_msgs/msg/AckermannControlCommand
]: vehicle control signalnav_msgs/msg/Odometry
]: ego vehicle stateautoware_auto_planning_msgs/msg/Trajectory
]: planning trajectoryautoware_auto_vehicle_msgs/msg/ControlModeReport
]: vehicle control mode (autonomous/manual)autoware_adapi_v1_msgs/msg/OperationModeState
]: the operation mode in the vehicle_cmd_gate
. (To be removed)For the backward compatibility (to be removed):
autoware_auto_vehicle_msgs/msg/Engage
]tier4_control_msgs/msg/GateMode
]tier4_control_msgs/msg/ExternalCommandSelectorMode
]autoware_adapi_v1_msgs/msg/OperationModeState
]: to inform the current operation modeoperation_mode_transition_manager/msg/OperationModeTransitionManagerDebug
]: detailed information about the operation mode transitiontier4_control_msgs/msg/GateMode
]: to change the vehicle_cmd_gate
state to use its features (to be removed)autoware_auto_vehicle_msgs/msg/Engage
]:autoware_auto_vehicle_msgs/srv/ControlModeCommand
]: to change the vehicle control mode (autonomous/manual)tier4_control_msgs/srv/ExternalCommandSelect
]:transition_timeout
double
If the state transition is not completed within this time, it is considered a transition failure. 10.0 frequency_hz
double
running hz 10.0 enable_engage_on_driving
bool
Set true if you want to engage the autonomous driving mode while the vehicle is driving. If set to false, it will deny Engage in any situation where the vehicle speed is not zero. Note that if you use this feature without adjusting the parameters, it may cause issues like sudden deceleration. Before using, please ensure the engage condition and the vehicle_cmd_gate transition filter are appropriately adjusted. 0.1 check_engage_condition
bool
If false, autonomous transition is always available 0.1 nearest_dist_deviation_threshold
double
distance threshold used to find nearest trajectory point 3.0 nearest_yaw_deviation_threshold
double
angle threshold used to find nearest trajectory point 1.57 For engage_acceptable_limits
related parameters:
| Name | Type | Description | Default value |
| ---- | ---- | ----------- | ------------- |
| `allow_autonomous_in_stopped` | bool | If true, autonomous transition is available when the vehicle is stopped even if other checks fail. | true |
| `dist_threshold` | double | the distance between the trajectory and ego vehicle must be within this distance for `Autonomous` transition. | 1.5 |
| `yaw_threshold` | double | the yaw angle between trajectory and ego vehicle must be within this threshold for `Autonomous` transition. | 0.524 |
| `speed_upper_threshold` | double | the velocity deviation between the control command and ego vehicle must be within this threshold for `Autonomous` transition. | 10.0 |
| `speed_lower_threshold` | double | the velocity deviation between the control command and ego vehicle must be within this threshold for `Autonomous` transition. | -10.0 |
| `acc_threshold` | double | the control command acceleration must be less than this threshold for `Autonomous` transition. | 1.5 |
| `lateral_acc_threshold` | double | the control command lateral acceleration must be less than this threshold for `Autonomous` transition. | 1.0 |
| `lateral_acc_diff_threshold` | double | the lateral acceleration deviation between successive control commands must be less than this threshold for `Autonomous` transition. | 0.5 |

For `stable_check` related parameters:
| Name | Type | Description | Default value |
| ---- | ---- | ----------- | ------------- |
| `duration` | double | the stable condition must be satisfied for this duration to complete the transition. | 0.1 |
| `dist_threshold` | double | the distance between the trajectory and ego vehicle must be within this distance to complete `Autonomous` transition. | 1.5 |
| `yaw_threshold` | double | the yaw angle between trajectory and ego vehicle must be within this threshold to complete `Autonomous` transition. | 0.262 |
| `speed_upper_threshold` | double | the velocity deviation between the control command and ego vehicle must be within this threshold to complete `Autonomous` transition. | 2.0 |
| `speed_lower_threshold` | double | the velocity deviation between the control command and ego vehicle must be within this threshold to complete `Autonomous` transition. | 2.0 |

"},{"location":"control/operation_mode_transition_manager/#engage-check-behavior-on-each-parameter-setting","title":"Engage check behavior on each parameter setting","text":"This matrix describes the scenarios in which the vehicle can be engaged based on the combinations of parameter settings:
| `enable_engage_on_driving` | `check_engage_condition` | `allow_autonomous_in_stopped` | Scenarios where engage is permitted |
| - | - | - | ----------------------------------- |
| x | x | x | Only when the vehicle is stationary. |
| x | x | o | Only when the vehicle is stationary. |
| x | o | x | When the vehicle is stationary and all engage conditions are met. |
| x | o | o | Only when the vehicle is stationary. |
| o | x | x | At any time (Caution: Not recommended). |
| o | x | o | At any time (Caution: Not recommended). |
| o | o | x | When all engage conditions are met, regardless of vehicle status. |
| o | o | o | When all engage conditions are met or the vehicle is stationary. |

"},{"location":"control/operation_mode_transition_manager/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"`vehicle_cmd_gate` due to its strong connection.

The `longitudinal_controller` computes the target acceleration to achieve the target velocity set at each point of the target trajectory using feedforward/feedback control.
It also contains a slope force correction that takes into account road slope information, and a delay compensation function. It is assumed that the target acceleration calculated here will be properly realized by the vehicle interface.
Note that the use of this module is not mandatory for Autoware if the vehicle supports the \"target speed\" interface.
"},{"location":"control/pid_longitudinal_controller/#design-inner-workings-algorithms","title":"Design / Inner-workings / Algorithms","text":""},{"location":"control/pid_longitudinal_controller/#states","title":"States","text":"This module has four state transitions as shown below in order to handle special processing in a specific situation.
The state transition diagram is shown below.
"},{"location":"control/pid_longitudinal_controller/#logics","title":"Logics","text":""},{"location":"control/pid_longitudinal_controller/#control-block-diagram","title":"Control Block Diagram","text":""},{"location":"control/pid_longitudinal_controller/#feedforward-ff","title":"FeedForward (FF)","text":"The reference acceleration set in the trajectory and slope compensation terms are output as a feedforward. Under ideal conditions with no modeling error, this FF term alone should be sufficient for velocity tracking.
Tracking errors caused by modeling or discretization errors are removed by the feedback control (currently a PID controller).
"},{"location":"control/pid_longitudinal_controller/#brake-keeping","title":"Brake keeping","text":"From the viewpoint of ride comfort, stopping with 0 acceleration is important because it reduces the impact of braking. However, if the target acceleration when stopping is 0, the vehicle may cross over the stop line or accelerate a little in front of the stop line due to vehicle model error or gradient estimation error.
For reliable stopping, the target acceleration calculated by the FeedForward system is limited to a negative acceleration when stopping.
"},{"location":"control/pid_longitudinal_controller/#slope-compensation","title":"Slope compensation","text":"Based on the slope information, a compensation term is added to the target acceleration.
There are two sources of the slope information, which can be switched by a parameter.
Note: This function works correctly only in a vehicle system that does not have acceleration feedback in the low-level control system.
This compensation adds gravity correction to the target acceleration, resulting in an output value that is no longer equal to the target acceleration that the autonomous driving system desires. Therefore, it conflicts with the role of the acceleration feedback in the low-level controller. For instance, if the vehicle is attempting to start with an acceleration of 1.0 m/s^2
and a gravity correction of -1.0 m/s^2
is applied, the output value will be 0
. If this output value is mistakenly treated as the target acceleration, the vehicle will not start.
A suitable example of a vehicle system for the slope compensation function is one in which the output acceleration from the longitudinal_controller is converted into a target accel/brake pedal input without any feedback. In this case, the output acceleration is just used as a feedforward term to calculate the target pedal, and hence the issue mentioned above does not arise.
Note: The angle of the slope is defined as positive for an uphill slope, while the pitch angle of the ego pose is defined as negative when facing upward; the two conventions have opposite signs.
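For illustration, a minimal sketch of the compensation term under the ego-pose pitch convention above (hypothetical function and variable names, not the actual implementation):

```cpp
#include <cmath>

// A minimal sketch, assuming the ego-pose pitch convention described above
// (pitch is negative when the vehicle faces upward). Not the actual
// pid_longitudinal_controller code.
double addSlopeCompensation(const double target_acc, const double pitch_rad)
{
  constexpr double gravity = 9.80665;  // [m/s^2]
  // Uphill: pitch_rad < 0, so the term below is positive and raises the
  // commanded acceleration to counteract gravity along the road surface.
  return target_acc - gravity * std::sin(pitch_rad);
}
```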
"},{"location":"control/pid_longitudinal_controller/#pid-control","title":"PID control","text":"For deviations that cannot be handled by FeedForward control, such as model errors, PID control is used to construct a feedback system.
This PID control calculates the target acceleration from the deviation between the current ego-velocity and the target velocity.
This PID logic has a maximum value for the output of each term, which prevents any single term from producing an excessively large command.
Note: by default, the integral term in the control system is not accumulated when the vehicle is stationary. This precautionary measure aims to prevent unintended accumulation of the integral term in scenarios where Autoware assumes the vehicle is engaged, but an external system has immobilized the vehicle to initiate startup procedures.
However, certain situations may arise, such as when the vehicle encounters a depression in the road surface during startup or if the slope compensation is inaccurately estimated (lower than necessary), leading to a failure to initiate motion. To address these scenarios, it is possible to activate error integration even when the vehicle is at rest by setting the enable_integration_at_low_speed
parameter to true.
When enable_integration_at_low_speed
is set to true, the PID controller will initiate integration of the acceleration error after a specified duration defined by the time_threshold_before_pid_integration
parameter has elapsed without the vehicle surpassing a minimum velocity set by the current_vel_threshold_pid_integration
parameter.
The presence of the time_threshold_before_pid_integration
parameter is important for practical PID tuning. Integrating the error when the vehicle is stationary or at low speed can complicate PID tuning. This parameter effectively introduces a delay before the integral part becomes active, preventing it from kicking in immediately. This delay allows for more controlled and effective tuning of the PID controller.
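As a rough sketch, the gating described above can be pictured as follows (function and variable names are assumptions based on the parameters above, not the actual implementation):

```cpp
#include <cmath>

// A sketch of the integration gating under assumed names; the real logic
// lives inside the PID velocity controller.
bool isIntegrationEnabled(
  const double current_vel, const double low_speed_duration,
  const bool enable_integration_at_low_speed,
  const double current_vel_threshold_pid_integration,
  const double time_threshold_before_pid_integration)
{
  // Above the velocity threshold, the error is always integrated.
  if (std::abs(current_vel) > current_vel_threshold_pid_integration) {
    return true;
  }
  // At low speed, integrate only if enabled and the delay has elapsed.
  return enable_integration_at_low_speed &&
         low_speed_duration > time_threshold_before_pid_integration;
}
```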
At present, PID control is implemented from the viewpoint of trade-off between development/maintenance cost and performance. This may be replaced by a higher performance controller (adaptive control or robust control) in future development.
"},{"location":"control/pid_longitudinal_controller/#time-delay-compensation","title":"Time delay compensation","text":"At high speeds, the delay of actuator systems such as gas pedals and brakes has a significant impact on driving accuracy. Depending on the actuating principle of the vehicle, the mechanism that physically controls the gas pedal and brake typically has a delay of about a hundred millisecond.
In this controller, the predicted ego-velocity and the target velocity after the delay time are calculated and used for the feedback to address the time delay problem.
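A minimal sketch of the idea, assuming a constant-acceleration prediction model (illustrative names only, not the actual implementation):

```cpp
// The controller feeds back on the state predicted after the actuator delay
// rather than on the current state. Constant acceleration is assumed here.
double predictVelocityAfterDelay(
  const double current_vel, const double current_acc, const double delay_time)
{
  return current_vel + current_acc * delay_time;
}
```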
"},{"location":"control/pid_longitudinal_controller/#slope-compensation_1","title":"Slope compensation","text":"Based on the slope information, a compensation term is added to the target acceleration.
There are two sources of the slope information, which can be switched by a parameter.
Set the following from the `controller_node`:

- `autoware_auto_planning_msgs/Trajectory`: reference trajectory to follow.
- `nav_msgs/Odometry`: current odometry

Return `LongitudinalOutput`, which contains the following, to the controller node:

- `autoware_auto_control_msgs/LongitudinalCommand`: command to control the longitudinal motion of the vehicle. It contains the target velocity and target acceleration.

The `PIDController`
class is straightforward to use. First, gains and limits must be set (using `setGains()` and `setLimits()`) for the proportional (P), integral (I), and derivative (D) components. Then, the output can be calculated by providing the current error and time step duration to the `calculate()` function.
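A hedged usage sketch based on this description; the exact signatures in the package may differ:

```cpp
// Illustrative usage only: the argument order of setLimits() and the return
// type of calculate() are assumptions, not the package's confirmed API.
PIDController pid;
pid.setGains(1.0, 0.1, 0.0);                    // kp, ki, kd
pid.setLimits(1.0, -1.0, 0.3, -0.3, 0.0, 0.0);  // assumed per-term max/min outputs
const double error = target_vel - current_vel;  // velocity deviation
const double dt = 0.03;                         // time step [s]
const double acc_cmd = pid.calculate(error, dt);
```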
The default parameters defined in param/lateral_controller_defaults.param.yaml
are adjusted to the AutonomouStuff Lexus RX 450h for under 40 km/h driving.
| Name | Type | Description | Default value |
| ---- | ---- | ----------- | ------------- |
| `enable_overshoot_emergency` | bool | flag to transition to EMERGENCY when the vehicle overshoots the stop point. See `emergency_state_overshoot_stop_dist`. | true |
| `enable_large_tracking_error_emergency` | bool | flag to enable transition to EMERGENCY when the closest trajectory point search is failed due to a large deviation between trajectory and ego pose. | true |
| `enable_slope_compensation` | bool | flag to modify output acceleration for slope compensation. The source of the slope angle can be selected from ego-pose or trajectory angle. See `use_trajectory_for_pitch_calculation`. | true |
| `enable_brake_keeping_before_stop` | bool | flag to keep a certain acceleration during DRIVE state before the ego stops. See Brake keeping. | false |
| `enable_keep_stopped_until_steer_convergence` | bool | flag to keep the stopped condition until the steer converges. | true |
| `max_acc` | double | max value of output acceleration [m/s^2] | 3.0 |
| `min_acc` | double | min value of output acceleration [m/s^2] | -5.0 |
| `max_jerk` | double | max value of jerk of output acceleration [m/s^3] | 2.0 |
| `min_jerk` | double | min value of jerk of output acceleration [m/s^3] | -5.0 |
| `use_trajectory_for_pitch_calculation` | bool | If true, the slope is estimated from the trajectory z-level. Otherwise the pitch angle of the ego pose is used. | false |
| `lpf_pitch_gain` | double | gain of low-pass filter for pitch estimation | 0.95 |
| `max_pitch_rad` | double | max value of estimated pitch [rad] | 0.1 |
| `min_pitch_rad` | double | min value of estimated pitch [rad] | -0.1 |

"},{"location":"control/pid_longitudinal_controller/#state-transition","title":"State transition","text":"

| Name | Type | Description | Default value |
| ---- | ---- | ----------- | ------------- |
| `drive_state_stop_dist` | double | The state will transit to DRIVE when the distance to the stop point is longer than `drive_state_stop_dist` + `drive_state_offset_stop_dist` [m] | 0.5 |
| `drive_state_offset_stop_dist` | double | The state will transit to DRIVE when the distance to the stop point is longer than `drive_state_stop_dist` + `drive_state_offset_stop_dist` [m] | 1.0 |
| `stopping_state_stop_dist` | double | The state will transit to STOPPING when the distance to the stop point is shorter than `stopping_state_stop_dist` [m] | 0.5 |
| `stopped_state_entry_vel` | double | threshold of the ego velocity in transition to the STOPPED state [m/s] | 0.01 |
| `stopped_state_entry_acc` | double | threshold of the ego acceleration in transition to the STOPPED state [m/s^2] | 0.1 |
| `emergency_state_overshoot_stop_dist` | double | If `enable_overshoot_emergency` is true and the ego is `emergency_state_overshoot_stop_dist`-meter ahead of the stop point, the state will transit to EMERGENCY. [m] | 1.5 |
| `emergency_state_traj_trans_dev` | double | If the ego's position is `emergency_state_traj_trans_dev` meter away from the nearest trajectory point, the state will transit to EMERGENCY. [m] | 3.0 |
| `emergency_state_traj_rot_dev` | double | If the ego's orientation is `emergency_state_traj_rot_dev` rad away from the nearest trajectory point orientation, the state will transit to EMERGENCY. [rad] | 0.784 |

"},{"location":"control/pid_longitudinal_controller/#drive-parameter","title":"DRIVE Parameter","text":"

| Name | Type | Description | Default value |
| ---- | ---- | ----------- | ------------- |
| `kp` | double | p gain for longitudinal control | 1.0 |
| `ki` | double | i gain for longitudinal control | 0.1 |
| `kd` | double | d gain for longitudinal control | 0.0 |
| `max_out` | double | max value of PID's output acceleration during DRIVE state [m/s^2] | 1.0 |
| `min_out` | double | min value of PID's output acceleration during DRIVE state [m/s^2] | -1.0 |
| `max_p_effort` | double | max value of acceleration with p gain | 1.0 |
| `min_p_effort` | double | min value of acceleration with p gain | -1.0 |
| `max_i_effort` | double | max value of acceleration with i gain | 0.3 |
| `min_i_effort` | double | min value of acceleration with i gain | -0.3 |
| `max_d_effort` | double | max value of acceleration with d gain | 0.0 |
| `min_d_effort` | double | min value of acceleration with d gain | 0.0 |
| `lpf_vel_error_gain` | double | gain of low-pass filter for velocity error | 0.9 |
| `enable_integration_at_low_speed` | bool | Whether to enable integration of acceleration errors when the vehicle speed is lower than `current_vel_threshold_pid_integration` or not. | |
| `current_vel_threshold_pid_integration` | double | Velocity error is integrated for the I-term only when the absolute value of current velocity is larger than this parameter. [m/s] | |
| `time_threshold_before_pid_integration` | double | How much time without the vehicle moving must pass to enable PID error integration. [s] | 5.0 |
| `brake_keeping_acc` | double | If `enable_brake_keeping_before_stop` is true, a certain acceleration is kept during DRIVE state before the ego stops [m/s^2]. See Brake keeping. | 0.2 |

"},{"location":"control/pid_longitudinal_controller/#stopping-parameter-smooth-stop","title":"STOPPING Parameter (smooth stop)","text":"Smooth stop is enabled if `enable_smooth_stop`
is true. In smooth stop, strong acceleration (`strong_acc`) will be output first to decrease the ego velocity. Then weak acceleration (`weak_acc`) will be output to stop smoothly by decreasing the ego jerk. If the ego does not stop in a certain time, or is some meters over the stop point, weak acceleration to stop right now (`weak_stop_acc`) will be output. If the ego is still running, strong acceleration (`strong_stop_acc`) to stop right now will be output.

| Name | Type | Description | Default value |
| ---- | ---- | ----------- | ------------- |
| `smooth_stop_strong_stop_acc` | double | Strong acceleration [m/s^2] output when the ego is `smooth_stop_strong_stop_dist`-meter over the stop point. | -3.4 |
| `smooth_stop_max_fast_vel` | double | max fast vel to judge the ego is running fast [m/s]. If the ego is running fast, strong acceleration will be output. | 0.5 |
| `smooth_stop_min_running_vel` | double | min ego velocity to judge if the ego is running or not [m/s] | 0.01 |
| `smooth_stop_min_running_acc` | double | min ego acceleration to judge if the ego is running or not [m/s^2] | 0.01 |
| `smooth_stop_weak_stop_time` | double | max time to output weak acceleration [s]. After this, strong acceleration will be output. | 0.8 |
| `smooth_stop_weak_stop_dist` | double | Weak acceleration will be output when the ego is `smooth_stop_weak_stop_dist`-meter before the stop point. [m] | -0.3 |
| `smooth_stop_strong_stop_dist` | double | Strong acceleration will be output when the ego is `smooth_stop_strong_stop_dist`-meter over the stop point. [m] | -0.5 |

"},{"location":"control/pid_longitudinal_controller/#stopped-parameter","title":"STOPPED Parameter","text":"The `STOPPED`
state assumes that the vehicle is completely stopped with the brakes fully applied. Therefore, stopped_acc
should be set to a value that allows the vehicle to apply the strongest possible brake. If stopped_acc
is not sufficiently low, there is a possibility of sliding down on steep slopes.
The Predicted Path Checker package is designed for autonomous vehicles to check the predicted path generated by control modules. It handles potential collisions that the planning module might not be able to handle and that are within the braking distance. In case of a collision within the braking distance, the package sends a diagnostic message labeled "ERROR" to alert the system to trigger an emergency, and in the case of collisions outside the reference trajectory, it sends a pause request to the pause interface to make the vehicle stop.
"},{"location":"control/predicted_path_checker/#algorithm","title":"Algorithm","text":"The package algorithm evaluates the predicted trajectory against the reference trajectory and the predicted objects in the environment. It checks for potential collisions and, if necessary, generates an appropriate response to avoid them ( emergency or pause request).
"},{"location":"control/predicted_path_checker/#inner-algorithm","title":"Inner Algorithm","text":"cutTrajectory() -> It cuts the predicted trajectory with input length. Length is calculated by multiplying the velocity of ego vehicle with \"trajectory_check_time\" parameter and \"min_trajectory_length\".
filterObstacles() -> It filters the predicted objects in the environment. It filters the objects which are not in front of the vehicle and far away from predicted trajectory.
checkTrajectoryForCollision() -> It checks the predicted trajectory for collision with the predicted objects. It calculates both the polygons of the trajectory points and of the predicted objects, and checks for intersection of the polygons. If there is an intersection, it calculates and returns the nearest collision point of the polygon and the predicted object. It also checks the history of predicted objects that previously intersected with the footprint, to avoid unexpected behaviors. The predicted objects history stores an object if it was detected less than "chattering_threshold" seconds ago.
If the "enable_z_axis_obstacle_filtering" parameter is set to true, it filters the predicted objects along the Z-axis using "z_axis_filtering_buffer". If the object does not intersect with the Z-axis range, it is filtered out.
calculateProjectedVelAndAcc() -> It calculates the projected velocity and acceleration of the predicted object on predicted trajectory's collision point's axes.
isInBrakeDistance() -> It checks if the stop point is within the braking distance. It gets the relative velocity and acceleration of the ego vehicle with respect to the predicted object, calculates the braking distance, and returns true if the point is within it (see the sketch after this list).
isItDiscretePoint() -> It checks if the stop point on the predicted trajectory is a discrete point or not. If it is not a discrete point, planning should handle the stop.
isThereStopPointOnRefTrajectory() -> It checks if there is a stop point on the reference trajectory. If there is a stop point before the stop index, it returns true. Otherwise, it returns false, and the node calls the pause interface to make the vehicle stop.
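The braking-distance check mentioned above can be sketched as follows, using the node parameters `max_deceleration` and `delay_time` (an illustrative reading of the description, not the exact implementation):

```cpp
#include <cmath>

// A hedged sketch: reaction distance during the delay plus the kinematic
// braking distance at the maximum deceleration.
bool isInBrakeDistance(
  const double dist_to_collision, const double relative_vel,
  const double max_deceleration, const double delay_time)
{
  if (relative_vel <= 0.0) {
    return false;  // the ego is not closing in on the object
  }
  const double reaction_dist = relative_vel * delay_time;
  const double braking_dist = (relative_vel * relative_vel) / (2.0 * max_deceleration);
  return dist_to_collision < reaction_dist + braking_dist;
}
```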
"},{"location":"control/predicted_path_checker/#inputs","title":"Inputs","text":"Name Type Description~/input/reference_trajectory
autoware_auto_planning_msgs::msg::Trajectory
Reference trajectory ~/input/predicted_trajectory
autoware_auto_planning_msgs::msg::Trajectory
Predicted trajectory ~/input/objects
autoware_auto_perception_msgs::msg::PredictedObject
Dynamic objects in the environment ~/input/odometry
nav_msgs::msg::Odometry
Odometry message of vehicle to get current velocity ~/input/current_accel
geometry_msgs::msg::AccelWithCovarianceStamped
Current acceleration /control/vehicle_cmd_gate/is_paused
tier4_control_msgs::msg::IsPaused
Current pause state of the vehicle"},{"location":"control/predicted_path_checker/#outputs","title":"Outputs","text":"Name Type Description ~/debug/marker
visualization_msgs::msg::MarkerArray
Marker for visualization ~/debug/virtual_wall
visualization_msgs::msg::MarkerArray
Virtual wall marker for visualization /control/vehicle_cmd_gate/set_pause
tier4_control_msgs::srv::SetPause
Pause service to make the vehicle stop /diagnostics
diagnostic_msgs::msg::DiagnosticStatus
Diagnostic status of vehicle"},{"location":"control/predicted_path_checker/#parameters","title":"Parameters","text":""},{"location":"control/predicted_path_checker/#node-parameters","title":"Node Parameters","text":"Name Type Description Default value update_rate
double
The update rate [Hz] 10.0 delay_time
double
The time delay considered for the emergency response [s] 0.17 max_deceleration
double
Max deceleration for ego vehicle to stop [m/s^2] 1.5 resample_interval
double
Interval for resampling trajectory [m] 0.5 stop_margin
double
The stopping margin [m] 0.5 ego_nearest_dist_threshold
double
The nearest distance threshold for ego vehicle [m] 3.0 ego_nearest_yaw_threshold
double
The nearest yaw threshold for ego vehicle [rad] 1.046 min_trajectory_check_length
double
The minimum trajectory check length in meters [m] 1.5 trajectory_check_time
double
The trajectory check time in seconds. [s] 3.0 distinct_point_distance_threshold
double
The distinct point distance threshold [m] 0.3 distinct_point_yaw_threshold
double
The distinct point yaw threshold [deg] 5.0 filtering_distance_threshold
double
It ignores the objects if distance is higher than this [m] 1.5 use_object_prediction
bool
If true, node predicts current pose of the objects wrt delta time [-] true"},{"location":"control/predicted_path_checker/#collision-checker-parameters","title":"Collision Checker Parameters","text":"Name Type Description Default value width_margin
double
The width margin for collision checking [m] 0.2 chattering_threshold
double
The chattering threshold for collision detection [s] 0.2 z_axis_filtering_buffer
double
The Z-axis filtering buffer [m] 0.3 enable_z_axis_obstacle_filtering
bool
A boolean flag indicating if Z-axis obstacle filtering is enabled false"},{"location":"control/pure_pursuit/","title":"Pure Pursuit Controller","text":""},{"location":"control/pure_pursuit/#pure-pursuit-controller","title":"Pure Pursuit Controller","text":"The Pure Pursuit Controller module calculates the steering angle for tracking a desired trajectory using the pure pursuit algorithm. This is used as a lateral controller plugin in the trajectory_follower_node
.
Set the following from the `controller_node`:

- `autoware_auto_planning_msgs/Trajectory`: reference trajectory to follow.
- `nav_msgs/Odometry`: current ego pose and velocity information

Return `LateralOutput`, which contains the following, to the controller node:

- `autoware_auto_control_msgs/AckermannLateralCommand`: target steering angle
- `autoware_auto_planning_msgs/Trajectory`: predicted path for the ego vehicle

`shift_decider` is a module to decide the gear shift from the Ackermann control command.
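As a purely illustrative sketch of what such a decision could look like (one plausible rule, assuming reverse is requested via a negative commanded speed; the actual logic may differ, e.g. by keeping the gear while stopped):

```cpp
#include <autoware_auto_control_msgs/msg/ackermann_control_command.hpp>
#include <autoware_auto_vehicle_msgs/msg/gear_command.hpp>

using autoware_auto_control_msgs::msg::AckermannControlCommand;
using autoware_auto_vehicle_msgs::msg::GearCommand;

// Hypothetical helper: pick DRIVE for non-negative commanded speed,
// REVERSE otherwise.
GearCommand decideGear(const AckermannControlCommand & cmd)
{
  GearCommand gear;
  gear.command =
    (cmd.longitudinal.speed >= 0.0) ? GearCommand::DRIVE : GearCommand::REVERSE;
  return gear;
}
```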
~/input/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
Control command for vehicle."},{"location":"control/shift_decider/#output","title":"Output","text":"Name Type Description ~output/gear_cmd
autoware_auto_vehicle_msgs::msg::GearCommand
Gear for drive forward / backward."},{"location":"control/shift_decider/#parameters","title":"Parameters","text":"none.
"},{"location":"control/shift_decider/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"control/trajectory_follower_base/","title":"Trajectory Follower","text":""},{"location":"control/trajectory_follower_base/#trajectory-follower","title":"Trajectory Follower","text":"This is the design document for the trajectory_follower
package.
This package provides the interface of longitudinal and lateral controllers used by the node of the trajectory_follower_node
package. We can implement a detailed controller by deriving the longitudinal and lateral base interfaces.
There are lateral and longitudinal base interface classes and each algorithm inherits from this class to implement. The interface class has the following base functions.
- `isReady()`: Check if the control is ready to compute.
- `run()`: Compute control commands and return to Trajectory Follower Nodes. This must be implemented by inherited algorithms.
- `sync()`: Input the result of running the other controller.

See the Design of Trajectory Follower Nodes for how these functions work in the node.
"},{"location":"control/trajectory_follower_base/#separated-lateral-steering-and-longitudinal-velocity-controls","title":"Separated lateral (steering) and longitudinal (velocity) controls","text":"This longitudinal controller assumes that the roles of lateral and longitudinal control are separated as follows.
Ideally, dealing with the lateral and longitudinal control as a single mixed problem can achieve high performance. In contrast, there are two reasons to provide velocity controller as a stand-alone function, described below.
"},{"location":"control/trajectory_follower_base/#complex-requirements-for-longitudinal-motion","title":"Complex requirements for longitudinal motion","text":"The longitudinal vehicle behavior that humans expect is difficult to express in a single logic. For example, the expected behavior just before stopping differs depending on whether the ego-position is ahead/behind of the stop line, or whether the current speed is higher/lower than the target speed to achieve a human-like movement.
In addition, some vehicles have difficulty measuring the ego-speed at extremely low speeds. In such cases, a configuration that can improve the functionality of the longitudinal control without affecting the lateral control is important.
There are many characteristics and needs that are unique to longitudinal control. Designing them separately from the lateral control keeps the modules less coupled and improves maintainability.
"},{"location":"control/trajectory_follower_base/#nonlinear-coupling-of-lateral-and-longitudinal-motion","title":"Nonlinear coupling of lateral and longitudinal motion","text":"The lat-lon mixed control problem is very complex and uses nonlinear optimization to achieve high performance. Since it is difficult to guarantee the convergence of the nonlinear optimization, a simple control logic is also necessary for development.
Also, the benefits of simultaneous longitudinal and lateral control are small if the vehicle doesn't move at high speed.
"},{"location":"control/trajectory_follower_base/#related-issues","title":"Related issues","text":""},{"location":"control/trajectory_follower_node/","title":"Trajectory Follower Nodes","text":""},{"location":"control/trajectory_follower_node/#trajectory-follower-nodes","title":"Trajectory Follower Nodes","text":""},{"location":"control/trajectory_follower_node/#purpose","title":"Purpose","text":"Generate control commands to follow a given Trajectory.
"},{"location":"control/trajectory_follower_node/#design","title":"Design","text":"This is a node of the functionalities implemented in the controller class derived from trajectory_follower_base package. It has instances of those functionalities, gives them input data to perform calculations, and publishes control commands.
By default, the controller instance with the Controller
class as follows is used.
The process flow of Controller
class is as follows.
// 1. create input data\nconst auto input_data = createInputData(*get_clock());\nif (!input_data) {\nreturn;\n}\n\n// 2. check if controllers are ready\nconst bool is_lat_ready = lateral_controller_->isReady(*input_data);\nconst bool is_lon_ready = longitudinal_controller_->isReady(*input_data);\nif (!is_lat_ready || !is_lon_ready) {\nreturn;\n}\n\n// 3. run controllers\nconst auto lat_out = lateral_controller_->run(*input_data);\nconst auto lon_out = longitudinal_controller_->run(*input_data);\n\n// 4. sync with each other controllers\nlongitudinal_controller_->sync(lat_out.sync_data);\nlateral_controller_->sync(lon_out.sync_data);\n\n// 5. publish control command\ncontrol_cmd_pub_->publish(out);\n
Giving the longitudinal controller information about steer convergence allows it to control steer when stopped if following parameters are true
keep_steer_control_until_converged
enable_keep_stopped_until_steer_convergence
autoware_auto_planning_msgs/Trajectory
: reference trajectory to follow.nav_msgs/Odometry
: current odometryautoware_auto_vehicle_msgs/SteeringReport
current steeringautoware_auto_control_msgs/AckermannControlCommand
: message containing both lateral and longitudinal commands.ctrl_period
: control commands publishing periodtimeout_thr_sec
: duration in second after which input messages are discarded.AckermannControlCommand
if the following two conditions are met.timeout_thr_sec
.lateral_controller_mode
: mpc
or pure_pursuit
PID
for longitudinal controller)Debug information are published by the lateral and longitudinal controller using tier4_debug_msgs/Float32MultiArrayStamped
messages.
A configuration file for PlotJuggler is provided in the config
folder which, when loaded, allow to automatically subscribe and visualize information useful for debugging.
In addition, the predicted MPC trajectory is published on topic output/lateral/predicted_trajectory
and can be visualized in Rviz.
Provide a base trajectory follower code that is simple and flexible to use. This node calculates control command based on a reference trajectory and an ego vehicle kinematics.
"},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/#design","title":"Design","text":""},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/#inputs-outputs","title":"Inputs / Outputs","text":"Inputs
input/reference_trajectory
[autoware_auto_planning_msgs::msg::Trajectory] : reference trajectory to follow.input/current_kinematic_state
[nav_msgs::msg::Odometry] : current state of the vehicle (position, velocity, etc).output/control_cmd
[autoware_auto_control_msgs::msg::AckermannControlCommand] : generated control command.use_external_target_vel
is true. 0.0 lateral_deviation float target lateral deviation when following. 0.0"},{"location":"control/vehicle_cmd_gate/","title":"vehicle_cmd_gate","text":""},{"location":"control/vehicle_cmd_gate/#vehicle_cmd_gate","title":"vehicle_cmd_gate","text":""},{"location":"control/vehicle_cmd_gate/#purpose","title":"Purpose","text":"vehicle_cmd_gate
is the package to get information from emergency handler, planning module, external controller, and send a msg to vehicle.
~/input/steering
autoware_auto_vehicle_msgs::msg::SteeringReport
steering status ~/input/auto/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
command for lateral and longitudinal velocity from planning module ~/input/auto/turn_indicators_cmd
autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand
turn indicators command from planning module ~/input/auto/hazard_lights_cmd
autoware_auto_vehicle_msgs::msg::HazardLightsCommand
hazard lights command from planning module ~/input/auto/gear_cmd
autoware_auto_vehicle_msgs::msg::GearCommand
gear command from planning module ~/input/external/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
command for lateral and longitudinal velocity from external ~/input/external/turn_indicators_cmd
autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand
turn indicators command from external ~/input/external/hazard_lights_cmd
autoware_auto_vehicle_msgs::msg::HazardLightsCommand
hazard lights command from external ~/input/external/gear_cmd
autoware_auto_vehicle_msgs::msg::GearCommand
gear command from external ~/input/external_emergency_stop_heartbeat
tier4_external_api_msgs::msg::Heartbeat
heartbeat ~/input/gate_mode
tier4_control_msgs::msg::GateMode
gate mode (AUTO or EXTERNAL) ~/input/emergency/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
command for lateral and longitudinal velocity from emergency handler ~/input/emergency/hazard_lights_cmd
autoware_auto_vehicle_msgs::msg::HazardLightsCommand
hazard lights command from emergency handler ~/input/emergency/gear_cmd
autoware_auto_vehicle_msgs::msg::GearCommand
gear command from emergency handler ~/input/engage
autoware_auto_vehicle_msgs::msg::Engage
engage signal ~/input/operation_mode
autoware_adapi_v1_msgs::msg::OperationModeState
operation mode of Autoware"},{"location":"control/vehicle_cmd_gate/#output","title":"Output","text":"Name Type Description ~/output/vehicle_cmd_emergency
autoware_auto_system_msgs::msg::EmergencyState
emergency state which was originally in vehicle command ~/output/command/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
command for lateral and longitudinal velocity to vehicle ~/output/command/turn_indicators_cmd
autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand
turn indicators command to vehicle ~/output/command/hazard_lights_cmd
autoware_auto_vehicle_msgs::msg::HazardLightsCommand
hazard lights command to vehicle ~/output/command/gear_cmd
autoware_auto_vehicle_msgs::msg::GearCommand
gear command to vehicle ~/output/gate_mode
tier4_control_msgs::msg::GateMode
gate mode (AUTO or EXTERNAL) ~/output/engage
autoware_auto_vehicle_msgs::msg::Engage
engage signal ~/output/external_emergency
tier4_external_api_msgs::msg::Emergency
external emergency signal ~/output/operation_mode
tier4_system_msgs::msg::OperationMode
current operation mode of the vehicle_cmd_gate"},{"location":"control/vehicle_cmd_gate/#parameters","title":"Parameters","text":"Parameter Type Description update_period
double update period use_emergency_handling
bool true when emergency handler is used check_external_emergency_heartbeat
bool true when checking heartbeat for emergency stop system_emergency_heartbeat_timeout
double timeout for system emergency external_emergency_stop_heartbeat_timeout
double timeout for external emergency filter_activated_count_threshold
int threshold for filter activation filter_activated_velocity_threshold
double velocity threshold for filter activation stop_hold_acceleration
double longitudinal acceleration cmd when vehicle should stop emergency_acceleration
double longitudinal acceleration cmd when vehicle stop with emergency moderate_stop_service_acceleration
double longitudinal acceleration cmd when vehicle stop with moderate stop service nominal.vel_lim
double limit of longitudinal velocity (activated in AUTONOMOUS operation mode) nominal.reference_speed_point
velocity point used as a reference when calculate control command limit (activated in AUTONOMOUS operation mode). The size of this array must be equivalent to the size of the limit array. nominal.lon_acc_lim
array of limits of longitudinal acceleration (activated in AUTONOMOUS operation mode) nominal.lon_jerk_lim
array of limits of longitudinal jerk (activated in AUTONOMOUS operation mode) nominal.lat_acc_lim
array of limits of lateral acceleration (activated in AUTONOMOUS operation mode) nominal.lat_jerk_lim
array of limits of lateral jerk (activated in AUTONOMOUS operation mode) on_transition.vel_lim
double limit of longitudinal velocity (activated in TRANSITION operation mode) on_transition.reference_speed_point
velocity point used as a reference when calculate control command limit (activated in TRANSITION operation mode). The size of this array must be equivalent to the size of the limit array. on_transition.lon_acc_lim
array of limits of longitudinal acceleration (activated in TRANSITION operation mode) on_transition.lon_jerk_lim
array of limits of longitudinal jerk (activated in TRANSITION operation mode) on_transition.lat_acc_lim
array of limits of lateral acceleration (activated in TRANSITION operation mode) on_transition.lat_jerk_lim
array of limits of lateral jerk (activated in TRANSITION operation mode)"},{"location":"control/vehicle_cmd_gate/#filter-function","title":"Filter function","text":"This module incorporates a limitation filter on the control command right before it is published. Primarily for safety, this filter restricts the output range of all control commands published through Autoware.
The limitation values are calculated based on the 1D interpolation of the limitation array parameters. Here is an example for the longitudinal jerk limit.
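As an illustration of the 1D interpolation (a sketch assuming `reference_speed_point` supplies the x-axis and the corresponding limit array the y-axis; not the actual vehicle_cmd_gate code):

```cpp
#include <algorithm>
#include <vector>

// Linear interpolation of a limit at the current velocity, clamped at both
// ends of the reference speed points.
double interpolateLimit(
  const std::vector<double> & speed_points, const std::vector<double> & limits,
  const double velocity)
{
  if (velocity <= speed_points.front()) return limits.front();
  if (velocity >= speed_points.back()) return limits.back();
  const auto upper = std::upper_bound(speed_points.begin(), speed_points.end(), velocity);
  const auto i = std::distance(speed_points.begin(), upper);
  const double ratio =
    (velocity - speed_points[i - 1]) / (speed_points[i] - speed_points[i - 1]);
  return limits[i - 1] + ratio * (limits[i] - limits[i - 1]);
}
```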
Notation: this filter is not designed to enhance ride comfort. Its main purpose is to detect and remove abnormal values in the control outputs during the final stages of Autoware. If this filter is frequently active, it implies the control module may need tuning. If you're aiming to smoothen the signal via a low-pass filter or similar techniques, that should be handled in the control module. When the filter is activated, the topic ~/is_filter_activated
is published.
The parameter check_external_emergency_heartbeat
(true by default) enables an emergency stop request from external modules. This feature requires a ~/input/external_emergency_stop_heartbeat
topic for health monitoring of the external module, and the vehicle_cmd_gate module will not start without the topic. The check_external_emergency_heartbeat
parameter must be false when the \"external emergency stop\" function is not used.
This package provides a node to convert diagnostic_msgs::msg::DiagnosticArray
messages into tier4_simulation_msgs::msg::UserDefinedValue
messages.
The node subscribes to all topics listed in the parameters and assumes they publish DiagnosticArray
messages. Each time such message is received, it is converted into as many UserDefinedValue
messages as the number of KeyValue
objects. The format of the output topic is detailed in the output section.
The node listens to DiagnosticArray
messages on the topics specified in the parameters.
The node outputs UserDefinedValue
messages that are converted from the received DiagnosticArray
.
The name of the output topics are generated from the corresponding input topic, the name of the diagnostic status, and the key of the diagnostic. For example, we might listen to topic /diagnostic_topic
and receive a DiagnosticArray
with 2 status:

- a status with `name: "x"` and keys `a` and `b`
- a status with `name: "y"` and keys `a` and `c`

The resulting topics to publish the UserDefinedValue are as follows:

- `/metrics_x_a`
- `/metrics_x_b`
- `/metrics_y_a`
- `/metrics_y_c`

| Name | Type | Description |
| ---- | ---- | ----------- |
| `diagnostic_topics` | list of string | list of DiagnosticArray topics to convert to UserDefinedValue |

"},{"location":"evaluator/diagnostic_converter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Values in the KeyValue
objects of a DiagnosticStatus
are assumed to be of type double
.
TBD
"},{"location":"evaluator/localization_evaluator/","title":"Localization Evaluator","text":""},{"location":"evaluator/localization_evaluator/#localization-evaluator","title":"Localization Evaluator","text":"TBD
"},{"location":"evaluator/planning_evaluator/","title":"Planning Evaluator","text":""},{"location":"evaluator/planning_evaluator/#planning-evaluator","title":"Planning Evaluator","text":""},{"location":"evaluator/planning_evaluator/#purpose","title":"Purpose","text":"This package provides nodes that generate metrics to evaluate the quality of planning and control.
"},{"location":"evaluator/planning_evaluator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The evaluation node calculates metrics each time it receives a trajectory T(0)
. Metrics are calculated using the following information:
T(0)
itself.T(-1)
.T(0)
.These information are maintained by an instance of class MetricsCalculator
which is also responsible for calculating metrics.
Each metric is calculated using a Stat
instance which contains the minimum, maximum, and mean values calculated for the metric as well as the number of values measured.
All possible metrics are defined in the Metric
enumeration defined include/planning_evaluator/metrics/metric.hpp
. This file also defines conversions from/to string as well as human readable descriptions to be used as header of the output file.
The MetricsCalculator
is responsible for calculating metric statistics through calls to function:
Stat<double> MetricsCalculator::calculate(const Metric metric, const Trajectory & traj) const;\n
Adding a new metric `M` requires the following steps:

- `metrics/metric.hpp`: add `M` to the `enum`, to the from/to string conversion maps, and to the description map.
- `metrics_calculator.cpp`: add `M` to the `switch/case` statement of the `calculate` function.
- Add `M` to the `selected_metrics`
parameters.~/input/trajectory
autoware_auto_planning_msgs::msg::Trajectory
Main trajectory to evaluate ~/input/reference_trajectory
autoware_auto_planning_msgs::msg::Trajectory
Reference trajectory to use for deviation metrics ~/input/objects
autoware_auto_perception_msgs::msg::PredictedObjects
Obstacles"},{"location":"evaluator/planning_evaluator/#outputs","title":"Outputs","text":"Each metric is published on a topic named after the metric name.
Name Type Description~/metrics
diagnostic_msgs::msg::DiagnosticArray
DiagnosticArray with a DiagnosticStatus for each metric When shut down, the evaluation node writes the values of the metrics measured during its lifetime to a file as specified by the output_file
parameter.
output_file
string
file used to write metrics ego_frame
string
frame used for the ego pose selected_metrics
List metrics to measure and publish trajectory.min_point_dist_m
double
minimum distance between two successive points to use for angle calculation trajectory.lookahead.max_dist_m
double
maximum distance from ego along the trajectory to use for calculation trajectory.lookahead.max_time_m
double
maximum time ahead of ego along the trajectory to use for calculation obstacle.dist_thr_m
double
distance between ego and the obstacle below which a collision is considered"},{"location":"evaluator/planning_evaluator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"There is a strong assumption that when receiving a trajectory T(0)
, it has been generated using the last received reference trajectory and objects. This can be wrong if a new reference trajectory or objects are published while T(0)
is being calculated.
Precision is currently limited by the resolution of the trajectories. It is possible to interpolate the trajectory and reference trajectory to increase precision but would make computation significantly more expensive.
"},{"location":"evaluator/planning_evaluator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"Route
or Path
messages as reference trajectory.min
and max
metric values. For now only the mean
value is published.motion_evaluator_node
.This plugin panel to visualize planning_evaluator
output.
/diagnostic/planning_evaluator/metrics
diagnostic_msgs::msg::DiagnosticArray
Subscribe planning_evaluator
output"},{"location":"evaluator/tier4_metrics_rviz_plugin/#howtouse","title":"HowToUse","text":"This package contains launch files that run nodes to convert Autoware internal topics into consistent API used by external software (e.g., fleet management system, simulator).
"},{"location":"launch/tier4_autoware_api_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
You can include as follows in *.launch.xml
to use autoware_api.launch.xml
.
<include file=\"$(find-pkg-share tier4_autoware_api_launch)/launch/autoware_api.launch.xml\"/>\n
"},{"location":"launch/tier4_autoware_api_launch/#notes","title":"Notes","text":"For reducing processing load, we use the Component feature in ROS 2 (similar to Nodelet in ROS 1 )
"},{"location":"launch/tier4_control_launch/","title":"tier4_control_launch","text":""},{"location":"launch/tier4_control_launch/#tier4_control_launch","title":"tier4_control_launch","text":""},{"location":"launch/tier4_control_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_control_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
You can include as follows in *.launch.xml
to use control.launch.py
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of planning.launch.xml
.
<include file=\"$(find-pkg-share tier4_control_launch)/launch/control.launch.py\">\n<!-- options for lateral_controller_mode: mpc_follower, pure_pursuit -->\n<!-- Parameter files -->\n<arg name=\"FOO_NODE_param_path\" value=\"...\"/>\n<arg name=\"BAR_NODE_param_path\" value=\"...\"/>\n...\n <arg name=\"lateral_controller_mode\" value=\"mpc_follower\" />\n</include>\n
"},{"location":"launch/tier4_control_launch/#notes","title":"Notes","text":"For reducing processing load, we use the Component feature in ROS 2 (similar to Nodelet in ROS 1 )
"},{"location":"launch/tier4_localization_launch/","title":"tier4_localization_launch","text":""},{"location":"launch/tier4_localization_launch/#tier4_localization_launch","title":"tier4_localization_launch","text":""},{"location":"launch/tier4_localization_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_localization_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
Include localization.launch.xml
in other launch files as follows.
You can select which methods in localization to launch as pose_estimator
or twist_estimator
by specifying pose_source
and twist_source
.
In addition, you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of localization.launch.xml
.
<include file=\"$(find-pkg-share tier4_localization_launch)/launch/localization.launch.xml\">\n<!-- Localization methods -->\n<arg name=\"pose_source\" value=\"...\"/>\n<arg name=\"twist_source\" value=\"...\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n </include>\n
"},{"location":"launch/tier4_map_launch/","title":"tier4_map_launch","text":""},{"location":"launch/tier4_map_launch/#tier4_map_launch","title":"tier4_map_launch","text":""},{"location":"launch/tier4_map_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_map_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
You can include as follows in *.launch.xml
to use map.launch.py
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of map.launch.xml
.
<arg name=\"map_path\" description=\"point cloud and lanelet2 map directory path\"/>\n<arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n<arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n\n<include file=\"$(find-pkg-share tier4_map_launch)/launch/map.launch.py\">\n<arg name=\"lanelet2_map_path\" value=\"$(var map_path)/$(var lanelet2_map_file)\" />\n<arg name=\"pointcloud_map_path\" value=\"$(var map_path)/$(var pointcloud_map_file)\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n</include>\n
"},{"location":"launch/tier4_map_launch/#notes","title":"Notes","text":"For reducing processing load, we use the Component feature in ROS 2 (similar to Nodelet in ROS 1 )
"},{"location":"launch/tier4_perception_launch/","title":"tier4_perception_launch","text":""},{"location":"launch/tier4_perception_launch/#tier4_perception_launch","title":"tier4_perception_launch","text":""},{"location":"launch/tier4_perception_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_perception_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
You can include as follows in *.launch.xml
to use perception.launch.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of perception.launch.xml
.
<include file=\"$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml\">\n<!-- options for mode: camera_lidar_fusion, lidar, camera -->\n<arg name=\"mode\" value=\"lidar\" />\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n </include>\n
"},{"location":"launch/tier4_planning_launch/","title":"tier4_planning_launch","text":""},{"location":"launch/tier4_planning_launch/#tier4_planning_launch","title":"tier4_planning_launch","text":""},{"location":"launch/tier4_planning_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_planning_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of planning.launch.xml
.
<include file=\"$(find-pkg-share tier4_planning_launch)/launch/planning.launch.xml\">\n<!-- Parameter files -->\n<arg name=\"FOO_NODE_param_path\" value=\"...\"/>\n<arg name=\"BAR_NODE_param_path\" value=\"...\"/>\n...\n</include>\n
"},{"location":"launch/tier4_sensing_launch/","title":"tier4_sensing_launch","text":""},{"location":"launch/tier4_sensing_launch/#tier4_sensing_launch","title":"tier4_sensing_launch","text":""},{"location":"launch/tier4_sensing_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_sensing_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
You can include as follows in *.launch.xml
to use sensing.launch.xml
.
<include file=\"$(find-pkg-share tier4_sensing_launch)/launch/sensing.launch.xml\">\n<arg name=\"launch_driver\" value=\"true\"/>\n<arg name=\"sensor_model\" value=\"$(var sensor_model)\"/>\n<arg name=\"vehicle_param_file\" value=\"$(find-pkg-share $(var vehicle_model)_description)/config/vehicle_info.param.yaml\"/>\n<arg name=\"vehicle_mirror_param_file\" value=\"$(find-pkg-share $(var vehicle_model)_description)/config/mirror.param.yaml\"/>\n</include>\n
"},{"location":"launch/tier4_sensing_launch/#launch-directory-structure","title":"Launch Directory Structure","text":"This package finds sensor settings of specified sensor model in launch
.
launch/\n\u251c\u2500\u2500 aip_x1 # Sensor model name\n\u2502 \u251c\u2500\u2500 camera.launch.xml # Camera\n\u2502 \u251c\u2500\u2500 gnss.launch.xml # GNSS\n\u2502 \u251c\u2500\u2500 imu.launch.xml # IMU\n\u2502 \u251c\u2500\u2500 lidar.launch.xml # LiDAR\n\u2502 \u2514\u2500\u2500 pointcloud_preprocessor.launch.py # for preprocessing pointcloud\n...\n
"},{"location":"launch/tier4_sensing_launch/#notes","title":"Notes","text":"This package finds settings with variables.
ex.)
<include file=\"$(find-pkg-share tier4_sensing_launch)/launch/$(var sensor_model)/lidar.launch.xml\">\n
"},{"location":"launch/tier4_simulator_launch/","title":"tier4_simulator_launch","text":""},{"location":"launch/tier4_simulator_launch/#tier4_simulator_launch","title":"tier4_simulator_launch","text":""},{"location":"launch/tier4_simulator_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_simulator_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
<include file=\"$(find-pkg-share tier4_simulator_launch)/launch/simulator.launch.xml\">\n<arg name=\"vehicle_info_param_file\" value=\"VEHICLE_INFO_PARAM_FILE\" />\n<arg name=\"vehicle_model\" value=\"VEHICLE_MODEL\"/>\n</include>\n
The simulator model used in simple_planning_simulator is loaded from \"config/simulator_model.param.yaml\" in the \"VEHICLE_MODEL
_description\" package.
Please see <exec_depend>
in package.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of system.launch.xml
.
<include file=\"$(find-pkg-share tier4_system_launch)/launch/system.launch.xml\">\n<arg name=\"run_mode\" value=\"online\"/>\n<arg name=\"sensor_model\" value=\"SENSOR_MODEL\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n </include>\n
The sensing configuration parameters used in system_error_monitor are loaded from \"config/diagnostic_aggregator/sensor_kit.param.yaml\" in the \"SENSOR_MODEL
_description\" package.
Please see <exec_depend>
in package.xml
.
You can include as follows in *.launch.xml
to use vehicle.launch.xml
.
<arg name=\"vehicle_model\" default=\"sample_vehicle\" description=\"vehicle model name\"/>\n<arg name=\"sensor_model\" default=\"sample_sensor_kit\" description=\"sensor model name\"/>\n\n<include file=\"$(find-pkg-share tier4_vehicle_launch)/launch/vehicle.launch.xml\">\n<arg name=\"vehicle_model\" value=\"$(var vehicle_model)\"/>\n<arg name=\"sensor_model\" value=\"$(var sensor_model)\"/>\n</include>\n
"},{"location":"launch/tier4_vehicle_launch/#notes","title":"Notes","text":"This package finds some external packages and settings with variables and package names.
ex.)
<let name=\"vehicle_model_pkg\" value=\"$(find-pkg-share $(var vehicle_model)_description)\"/>\n
<arg name=\"config_dir\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)\"/>\n
"},{"location":"launch/tier4_vehicle_launch/#vehiclexacro","title":"vehicle.xacro","text":""},{"location":"launch/tier4_vehicle_launch/#arguments","title":"Arguments","text":"Name Type Description Default sensor_model String sensor model name \"\" vehicle_model String vehicle model name \"\""},{"location":"launch/tier4_vehicle_launch/#usage_1","title":"Usage","text":"You can write as follows in *.launch.xml
.
<arg name=\"vehicle_model\" default=\"sample_vehicle\" description=\"vehicle model name\"/>\n<arg name=\"sensor_model\" default=\"sample_sensor_kit\" description=\"sensor model name\"/>\n<arg name=\"model\" default=\"$(find-pkg-share tier4_vehicle_launch)/urdf/vehicle.xacro\"/>\n\n<node name=\"robot_state_publisher\" pkg=\"robot_state_publisher\" exec=\"robot_state_publisher\">\n<param name=\"robot_description\" value=\"$(command 'xacro $(var model) vehicle_model:=$(var vehicle_model) sensor_model:=$(var sensor_model)')\"/>\n</node>\n
"},{"location":"localization/ekf_localizer/","title":"Overview","text":""},{"location":"localization/ekf_localizer/#overview","title":"Overview","text":"The Extend Kalman Filter Localizer estimates robust and less noisy robot pose and twist by integrating the 2D vehicle dynamics model with input ego-pose and ego-twist messages. The algorithm is designed especially for fast-moving robots such as autonomous driving systems.
"},{"location":"localization/ekf_localizer/#flowchart","title":"Flowchart","text":"The overall flowchart of the ekf_localizer is described below.
"},{"location":"localization/ekf_localizer/#features","title":"Features","text":"
This package includes the following features:
"},{"location":"localization/ekf_localizer/#node","title":"Node","text":""},{"location":"localization/ekf_localizer/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Description
measured_pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
Input pose source with the measurement covariance matrix. measured_twist_with_covariance
geometry_msgs::msg::TwistWithCovarianceStamped
Input twist source with the measurement covariance matrix. initialpose
geometry_msgs::msg::PoseWithCovarianceStamped
Initial pose for EKF. The estimated pose is initialized with zeros at the start. It is initialized with this message whenever published."},{"location":"localization/ekf_localizer/#published-topics","title":"Published Topics","text":"Name Type Description ekf_odom
nav_msgs::msg::Odometry
Estimated odometry. ekf_pose
geometry_msgs::msg::PoseStamped
Estimated pose. ekf_pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
Estimated pose with covariance. ekf_biased_pose
geometry_msgs::msg::PoseStamped
Estimated pose including the yaw bias ekf_biased_pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
Estimated pose with covariance including the yaw bias ekf_twist
geometry_msgs::msg::TwistStamped
Estimated twist. ekf_twist_with_covariance
geometry_msgs::msg::TwistWithCovarianceStamped
The estimated twist with covariance. diagnostics
diagnostics_msgs::msg::DiagnosticArray
The diagnostic information."},{"location":"localization/ekf_localizer/#published-tf","title":"Published TF","text":"map
coordinate to estimated pose.The current robot state is predicted from previously estimated data using a given prediction model. This calculation is called at a constant interval (predict_frequency [Hz]
). The prediction equation is described at the end of this page.
Before the update, the Mahalanobis distance is calculated between the measured input and the predicted state, the measurement update is not performed for inputs where the Mahalanobis distance exceeds the given threshold.
The predicted state is updated with the latest measured inputs, measured_pose, and measured_twist. The updates are performed with the same frequency as prediction, usually at a high frequency, in order to enable smooth state estimation.
"},{"location":"localization/ekf_localizer/#parameter-description","title":"Parameter description","text":"The parameters are set in launch/ekf_localizer.launch
.
Note: the process noise for positions x and y is calculated automatically from the nonlinear dynamics.
"},{"location":"localization/ekf_localizer/#simple-1d-filter-parameters","title":"Simple 1D Filter Parameters","text":"Name Type Description Default value z_filter_proc_dev double Simple1DFilter - Z filter process deviation 1.0 roll_filter_proc_dev double Simple1DFilter - Roll filter process deviation 0.01 pitch_filter_proc_dev double Simple1DFilter - Pitch filter process deviation 0.01"},{"location":"localization/ekf_localizer/#for-diagnostics","title":"For diagnostics","text":"Name Type Description Default value pose_no_update_count_threshold_warn size_t The threshold at which a WARN state is triggered due to the Pose Topic update not happening continuously for a certain number of times. 50 pose_no_update_count_threshold_error size_t The threshold at which an ERROR state is triggered due to the Pose Topic update not happening continuously for a certain number of times. 250 twist_no_update_count_threshold_warn size_t The threshold at which a WARN state is triggered due to the Twist Topic update not happening continuously for a certain number of times. 50 twist_no_update_count_threshold_error size_t The threshold at which an ERROR state is triggered due to the Twist Topic update not happening continuously for a certain number of times. 250"},{"location":"localization/ekf_localizer/#misc","title":"Misc","text":"Name Type Description Default value threshold_observable_velocity_mps double Minimum value for velocity that will be used for EKF. Mainly used for dead zone in velocity sensor 0.0 (disabled)"},{"location":"localization/ekf_localizer/#how-to-tune-ekf-parameters","title":"How to tune EKF parameters","text":""},{"location":"localization/ekf_localizer/#0-preliminaries","title":"0. Preliminaries","text":"twist_additional_delay
and pose_additional_delay
to correct the time. Set the standard deviation for each sensor. The pose_measure_uncertainty_time
is for the uncertainty of the header timestamp data. You can also tune the number of smoothing steps for each observed sensor input by tuning *_smoothing_steps
. Increasing the number will improve the smoothness of the estimation, but may have an adverse effect on the estimation performance.
pose_measure_uncertainty_time
pose_smoothing_steps
twist_smoothing_steps
proc_stddev_vx_c
: set to the maximum linear acceleration
proc_stddev_wz_c
: set to the maximum angular acceleration
proc_stddev_yaw_c
: This parameter describes the correlation between the yaw and the yaw rate. A large value means that the change in yaw does not correlate with the estimated yaw rate. If this is set to 0, it means that the change in estimated yaw is equal to the yaw rate. Usually, this should be set to 0.
proc_stddev_yaw_bias_c
: This parameter is the standard deviation for the rate of change in yaw bias. In most cases, yaw bias is constant, so it can be very small, but must be non-zero.where, \\(\\theta_k\\) represents the vehicle's heading angle, including the mounting angle bias. \\(b_k\\) is a correction term for the yaw bias, and it is modeled so that \\((\\theta_k+b_k)\\) becomes the heading angle of the base_link. The pose_estimator is expected to publish the base_link in the map coordinate system. However, the yaw angle may be offset due to calibration errors. This model compensates this error and improves estimation accuracy.
"},{"location":"localization/ekf_localizer/#time-delay-model","title":"time delay model","text":"The measurement time delay is handled by an augmented state [1] (See, Section 7.3 FIXED-LAG SMOOTHING).
Note that, although the dimension of the state becomes larger, the computational complexity does not change significantly, because an analytical expansion can be applied based on the specific structure of the augmented states.
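As a minimal sketch of this idea (hypothetical names, not the actual implementation), a delayed measurement can be matched against the stored state whose age is closest to the measurement delay:
#include <algorithm>
#include <cmath>

// Maps a measurement delay to an index into the augmented state history,
// where the history stores the last extend_state_step predicted states,
// each ekf_dt seconds apart.
int delay_step(double delay_time, double ekf_dt, int extend_state_step)
{
  const int step = static_cast<int>(std::round(delay_time / ekf_dt));
  return std::clamp(step, 0, extend_state_step - 1);
}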
"},{"location":"localization/ekf_localizer/#test-result-with-autoware-ndt","title":"Test Result with Autoware NDT","text":""},{"location":"localization/ekf_localizer/#diagnostics","title":"Diagnostics","text":""},{"location":"localization/ekf_localizer/#the-conditions-that-result-in-a-warn-state","title":"The conditions that result in a WARN state","text":"pose_no_update_count_threshold_warn
/twist_no_update_count_threshold_warn
.pose_no_update_count_threshold_error
/twist_no_update_count_threshold_error
.b_k
in the current EKF state would not make any sense and cannot correctly handle these multiple yaw biases. Thus, future work includes introducing yaw bias for each sensor with yaw estimation.[1] Anderson, B. D. O., & Moore, J. B. (1979). Optimal filtering. Englewood Cliffs, NJ: Prentice-Hall.
"},{"location":"localization/geo_pose_projector/","title":"geo_pose_projector","text":""},{"location":"localization/geo_pose_projector/#geo_pose_projector","title":"geo_pose_projector","text":""},{"location":"localization/geo_pose_projector/#overview","title":"Overview","text":"This node is a simple node that subscribes to the geo-referenced pose topic and publishes the pose in the map frame.
"},{"location":"localization/geo_pose_projector/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Descriptioninput_geo_pose
geographic_msgs::msg::GeoPoseWithCovarianceStamped
geo-referenced pose /map/map_projector_info
tier4_map_msgs::msg::MapProjectedObjectInfo
map projector info"},{"location":"localization/geo_pose_projector/#published-topics","title":"Published Topics","text":"Name Type Description output_pose
geometry_msgs::msg::PoseWithCovarianceStamped
pose in map frame /tf
tf2_msgs::msg::TFMessage
tf from parent link to the child link"},{"location":"localization/geo_pose_projector/#parameters","title":"Parameters","text":"Name Type Description Default Range publish_tf boolean whether to publish tf True N/A parent_frame string parent frame for published tf map N/A child_frame string child frame for published tf pose_estimator_base_link N/A"},{"location":"localization/geo_pose_projector/#limitations","title":"Limitations","text":"The covariance conversion may be incorrect depending on the projection type you are using. The covariance of input topic is expressed in (Latitude, Longitude, Altitude) as a diagonal matrix. Currently, we assume that the x axis is the east direction and the y axis is the north direction. Thus, the conversion may be incorrect when this assumption breaks, especially when the covariance of latitude and longitude is different.
"},{"location":"localization/gyro_odometer/","title":"gyro_odometer","text":""},{"location":"localization/gyro_odometer/#gyro_odometer","title":"gyro_odometer","text":""},{"location":"localization/gyro_odometer/#purpose","title":"Purpose","text":"gyro_odometer
is the package to estimate twist by combining imu and vehicle speed.
vehicle/twist_with_covariance
geometry_msgs::msg::TwistWithCovarianceStamped
twist with covariance from vehicle imu
sensor_msgs::msg::Imu
imu from sensor"},{"location":"localization/gyro_odometer/#output","title":"Output","text":"Name Type Description twist_with_covariance
geometry_msgs::msg::TwistWithCovarianceStamped
estimated twist with covariance"},{"location":"localization/gyro_odometer/#parameters","title":"Parameters","text":"Name Type Description Default Range output_frame string output's frame id base_link N/A message_timeout_sec float delay tolerance time for message 0.2 N/A"},{"location":"localization/gyro_odometer/#assumptions-known-limits","title":"Assumptions / Known limits","text":"This directory contains packages for landmark-based localization.
Landmarks are, for example
etc.
Since these landmarks are easy to detect and their poses are easy to estimate, the ego pose can be calculated from the pose of a detected landmark, provided the pose of the landmark is written on the map in advance.
Currently, landmarks are assumed to be flat.
The following figure shows the principle of localization in the case of ar_tag_based_localizer
.
This calculated ego pose is passed to the EKF, where it is fused with the twist information and used to estimate a more accurate ego pose.
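A minimal sketch of this principle (using Eigen, with hypothetical variable names): if map_T_tag is the landmark pose written on the map and base_T_tag is the detected landmark pose relative to base_link, the ego pose follows by composition.
#include <Eigen/Geometry>

// map_T_base = map_T_tag * (base_T_tag)^-1: the ego pose that makes the
// detected landmark coincide with the landmark registered on the map.
Eigen::Isometry3d calculate_self_pose(
  const Eigen::Isometry3d & map_T_tag, const Eigen::Isometry3d & base_T_tag)
{
  return map_T_tag * base_T_tag.inverse();
}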
"},{"location":"localization/landmark_based_localizer/#node-diagram","title":"Node diagram","text":""},{"location":"localization/landmark_based_localizer/#landmark_manager","title":"landmark_manager
","text":"The definitions of the landmarks written to the map are introduced in the next section. See Map Specifications
.
The landmark_manager
is a utility package to load landmarks from the map.
Users can define landmarks as Lanelet2 4-vertex polygons. In this case, it is possible to define an arrangement in which the four vertices are not coplanar; the orientation of such a landmark is difficult to calculate. Therefore, if the 4 vertices, regarded as forming a tetrahedron, have a volume that exceeds the volume_threshold
parameter, the landmark will not publish tf_static.
See https://github.com/autowarefoundation/autoware_common/blob/main/tmp/lanelet2_extension/docs/lanelet2_format_extension.md#localization-landmarks
"},{"location":"localization/landmark_based_localizer/#about-consider_orientation","title":"Aboutconsider_orientation
","text":"The calculate_new_self_pose
function in the LandmarkManager
class includes a boolean argument named consider_orientation
. This argument determines the method used to calculate the new self pose based on detected and mapped landmarks. The following image illustrates the difference between the two methods.
consider_orientation = true
","text":"In this mode, the new self pose is calculated so that the relative Pose of the \"landmark detected from the current self pose\" is equal to the relative Pose of the \"landmark mapped from the new self pose\". This method can correct for orientation, but is strongly affected by the orientation error of the landmark detection.
"},{"location":"localization/landmark_based_localizer/#consider_orientation-false","title":"consider_orientation = false
","text":"In this mode, the new self pose is calculated so that only the relative position is correct for x, y, and z.
This method cannot correct the orientation, but it is not affected by the orientation error of the landmark detection.
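A minimal sketch contrasting the two modes (hypothetical names; see calculate_new_self_pose for the actual implementation):
#include <Eigen/Geometry>

Eigen::Isometry3d new_self_pose(
  const Eigen::Isometry3d & map_T_tag,       // landmark pose from the map
  const Eigen::Isometry3d & base_T_tag,      // detected landmark pose in base_link
  const Eigen::Isometry3d & map_T_base_now,  // current self pose
  bool consider_orientation)
{
  if (consider_orientation) {
    // Correct position and orientation together; sensitive to the
    // orientation error of the landmark detection.
    return map_T_tag * base_T_tag.inverse();
  }
  // Keep the current orientation and correct only the x, y, z position.
  Eigen::Isometry3d result = map_T_base_now;
  result.translation() =
    map_T_tag.translation() - map_T_base_now.rotation() * base_T_tag.translation();
  return result;
}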
"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/","title":"AR Tag Based Localizer","text":""},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#ar-tag-based-localizer","title":"AR Tag Based Localizer","text":"ArTagBasedLocalizer is a vision-based localization node.
This node uses the ArUco library to detect AR-Tags from camera images and calculates and publishes the pose of the ego vehicle based on these detections. The positions and orientations of the AR-Tags are assumed to be written in the Lanelet2 format.
"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#ar_tag_based_localizer-node","title":"ar_tag_based_localizer
node","text":""},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#input","title":"Input","text":"Name Type Description ~/input/lanelet2_map
autoware_auto_mapping_msgs::msg::HADMapBin
Data of lanelet2 ~/input/image
sensor_msgs::msg::Image
Camera Image ~/input/camera_info
sensor_msgs::msg::CameraInfo
Camera Info ~/input/ekf_pose
geometry_msgs::msg::PoseWithCovarianceStamped
EKF Pose without IMU correction. It is used to validate detected AR tags by filtering out False Positives. Only if the EKF Pose and the AR tag-detected Pose are within a certain temporal and spatial range, the AR tag-detected Pose is considered valid and published."},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#output","title":"Output","text":"Name Type Description ~/output/pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
Estimated Pose ~/debug/result
sensor_msgs::msg::Image
[debug topic] Image in which marker detection results are superimposed on the input image ~/debug/marker
visualization_msgs::msg::MarkerArray
[debug topic] Loaded landmarks to visualize in Rviz as thin boards /tf
geometry_msgs::msg::TransformStamped
[debug topic] TF from camera to detected tag /diagnostics
diagnostic_msgs::msg::DiagnosticArray
Diagnostics outputs"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#parameters","title":"Parameters","text":"Name Type Description Default Range marker_size float marker_size 0.6 N/A target_tag_ids array target_tag_ids ['0','1','2','3','4','5','6'] N/A base_covariance array base_covariance [0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02] N/A distance_threshold float distance_threshold(m) 13.0 N/A consider_orientation boolean consider_orientation false N/A detection_mode string detection_mode select from [DM_NORMAL, DM_FAST, DM_VIDEO_FAST] DM_NORMAL N/A min_marker_size float min_marker_size 0.02 N/A ekf_time_tolerance float ekf_time_tolerance(sec) 5.0 N/A ekf_position_tolerance float ekf_position_tolerance(m) 10.0 N/A"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#how-to-launch","title":"How to launch","text":"When launching Autoware, set artag
for pose_source
.
ros2 launch autoware_launch ... \\\npose_source:=artag \\\n...\n
"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#rosbag","title":"Rosbag","text":""},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#sample-rosbag-and-map-awsim-data","title":"Sample rosbag and map (AWSIM data)","text":"This data is simulated data created by AWSIM. Essentially, AR tag-based self-localization is not intended for such public road driving, but for driving in a smaller area, so the max driving speed is set at 15 km/h.
It is a known problem that the timing of when each AR tag begins to be detected can cause significant changes in estimation.
"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#sample-rosbag-and-map-real-world-data","title":"Sample rosbag and map (Real world data)","text":"Please remap the topic names and play it.
ros2 bag play /path/to/ar_tag_based_localizer_sample_bag/ -r 0.5 -s sqlite3 \\\n--remap /sensing/camera/front/image:=/sensing/camera/traffic_light/image_raw \\\n/sensing/camera/front/image/info:=/sensing/camera/traffic_light/camera_info\n
This dataset contains issues such as missing IMU data, and overall the accuracy is low. Even when running AR tag-based self-localization, significant difference from the true trajectory can be observed.
The image below shows the trajectory when the sample is executed and plotted.
The pull request video below should also be helpful.
https://github.com/autowarefoundation/autoware.universe/pull/4347#issuecomment-1663155248
"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#principle","title":"Principle","text":""},{"location":"localization/localization_error_monitor/","title":"localization_error_monitor","text":""},{"location":"localization/localization_error_monitor/#localization_error_monitor","title":"localization_error_monitor","text":""},{"location":"localization/localization_error_monitor/#purpose","title":"Purpose","text":"localization_error_monitor is a package for diagnosing localization errors by monitoring uncertainty of the localization results. The package monitors the following two values:
input/pose_with_cov
geometry_msgs::msg::PoseWithCovarianceStamped
localization result"},{"location":"localization/localization_error_monitor/#output","title":"Output","text":"Name Type Description debug/ellipse_marker
visualization_msgs::msg::Marker
ellipse marker diagnostics
diagnostic_msgs::msg::DiagnosticArray
diagnostics outputs"},{"location":"localization/localization_error_monitor/#parameters","title":"Parameters","text":"Name Type Description Default Range scale float scale factor for monitored values 3 N/A error_ellipse_size float error threshold for long radius of confidence ellipse [m] 1.5 N/A warn_ellipse_size float warning threshold for long radius of confidence ellipse [m] 1.2 N/A error_ellipse_size_lateral_direction float error threshold for size of confidence ellipse along lateral direction [m] 0.3 N/A warn_ellipse_size_lateral_direction float warning threshold for size of confidence ellipse along lateral direction [m] 0.25 N/A"},{"location":"localization/localization_util/","title":"localization_util","text":""},{"location":"localization/localization_util/#localization_util","title":"localization_util","text":"`localization_util`` is a localization utility package.
This package does not have a node, it is just a library.
"},{"location":"localization/ndt_scan_matcher/","title":"ndt_scan_matcher","text":""},{"location":"localization/ndt_scan_matcher/#ndt_scan_matcher","title":"ndt_scan_matcher","text":""},{"location":"localization/ndt_scan_matcher/#purpose","title":"Purpose","text":"ndt_scan_matcher is a package for position estimation using the NDT scan matching method.
There are two main functions in this package:
One optional function is regularization. Please see the regularization chapter in the back for details. It is disabled by default.
"},{"location":"localization/ndt_scan_matcher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/ndt_scan_matcher/#input","title":"Input","text":"Name Type Descriptionekf_pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
initial pose points_raw
sensor_msgs::msg::PointCloud2
sensor pointcloud sensing/gnss/pose_with_covariance
sensor_msgs::msg::PoseWithCovarianceStamped
base position for regularization term sensing/gnss/pose_with_covariance
is required only when regularization is enabled.
ndt_pose
geometry_msgs::msg::PoseStamped
estimated pose ndt_pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
estimated pose with covariance /diagnostics
diagnostic_msgs::msg::DiagnosticArray
diagnostics points_aligned
sensor_msgs::msg::PointCloud2
[debug topic] pointcloud aligned by scan matching points_aligned_no_ground
sensor_msgs::msg::PointCloud2
[debug topic] no ground pointcloud aligned by scan matching initial_pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
[debug topic] initial pose used in scan matching multi_ndt_pose
geometry_msgs::msg::PoseArray
[debug topic] estimated poses from multiple initial poses in real-time covariance estimation multi_initial_pose
geometry_msgs::msg::PoseArray
[debug topic] initial poses for real-time covariance estimation exe_time_ms
tier4_debug_msgs::msg::Float32Stamped
[debug topic] execution time for scan matching [ms] transform_probability
tier4_debug_msgs::msg::Float32Stamped
[debug topic] score of scan matching no_ground_transform_probability
tier4_debug_msgs::msg::Float32Stamped
[debug topic] score of scan matching based on no ground LiDAR scan iteration_num
tier4_debug_msgs::msg::Int32Stamped
[debug topic] number of scan matching iterations initial_to_result_relative_pose
geometry_msgs::msg::PoseStamped
[debug topic] relative pose between the initial point and the convergence point initial_to_result_distance
tier4_debug_msgs::msg::Float32Stamped
[debug topic] distance difference between the initial point and the convergence point [m] initial_to_result_distance_old
tier4_debug_msgs::msg::Float32Stamped
[debug topic] distance difference between the older of the two initial points used in linear interpolation and the convergence point [m] initial_to_result_distance_new
tier4_debug_msgs::msg::Float32Stamped
[debug topic] distance difference between the newer of the two initial points used in linear interpolation and the convergence point [m] ndt_marker
visualization_msgs::msg::MarkerArray
[debug topic] markers for debugging monte_carlo_initial_pose_marker
visualization_msgs::msg::MarkerArray
[debug topic] particles used in initial position estimation"},{"location":"localization/ndt_scan_matcher/#service","title":"Service","text":"Name Type Description ndt_align_srv
autoware_localization_srvs::srv::PoseWithCovarianceStamped
service to estimate initial pose"},{"location":"localization/ndt_scan_matcher/#parameters","title":"Parameters","text":""},{"location":"localization/ndt_scan_matcher/#core-parameters","title":"Core Parameters","text":"Name Type Description base_frame
string Vehicle reference frame ndt_base_frame
string NDT reference frame map_frame
string map frame input_sensor_points_queue_size
int Subscriber queue size trans_epsilon
double The max difference between two consecutive transformations to consider convergence step_size
double The newton line search maximum step length resolution
double The ND voxel grid resolution [m] max_iterations
int The number of iterations required to calculate alignment converged_param_type
int The type of indicators for scan matching score (0: TP, 1: NVTL) converged_param_transform_probability
double TP threshold for deciding whether to trust the estimation result (when converged_param_type = 0) converged_param_nearest_voxel_transformation_likelihood
double NVTL threshold for deciding whether to trust the estimation result (when converged_param_type = 1) initial_estimate_particles_num
int The number of particles to estimate initial pose n_startup_trials
int The number of initial random trials in the TPE (Tree-Structured Parzen Estimator). lidar_topic_timeout_sec
double Tolerance of timestamp difference between current time and sensor pointcloud initial_pose_timeout_sec
int Tolerance of timestamp difference between initial_pose and sensor pointcloud. [sec] initial_pose_distance_tolerance_m
double Tolerance of distance difference between two initial poses used for linear interpolation. [m] num_threads
int Number of threads used for parallel computing output_pose_covariance
std::array The covariance of output pose (TP: Transform Probability, NVTL: Nearest Voxel Transform Probability)
"},{"location":"localization/ndt_scan_matcher/#regularization","title":"Regularization","text":""},{"location":"localization/ndt_scan_matcher/#abstract","title":"Abstract","text":"This is a function that adds the regularization term to the NDT optimization problem as follows.
\\[ \\begin{align} \\min_{\\mathbf{R},\\mathbf{t}} \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) +\\mathrm{scale\\ factor}\\cdot \\left| \\mathbf{R}^\\top (\\mathbf{t}_{\\mathrm{base}}-\\mathbf{t}) \\cdot \\begin{pmatrix} 1\\\\ 0\\\\ 0 \\end{pmatrix} \\right|^2 \\end{align} \\], where \\(\\mathbf{t}_{\\mathrm{base}}\\) is the base position measured by GNSS or other means. NDT(R,t) stands for the pure NDT cost function. The regularization term shifts the optimal solution toward the base position in the longitudinal direction of the vehicle. Only errors along the longitudinal direction with respect to the base position are considered; errors along the Z-axis and the lateral axis are not considered.
Although the regularization term has rotation as a parameter, the gradient and Hessian associated with it are not computed, in order to stabilize the optimization. Specifically, the gradients are computed as follows.
\\[ \\begin{align} &g_x=\\nabla_x \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + 2 \\mathrm{scale\\ factor} \\cos\\theta_z\\cdot e_{\\mathrm{longitudinal}} \\\\ &g_y=\\nabla_y \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + 2 \\mathrm{scale\\ factor} \\sin\\theta_z\\cdot e_{\\mathrm{longitudinal}} \\\\ &g_z=\\nabla_z \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) \\\\ &g_\\mathbf{R}=\\nabla_\\mathbf{R} \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) \\end{align} \\]Regularization is disabled by default. If you wish to use it, please edit the following parameters to enable it.
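A minimal sketch of these gradient contributions (hypothetical names): e_longitudinal is the longitudinal error between the base position and the current translation, and theta_z is the vehicle yaw.
#include <cmath>

void add_regularization_gradient(
  double scale_factor, double e_longitudinal, double theta_z,
  double & g_x, double & g_y)
{
  // Matches the g_x and g_y terms above; g_z and the rotation gradient are
  // left untouched, exactly as described.
  g_x += 2.0 * scale_factor * std::cos(theta_z) * e_longitudinal;
  g_y += 2.0 * scale_factor * std::sin(theta_z) * e_longitudinal;
}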
"},{"location":"localization/ndt_scan_matcher/#where-is-regularization-available","title":"Where is regularization available","text":"This feature is effective on feature-less roads where GNSS is available, such as
By remapping the base position topic to something other than GNSS, as described below, it can be valid outside of these.
"},{"location":"localization/ndt_scan_matcher/#using-other-base-position","title":"Using other base position","text":"Other than GNSS, you can give other global position topics obtained from magnetic markers, visual markers or etc. if they are available in your environment. (Currently Autoware does not provide a node that gives such pose.) To use your topic for regularization, you need to remap the input_regularization_pose_topic
with your topic in ndt_scan_matcher.launch.xml
. By default, it is remapped with /sensing/gnss/pose_with_covariance
.
Since this function determines the base position by linear interpolation from the recently subscribed poses, topics that are published at a low frequency relative to the driving speed cannot be used. Inappropriate linear interpolation may result in bad optimization results.
When using GNSS for base location, the regularization can have negative effects in tunnels, indoors, and near skyscrapers. This is because if the base position is far off from the true value, NDT scan matching may converge to inappropriate optimal position.
"},{"location":"localization/ndt_scan_matcher/#parameters_1","title":"Parameters","text":"Name Type Descriptionregularization_enabled
bool Flag to add regularization term to NDT optimization (FALSE by default) regularization_scale_factor
double Coefficient of the regularization term. Regularization is disabled by default because GNSS is not always accurate enough to serve the appropriate base position in any scenes.
If the scale_factor is too large, the NDT will be drawn to the base position and scan matching may fail. Conversely, if it is too small, the regularization benefit will be lost.
Note that setting scale_factor to 0 is equivalent to disabling regularization.
"},{"location":"localization/ndt_scan_matcher/#example","title":"Example","text":"The following figures show tested maps.
The following figures show the trajectories estimated on the feature-less map with standard NDT and regularization-enabled NDT, respectively. The color of the trajectory indicates the error (meter) from the reference trajectory, which is computed with the feature-rich map.
"},{"location":"localization/ndt_scan_matcher/#dynamic-map-loading","title":"Dynamic map loading","text":"
Autoware supports dynamic map loading feature for ndt_scan_matcher
. Using this feature, NDT dynamically requests the surrounding pointcloud map from pointcloud_map_loader
, and then receives and preprocesses the map in an online fashion.
Using the feature, ndt_scan_matcher
can theoretically handle maps of any size in terms of memory usage. (Note that limitations may still exist due to other factors, e.g. floating-point error.)
debug/loaded_pointcloud_map
sensor_msgs::msg::PointCloud2
pointcloud maps used for localization (for debug)"},{"location":"localization/ndt_scan_matcher/#additional-client","title":"Additional client","text":"Name Type Description client_map_loader
autoware_map_msgs::srv::GetDifferentialPointCloudMap
map loading client"},{"location":"localization/ndt_scan_matcher/#parameters_2","title":"Parameters","text":"Name Type Description dynamic_map_loading_update_distance
double Distance traveled to load new map(s) dynamic_map_loading_map_radius
double Map loading radius for every update lidar_radius
double LiDAR radius used for localization (only used for diagnosis)"},{"location":"localization/ndt_scan_matcher/#notes-for-dynamic-map-loading","title":"Notes for dynamic map loading","text":"To use dynamic map loading feature for ndt_scan_matcher
, you also need to split the PCD files into grids (recommended size: 20[m] x 20[m])
Note that the dynamic map loading may FAIL if the map is split into two or more large-size maps (e.g. 1000[m] x 1000[m]). Please provide either of the following:
Here is a split PCD map for sample-map-rosbag
from Autoware tutorial: sample-map-rosbag_split.zip
This is a function that uses a LiDAR scan with the ground points removed to estimate the scan matching score. This score can reflect the current localization performance more accurately. See the related issue.
"},{"location":"localization/ndt_scan_matcher/#parameters_3","title":"Parameters","text":"Name Type Descriptionestimate_scores_by_no_ground_points
bool Flag for using scan matching score based on no ground LiDAR scan (FALSE by default) z_margin_for_ground_removal
double Z-value margin for removing ground points"},{"location":"localization/ndt_scan_matcher/#2d-real-time-covariance-estimation","title":"2D real-time covariance estimation","text":""},{"location":"localization/ndt_scan_matcher/#abstract_2","title":"Abstract","text":"Calculate the 2D covariance (xx, xy, yx, yy) in real time using the NDT convergence from multiple initial poses. The arrangement of the multiple initial poses is efficiently limited by the Hessian matrix of the NDT score function. In this implementation, the number of initial positions is fixed to simplify the code. The covariance can be seen as an error ellipse on ndt_pose_with_covariance in RViz2. See the original paper.
Note that this function may degrade healthy system behavior if it consumes too many computational resources.
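A minimal sketch of the covariance computation (hypothetical names): given the convergence points obtained by re-running NDT from the offset initial poses, take their sample covariance.
#include <Eigen/Dense>
#include <vector>

Eigen::Matrix2d estimate_xy_covariance(const std::vector<Eigen::Vector2d> & converged_xy)
{
  Eigen::Vector2d mean = Eigen::Vector2d::Zero();
  for (const auto & p : converged_xy) mean += p;
  mean /= static_cast<double>(converged_xy.size());

  Eigen::Matrix2d cov = Eigen::Matrix2d::Zero();
  for (const auto & p : converged_xy) cov += (p - mean) * (p - mean).transpose();
  return cov / static_cast<double>(converged_xy.size());
}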
"},{"location":"localization/ndt_scan_matcher/#parameters_4","title":"Parameters","text":"initial_pose_offset_model is rotated around (x,y) = (0,0) in the direction of the first principal component of the Hessian matrix. initial_pose_offset_model_x & initial_pose_offset_model_y must have the same number of elements.
Name Type Descriptionuse_covariance_estimation
bool Flag for using real-time covariance estimation (FALSE by default) initial_pose_offset_model_x
std::vector X-axis offset [m] initial_pose_offset_model_y
std::vector Y-axis offset [m]"},{"location":"localization/pose2twist/","title":"pose2twist","text":""},{"location":"localization/pose2twist/#pose2twist","title":"pose2twist","text":""},{"location":"localization/pose2twist/#purpose","title":"Purpose","text":"This pose2twist
calculates the velocity from the input pose history. In addition to the computed twist, this node outputs the linear-x and angular-z components as a float message to simplify debugging.
The twist.linear.x
is calculated as sqrt(dx * dx + dy * dy + dz * dz) / dt
, and the values in the y
and z
fields are zero. The twist.angular
is calculated as d_roll / dt
, d_pitch / dt
and d_yaw / dt
for each field.
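A minimal sketch of this computation (hypothetical names, with the pose differences and dt taken from two consecutive poses):
#include <cmath>

struct SimpleTwist
{
  double linear_x;  // linear.y and linear.z stay zero
  double angular_x, angular_y, angular_z;
};

SimpleTwist compute_twist(
  double dx, double dy, double dz,
  double d_roll, double d_pitch, double d_yaw, double dt)
{
  SimpleTwist t;
  t.linear_x = std::sqrt(dx * dx + dy * dy + dz * dz) / dt;
  t.angular_x = d_roll / dt;
  t.angular_y = d_pitch / dt;
  t.angular_z = d_yaw / dt;
  return t;
}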
none.
"},{"location":"localization/pose2twist/#assumptions-known-limits","title":"Assumptions / Known limits","text":"none.
"},{"location":"localization/pose_initializer/","title":"pose_initializer","text":""},{"location":"localization/pose_initializer/#pose_initializer","title":"pose_initializer","text":""},{"location":"localization/pose_initializer/#purpose","title":"Purpose","text":"The pose_initializer
is the package to send an initial pose to ekf_localizer
. It receives a roughly estimated initial pose from GNSS or the user, passes the pose to ndt_scan_matcher
, and obtains a calculated ego pose from ndt_scan_matcher
via a service call. Finally, it publishes the initial pose to ekf_localizer
. This node depends on the map height fitter library. See here for more details.
ekf_enabled
bool If true, EKF localizer is activated. ndt_enabled
bool If true, the pose will be estimated by NDT scan matcher, otherwise it is passed through. stop_check_enabled
bool If true, initialization is accepted only when the vehicle is stopped. stop_check_duration
double The duration used for the stop check above. gnss_enabled
bool If true, use the GNSS pose when no pose is specified. gnss_pose_timeout
double The duration that the GNSS pose is valid."},{"location":"localization/pose_initializer/#services","title":"Services","text":"Name Type Description /localization/initialize
autoware_adapi_v1_msgs::srv::InitializeLocalization initial pose from api"},{"location":"localization/pose_initializer/#clients","title":"Clients","text":"Name Type Description /localization/pose_estimator/ndt_align_srv
tier4_localization_msgs::srv::PoseWithCovarianceStamped pose estimation service"},{"location":"localization/pose_initializer/#subscriptions","title":"Subscriptions","text":"Name Type Description /sensing/gnss/pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped pose from gnss /sensing/vehicle_velocity_converter/twist_with_covariance
geometry_msgs::msg::TwistStamped twist for stop check"},{"location":"localization/pose_initializer/#publications","title":"Publications","text":"Name Type Description /localization/initialization_state
autoware_adapi_v1_msgs::msg::LocalizationInitializationState pose initialization state /initialpose3d
geometry_msgs::msg::PoseWithCovarianceStamped calculated initial ego pose"},{"location":"localization/pose_initializer/#connection-with-default-ad-api","title":"Connection with Default AD API","text":"This pose_initializer
is used via the default AD API. For a detailed description of the API, please refer to the description of default_ad_api
.
The pose_instability_detector
package includes a node designed to monitor the stability of /localization/kinematic_state
, which is an output topic of the Extended Kalman Filter (EKF).
This node triggers periodic timer callbacks to compare two poses:
the pose predicted by integrating the input twist values onto an earlier pose of /localization/kinematic_state
over a duration specified by interval_sec
, and the latest pose from /localization/kinematic_state
.
topic.
If this node outputs WARN messages to /diagnostics
, it means that the EKF output is significantly different from the integrated twist values. This discrepancy suggests that there may be an issue with either the estimated pose or the input twist.
The following diagram provides an overview of what the timeline of this process looks like:
"},{"location":"localization/pose_instability_detector/#parameters","title":"Parameters","text":"Name Type Description Default Range interval_sec float The interval of timer_callback in seconds. 1 >0 threshold_diff_position_x float The threshold of diff_position x (m). 1 \u22650.0 threshold_diff_position_y float The threshold of diff_position y (m). 1 \u22650.0 threshold_diff_position_z float The threshold of diff_position z (m). 1 \u22650.0 threshold_diff_angle_x float The threshold of diff_angle x (rad). 1 \u22650.0 threshold_diff_angle_y float The threshold of diff_angle y (rad). 1 \u22650.0 threshold_diff_angle_z float The threshold of diff_angle z (rad). 1 \u22650.0"},{"location":"localization/pose_instability_detector/#input","title":"Input","text":"Name Type Description~/input/odometry
nav_msgs::msg::Odometry Pose estimated by EKF ~/input/twist
geometry_msgs::msg::TwistWithCovarianceStamped Twist"},{"location":"localization/pose_instability_detector/#output","title":"Output","text":"Name Type Description ~/debug/diff_pose
geometry_msgs::msg::PoseStamped diff_pose /diagnostics
diagnostic_msgs::msg::DiagnosticArray Diagnostics"},{"location":"localization/stop_filter/","title":"stop_filter","text":""},{"location":"localization/stop_filter/#stop_filter","title":"stop_filter","text":""},{"location":"localization/stop_filter/#purpose","title":"Purpose","text":"When this function did not exist, each node used a different criterion to determine whether the vehicle is stopping or not, resulting that some nodes were in operation of stopping the vehicle and some nodes continued running in the drive mode. This node aims to:
input/odom
nav_msgs::msg::Odometry
localization odometry"},{"location":"localization/stop_filter/#output","title":"Output","text":"Name Type Description output/odom
nav_msgs::msg::Odometry
odometry with suppressed longitudinal and yaw twist debug/stop_flag
tier4_debug_msgs::msg::BoolStamped
flag to represent whether the vehicle is stopping or not"},{"location":"localization/stop_filter/#parameters","title":"Parameters","text":"Name Type Description Default Range vx_threshold float Longitudinal velocity threshold to determine if the vehicle is stopping. [m/s] 0.01 \u22650.0 wz_threshold float Yaw velocity threshold to determine if the vehicle is stopping. [rad/s] 0.01 \u22650.0"},{"location":"localization/tree_structured_parzen_estimator/","title":"tree_structured_parzen_estimator","text":""},{"location":"localization/tree_structured_parzen_estimator/#tree_structured_parzen_estimator","title":"tree_structured_parzen_estimator","text":"`tree_structured_parzen_estimator`` is a package for black-box optimization.
This package does not have a node, it is just a library.
"},{"location":"localization/twist2accel/","title":"twist2accel","text":""},{"location":"localization/twist2accel/#twist2accel","title":"twist2accel","text":""},{"location":"localization/twist2accel/#purpose","title":"Purpose","text":"This package is responsible for estimating acceleration using the output of ekf_localizer
. It uses a lowpass filter to mitigate the noise.
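A minimal sketch of the estimation (hypothetical names, with accel_lowpass_gain as described in the parameter table below):
double update_accel(
  double accel_filtered_prev, double v_now, double v_prev, double dt, double gain)
{
  const double accel_raw = (v_now - v_prev) / dt;  // numerical differentiation
  return gain * accel_filtered_prev + (1.0 - gain) * accel_raw;  // first-order lowpass
}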
input/odom
nav_msgs::msg::Odometry
localization odometry input/twist
geometry_msgs::msg::TwistWithCovarianceStamped
twist"},{"location":"localization/twist2accel/#output","title":"Output","text":"Name Type Description output/accel
geometry_msgs::msg::AccelWithCovarianceStamped
estimated acceleration"},{"location":"localization/twist2accel/#parameters","title":"Parameters","text":"Name Type Description use_odom
bool use odometry if true, else use twist input (default: true) accel_lowpass_gain
double lowpass gain for lowpass filter in estimating acceleration (default: 0.9)"},{"location":"localization/twist2accel/#future-work","title":"Future work","text":"Future work includes integrating acceleration into the EKF state.
"},{"location":"localization/yabloc/","title":"YabLoc","text":""},{"location":"localization/yabloc/#yabloc","title":"YabLoc","text":"YabLoc is vision-based localization with vector map. https://youtu.be/Eaf6r_BNFfk
It estimates position by matching road surface markings extracted from images against a vector map. Point cloud maps and LiDAR are not required. YabLoc enables users to localize vehicles that are not equipped with LiDAR, and to operate in environments where point cloud maps are not available.
"},{"location":"localization/yabloc/#packages","title":"Packages","text":"When launching autoware, if you set pose_source:=yabloc
as an argument, YabLoc will be launched instead of NDT. By default, pose_source
is ndt
.
A sample command to run YabLoc is as follows
ros2 launch autoware_launch logging_simulator.launch.xml \\\nmap_path:=$HOME/autoware_map/sample-map-rosbag\\\nvehicle_model:=sample_vehicle \\\nsensor_model:=sample_sensor_kit \\\npose_source:=yabloc\n
"},{"location":"localization/yabloc/#architecture","title":"Architecture","text":""},{"location":"localization/yabloc/#principle","title":"Principle","text":"The diagram below illustrates the basic principle of YabLoc. It extracts road surface markings by extracting the line segments using the road area obtained from graph-based segmentation. The red line at the center-top of the diagram represents the line segments identified as road surface markings. YabLoc transforms these segments for each particle and determines the particle's weight by comparing them with the cost map generated from Lanelet2.
"},{"location":"localization/yabloc/#visualization","title":"Visualization","text":""},{"location":"localization/yabloc/#core-visualization-topics","title":"Core visualization topics","text":"These topics are not visualized by default.
index topic name description 1/localization/yabloc/pf/predicted_particle_marker
particle distribution of the particle filter. Red particles are probable candidates. 2 /localization/yabloc/pf/scored_cloud
3D projected line segments. The color indicates how well they match the map. 3 /localization/yabloc/image_processing/lanelet2_overlay_image
overlay of lanelet2 (yellow lines) onto image based on estimated pose. If they match well with the actual road markings, it means that the localization performs well."},{"location":"localization/yabloc/#image-topics-for-debug","title":"Image topics for debug","text":"These topics are not visualized by default.
index topic name description 1/localization/yabloc/pf/cost_map_image
cost map made from lanelet2 2 /localization/yabloc/pf/match_image
projected line segments 3 /localization/yabloc/image_processing/image_with_colored_line_segment
classified line segments. green line segments are used in particle correction 4 /localization/yabloc/image_processing/lanelet2_overlay_image
overlay of lanelet2 5 /localization/yabloc/image_processing/segmented_image
graph based segmentation result"},{"location":"localization/yabloc/#limitation","title":"Limitation","text":"This package contains some executable nodes related to map. Also, This provides some yabloc common library.
It estimates the height and tilt of the ground from lanelet2.
"},{"location":"localization/yabloc/yabloc_common/#input-outputs","title":"Input / Outputs","text":""},{"location":"localization/yabloc/yabloc_common/#input","title":"Input","text":"Name Type Descriptioninput/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map input/pose
geometry_msgs::msg::PoseStamped
estimated self pose"},{"location":"localization/yabloc/yabloc_common/#output","title":"Output","text":"Name Type Description output/ground
std_msgs::msg::Float32MultiArray
estimated ground parameters. it contains x, y, z, normal_x, normal_y, normal_z. output/ground_markers
visualization_msgs::msg::Marker
visualization of estimated ground plane output/ground_status
std_msgs::msg::String
status log of ground plane estimation output/height
std_msgs::msg::Float32
altitude output/near_cloud
sensor_msgs::msg::PointCloud2
point cloud extracted from lanelet2 and used for ground tilt estimation"},{"location":"localization/yabloc/yabloc_common/#parameters","title":"Parameters","text":"Name Type Description Default Range force_zero_tilt boolean if true, the tilt is always determined to be horizontal False N/A K float the number of neighbors for ground search on a map 50 N/A R float radius for ground search on a map [m] 10 N/A"},{"location":"localization/yabloc/yabloc_common/#ll2_decomposer","title":"ll2_decomposer","text":""},{"location":"localization/yabloc/yabloc_common/#purpose_1","title":"Purpose","text":"This node extracts the elements related to the road surface markings and yabloc from lanelet2.
"},{"location":"localization/yabloc/yabloc_common/#input-outputs_1","title":"Input / Outputs","text":""},{"location":"localization/yabloc/yabloc_common/#input_1","title":"Input","text":"Name Type Descriptioninput/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map"},{"location":"localization/yabloc/yabloc_common/#output_1","title":"Output","text":"Name Type Description output/ll2_bounding_box
sensor_msgs::msg::PointCloud2
bounding boxes extracted from lanelet2 output/ll2_road_marking
sensor_msgs::msg::PointCloud2
road surface markings extracted from lanelet2 output/ll2_sign_board
sensor_msgs::msg::PointCloud2
traffic sign boards extracted from lanelet2 output/sign_board_marker
visualization_msgs::msg::MarkerArray
visualized traffic sign boards"},{"location":"localization/yabloc/yabloc_common/#parameters_1","title":"Parameters","text":"Name Type Description Default Range road_marking_labels array line string types that indicating road surface markings in lanelet2 ['cross_walk', 'zebra_marking', 'line_thin', 'line_thick', 'pedestrian_marking', 'stop_line', 'road_border'] N/A sign_board_labels array line string types that indicating traffic sign boards in lanelet2 ['sign-board'] N/A bounding_box_labels array line string types that indicating not mapped areas in lanelet2 ['none'] N/A"},{"location":"localization/yabloc/yabloc_image_processing/","title":"yabloc_image_processing","text":""},{"location":"localization/yabloc/yabloc_image_processing/#yabloc_image_processing","title":"yabloc_image_processing","text":"This package contains some executable nodes related to image processing.
This node extract all line segments from gray scale image.
"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input","title":"Input","text":"Name Type Descriptioninput/image_raw
sensor_msgs::msg::Image
undistorted image"},{"location":"localization/yabloc/yabloc_image_processing/#output","title":"Output","text":"Name Type Description output/image_with_line_segments
sensor_msgs::msg::Image
image with line segments highlighted output/line_segments_cloud
sensor_msgs::msg::PointCloud2
detected line segments as a point cloud. Each point contains x, y, z, normal_x, normal_y, and normal_z; the z and normal_z fields are always empty."},{"location":"localization/yabloc/yabloc_image_processing/#graph_segmentation","title":"graph_segmentation","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_1","title":"Purpose","text":"This node extracts the road surface region by graph-based segmentation.
"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_1","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_1","title":"Input","text":"Name Type Descriptioninput/image_raw
sensor_msgs::msg::Image
undistorted image"},{"location":"localization/yabloc/yabloc_image_processing/#output_1","title":"Output","text":"Name Type Description output/mask_image
sensor_msgs::msg::Image
image with masked segments determined as road surface area output/segmented_image
sensor_msgs::msg::Image
segmented image for visualization"},{"location":"localization/yabloc/yabloc_image_processing/#parameters","title":"Parameters","text":"Name Type Description Default Range target_height_ratio float height on the image to retrieve the candidate road surface 0.85 N/A target_candidate_box_width float size of the square area to search for candidate road surfaces 15 N/A pickup_additional_graph_segment boolean if this is true, additional regions of similar color are retrieved 1 N/A similarity_score_threshold float threshold for picking up additional areas 0.8 N/A sigma float parameter for cv::ximgproc::segmentation::GraphSegmentation 0.5 N/A k float parameter for cv::ximgproc::segmentation::GraphSegmentation 300 N/A min_size float parameter for cv::ximgproc::segmentation::GraphSegmentation 100 N/A"},{"location":"localization/yabloc/yabloc_image_processing/#segment_filter","title":"segment_filter","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_2","title":"Purpose","text":"This is a node that integrates the results of graph_segment and lsd to extract road surface markings.
"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_2","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_2","title":"Input","text":"Name Type Descriptioninput/line_segments_cloud
sensor_msgs::msg::PointCloud2
detected line segment input/mask_image
sensor_msgs::msg::Image
image with masked segments determined as road surface area input/camera_info
sensor_msgs::msg::CameraInfo
undistorted camera info"},{"location":"localization/yabloc/yabloc_image_processing/#output_2","title":"Output","text":"Name Type Description output/line_segments_cloud
sensor_msgs::msg::PointCloud2
filtered line segments for visualization output/projected_image
sensor_msgs::msg::Image
projected filtered line segments for visualization output/projected_line_segments_cloud
sensor_msgs::msg::PointCloud2
projected filtered line segments"},{"location":"localization/yabloc/yabloc_image_processing/#parameters_1","title":"Parameters","text":"Name Type Description Default Range min_segment_length float min length threshold (if it is negative, it is unlimited) 1.5 N/A max_segment_distance float max distance threshold (if it is negative, it is unlimited) 30 N/A max_lateral_distance float max lateral distance threshold (if it is negative, it is unlimited) 10 N/A publish_image_with_segment_for_debug boolean toggle whether to publish the filtered line segment for debug 1 N/A max_range float range of debug projection visualization 20 N/A image_size float image size of debug projection visualization 800 N/A"},{"location":"localization/yabloc/yabloc_image_processing/#undistort","title":"undistort","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_3","title":"Purpose","text":"This node performs image resizing and undistortion at the same time.
"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_3","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_3","title":"Input","text":"Name Type Descriptioninput/camera_info
sensor_msgs::msg::CameraInfo
camera info input/image_raw
sensor_msgs::msg::Image
raw camera image input/image_raw/compressed
sensor_msgs::msg::CompressedImage
compressed camera image This node subscribes to both compressed image and raw image topics. If raw image is subscribed to even once, compressed image will no longer be subscribed to. This is to avoid redundant decompression within Autoware.
"},{"location":"localization/yabloc/yabloc_image_processing/#output_3","title":"Output","text":"Name Type Descriptionoutput/camera_info
sensor_msgs::msg::CameraInfo
resized camera info output/image_raw
sensor_msgs::msg::CompressedImage
undistorted and resized image"},{"location":"localization/yabloc/yabloc_image_processing/#parameters_2","title":"Parameters","text":"Name Type Description Default Range use_sensor_qos boolean whether to use sensor qos or not True N/A width float resized image width size 800 N/A override_frame_id string value for overriding the camera's frame_id. if blank, frame_id of static_tf is not overwritten N/A"},{"location":"localization/yabloc/yabloc_image_processing/#about-tf_static-overriding","title":"about tf_static overriding","text":"click to open Some nodes requires `/tf_static` from `/base_link` to the frame_id of `/sensing/camera/traffic_light/image_raw/compressed` (e.g. `/traffic_light_left_camera/camera_optical_link`). You can verify that the tf_static is correct with the following command. ros2 run tf2_ros tf2_echo base_link traffic_light_left_camera/camera_optical_link\n
If the wrong `/tf_static` are broadcasted due to using a prototype vehicle, not having accurate calibration data, or some other unavoidable reason, it is useful to give the frame_id in `override_camera_frame_id`. If you give it a non-empty string, `/image_processing/undistort_node` will rewrite the frame_id in camera_info. For example, you can give a different tf_static as follows. ros2 launch yabloc_launch sample_launch.xml override_camera_frame_id:=fake_camera_optical_link\nros2 run tf2_ros static_transform_publisher \\\n--frame-id base_link \\\n--child-frame-id fake_camera_optical_link \\\n--roll -1.57 \\\n--yaw -1.570\n
"},{"location":"localization/yabloc/yabloc_image_processing/#lanelet2_overlay","title":"lanelet2_overlay","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_4","title":"Purpose","text":"This node overlays lanelet2 on the camera image based on the estimated self-position.
"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_4","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_4","title":"Input","text":"Name Type Descriptioninput/pose
geometry_msgs::msg::PoseStamped
estimated self pose input/projected_line_segments_cloud
sensor_msgs::msg::PointCloud2
projected line segments including non-road markings input/camera_info
sensor_msgs::msg::CameraInfo
undistorted camera info input/image_raw
sensor_msgs::msg::Image
undistorted camera image input/ground
std_msgs::msg::Float32MultiArray
ground tilt input/ll2_road_marking
sensor_msgs::msg::PointCloud2
lanelet2 elements regarding road surface markings input/ll2_sign_board
sensor_msgs::msg::PointCloud2
lanelet2 elements regarding traffic sign boards"},{"location":"localization/yabloc/yabloc_image_processing/#output_4","title":"Output","text":"Name Type Description output/lanelet2_overlay_image
sensor_msgs::msg::Image
lanelet2 overlaid image output/projected_marker
visualization_msgs::msg::Marker
3d projected line segments including non-road markings"},{"location":"localization/yabloc/yabloc_image_processing/#line_segments_overlay","title":"line_segments_overlay","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_5","title":"Purpose","text":"This node visualize classified line segments on the camera image
"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_5","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_5","title":"Input","text":"Name Type Descriptioninput/line_segments_cloud
sensor_msgs::msg::PointCloud2
classified line segments input/image_raw
sensor_msgs::msg::Image
undistorted camera image"},{"location":"localization/yabloc/yabloc_image_processing/#output_5","title":"Output","text":"Name Type Description output/image_with_colored_line_segments
sensor_msgs::msg::Image
image with highlighted line segments"},{"location":"localization/yabloc/yabloc_monitor/","title":"yabloc_monitor","text":""},{"location":"localization/yabloc/yabloc_monitor/#yabloc_monitor","title":"yabloc_monitor","text":"YabLoc monitor is a node that monitors the status of the YabLoc localization system. It is a wrapper node that monitors the status of the YabLoc localization system and publishes the status as diagnostics.
"},{"location":"localization/yabloc/yabloc_monitor/#feature","title":"Feature","text":""},{"location":"localization/yabloc/yabloc_monitor/#availability","title":"Availability","text":"The node monitors the final output pose of YabLoc to verify the availability of YabLoc.
"},{"location":"localization/yabloc/yabloc_monitor/#others","title":"Others","text":"To be added,
"},{"location":"localization/yabloc/yabloc_monitor/#interfaces","title":"Interfaces","text":""},{"location":"localization/yabloc/yabloc_monitor/#input","title":"Input","text":"Name Type Description~/input/yabloc_pose
geometry_msgs/PoseStamped
The final output pose of YabLoc"},{"location":"localization/yabloc/yabloc_monitor/#output","title":"Output","text":"Name Type Description /diagnostics
diagnostic_msgs/DiagnosticArray
Diagnostics outputs"},{"location":"localization/yabloc/yabloc_monitor/#parameters","title":"Parameters","text":"Name Type Description Default Range availability/timestamp_tolerance float tolerable time difference between current time and latest estimated pose 1 N/A"},{"location":"localization/yabloc/yabloc_particle_filter/","title":"yabLoc_particle_filter","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#yabloc_particle_filter","title":"yabLoc_particle_filter","text":"This package contains some executable nodes related to particle filter.
input/initialpose
geometry_msgs::msg::PoseWithCovarianceStamped
to specify the initial position of particles input/twist_with_covariance
geometry_msgs::msg::TwistWithCovarianceStamped
linear velocity and angular velocity of prediction update input/height
std_msgs::msg::Float32
ground height input/weighted_particles
yabloc_particle_filter::msg::ParticleArray
particles weighted by corrector nodes"},{"location":"localization/yabloc/yabloc_particle_filter/#output","title":"Output","text":"Name Type Description output/pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
particle centroid with covariance output/pose
geometry_msgs::msg::PoseStamped
particle centroid with covariance output/predicted_particles
yabloc_particle_filter::msg::ParticleArray
particles weighted by predictor nodes debug/init_marker
visualization_msgs::msg::Marker
debug visualization of initial position debug/particles_marker_array
visualization_msgs::msg::MarkerArray
particles visualization. published if visualize
is true"},{"location":"localization/yabloc/yabloc_particle_filter/#parameters","title":"Parameters","text":"Name Type Description Default Range visualize boolean whether particles are also published in visualization_msgs or not True N/A static_linear_covariance float overriding covariance of /twist_with_covariance
0.04 N/A static_angular_covariance float overriding covariance of /twist_with_covariance
0.006 N/A resampling_interval_seconds float the interval of particle resampling 1.0 N/A num_of_particles float the number of particles 500 N/A prediction_rate float frequency of forecast updates, in Hz 50.0 N/A cov_xx_yy array the covariance of initial pose [2.0, 0.25] N/A"},{"location":"localization/yabloc/yabloc_particle_filter/#services","title":"Services","text":"Name Type Description yabloc_trigger_srv
std_srvs::srv::SetBool
activation and deactivation of yabloc estimation"},{"location":"localization/yabloc/yabloc_particle_filter/#gnss_particle_corrector","title":"gnss_particle_corrector","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#purpose_1","title":"Purpose","text":"ublox_msgs::msg::NavPVT
and geometry_msgs::msg::PoseWithCovarianceStamped
.input/height
std_msgs::msg::Float32
ground height input/predicted_particles
yabloc_particle_filter::msg::ParticleArray
predicted particles input/pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
gnss measurement. used if use_ublox_msg
is false input/navpvt
ublox_msgs::msg::NavPVT
gnss measurement. used if use_ublox_msg
is true"},{"location":"localization/yabloc/yabloc_particle_filter/#output_1","title":"Output","text":"Name Type Description output/weighted_particles
yabloc_particle_filter::msg::ParticleArray
weighted particles debug/gnss_range_marker
visualization_msgs::msg::MarkerArray
gnss weight distribution debug/particles_marker_array
visualization_msgs::msg::MarkerArray
particles visualization. published if visualize
is true"},{"location":"localization/yabloc/yabloc_particle_filter/#parameters_1","title":"Parameters","text":"Name Type Description Default Range acceptable_max_delay float how long to hold the predicted particles 1 N/A visualize boolean whether publish particles as marker_array or not 0 N/A mahalanobis_distance_threshold float if the Mahalanobis distance to the GNSS for particle exceeds this, the correction skips. 30 N/A for_fixed/max_weight float gnss weight distribution used when observation is fixed 5 N/A for_fixed/flat_radius float gnss weight distribution used when observation is fixed 0.5 N/A for_fixed/max_radius float gnss weight distribution used when observation is fixed 10 N/A for_fixed/min_weight float gnss weight distribution used when observation is fixed 0.5 N/A for_not_fixed/max_weight float gnss weight distribution used when observation is not fixed 1 N/A for_not_fixed/flat_radius float gnss weight distribution used when observation is not fixed 5 N/A for_not_fixed/max_radius float gnss weight distribution used when observation is not fixed 20 N/A for_not_fixed/min_weight float gnss weight distribution used when observation is not fixed 0.5 N/A"},{"location":"localization/yabloc/yabloc_particle_filter/#camera_particle_corrector","title":"camera_particle_corrector","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#purpose_2","title":"Purpose","text":"input/predicted_particles
yabloc_particle_filter::msg::ParticleArray
predicted particles input/ll2_bounding_box
sensor_msgs::msg::PointCloud2
road surface markings converted to line segments input/ll2_road_marking
sensor_msgs::msg::PointCloud2
road surface markings converted to line segments input/projected_line_segments_cloud
sensor_msgs::msg::PointCloud2
projected line segments input/pose
geometry_msgs::msg::PoseStamped
reference to retrieve the area map around the self location"},{"location":"localization/yabloc/yabloc_particle_filter/#output_2","title":"Output","text":"Name Type Description output/weighted_particles
yabloc_particle_filter::msg::ParticleArray
weighted particles debug/cost_map_image
sensor_msgs::msg::Image
cost map created from lanelet2 debug/cost_map_range
visualization_msgs::msg::MarkerArray
cost map boundary debug/match_image
sensor_msgs::msg::Image
projected line segments image debug/scored_cloud
sensor_msgs::msg::PointCloud2
weighted 3d line segments debug/scored_post_cloud
sensor_msgs::msg::PointCloud2
weighted 3d line segments which are iffy debug/state_string
std_msgs::msg::String
string describing the node state debug/particles_marker_array
visualization_msgs::msg::MarkerArray
particles visualization. published if visualize
is true"},{"location":"localization/yabloc/yabloc_particle_filter/#parameters_2","title":"Parameters","text":"Name Type Description Default Range acceptable_max_delay float how long to hold the predicted particles 1 N/A visualize boolean whether publish particles as marker_array or not 0 N/A image_size float image size of debug/cost_map_image 800 N/A max_range float width of hierarchical cost map 40 N/A gamma float gamma value of the intensity gradient of the cost map 5 N/A min_prob float minimum particle weight the corrector node gives 0.1 N/A far_weight_gain float exp(-far_weight_gain_ * squared_distance_from_camera)
is the weight gain. If this value is large, nearby road markings are weighted more heavily. 0.001 N/A enabled_at_first boolean If false, this node is not activated at startup; it can be activated later via the service call. 1 N/A"},{"location":"localization/yabloc/yabloc_particle_filter/#services_1","title":"Services","text":"Name Type Description switch_srv
std_srvs::srv::SetBool
activation and deactivation of correction"},{"location":"localization/yabloc/yabloc_pose_initializer/","title":"yabloc_pose_initializer","text":""},{"location":"localization/yabloc/yabloc_pose_initializer/#yabloc_pose_initializer","title":"yabloc_pose_initializer","text":"This package contains a node related to initial pose estimation.
This package requires a pre-trained semantic segmentation model at runtime. The model is usually downloaded by ansible
during the environment preparation phase of the installation, but it is also possible to download it manually. Even if the model is not downloaded, initialization will still complete, but the accuracy may be compromised.
To download and extract the model manually:
$ mkdir -p ~/autoware_data/yabloc_pose_initializer/
$ wget -P ~/autoware_data/yabloc_pose_initializer/ \
https://s3.ap-northeast-2.wasabisys.com/pinto-model-zoo/136_road-segmentation-adas-0001/resources.tar.gz
$ tar xzf ~/autoware_data/yabloc_pose_initializer/resources.tar.gz -C ~/autoware_data/yabloc_pose_initializer/
"},{"location":"localization/yabloc/yabloc_pose_initializer/#note","title":"Note","text":"This package makes use of external code. The trained files are provided by apollo. The trained files are automatically downloaded during env preparation.
Original model URL
https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/road-segmentation-adas-0001
Open Model Zoo is licensed under Apache License Version 2.0.
Converted model URL
https://github.com/PINTO0309/PINTO_model_zoo/tree/main/136_road-segmentation-adas-0001
The model conversion scripts are released under the MIT license.
"},{"location":"localization/yabloc/yabloc_pose_initializer/#special-thanks","title":"Special thanks","text":"input/camera_info
sensor_msgs::msg::CameraInfo
undistorted camera info input/image_raw
sensor_msgs::msg::Image
undistorted camera image input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map"},{"location":"localization/yabloc/yabloc_pose_initializer/#output","title":"Output","text":"Name Type Description output/candidates
visualization_msgs::msg::MarkerArray
initial pose candidates"},{"location":"localization/yabloc/yabloc_pose_initializer/#parameters","title":"Parameters","text":"Name Type Description Default Range angle_resolution float how many divisions of 1 sigma angle range 30 N/A"},{"location":"localization/yabloc/yabloc_pose_initializer/#services","title":"Services","text":"Name Type Description yabloc_align_srv
tier4_localization_msgs::srv::PoseWithCovarianceStamped
initial pose estimation request"},{"location":"map/map_height_fitter/","title":"map_height_fitter","text":""},{"location":"map/map_height_fitter/#map_height_fitter","title":"map_height_fitter","text":"This library fits the given point with the ground of the point cloud map. The map loading operation is switched by the parameter enable_partial_load
of the node specified by map_loader_name
. The node using this library must use multi thread executor.
This package provides the features of loading various maps.
"},{"location":"map/map_loader/#pointcloud_map_loader","title":"pointcloud_map_loader","text":""},{"location":"map/map_loader/#feature","title":"Feature","text":"pointcloud_map_loader
provides pointcloud maps to the other Autoware nodes in various configurations. Currently, it supports the following two types:
NOTE: We strongly recommend using divided maps when using a large pointcloud map, since this enables the latter two features (partial and differential load). Please go through the prerequisites section for more details, and follow the instructions for dividing the map and preparing the metadata.
"},{"location":"map/map_loader/#prerequisites","title":"Prerequisites","text":""},{"location":"map/map_loader/#prerequisites-on-pointcloud-map-files","title":"Prerequisites on pointcloud map file(s)","text":"You may provide either a single .pcd file or multiple .pcd files. If you are using multiple PCD data, it MUST obey the following rules:
map_projection_loader
, in order to be consistent with the lanelet2 map and other packages that converts between local and geodetic coordinates. For more information, please refer to the readme of map_projection_loader
.The metadata should look like this:
x_resolution: 20.0\ny_resolution: 20.0\nA.pcd: [1200, 2500] # -> 1200 < x < 1220, 2500 < y < 2520\nB.pcd: [1220, 2500] # -> 1220 < x < 1240, 2500 < y < 2520\nC.pcd: [1200, 2520] # -> 1200 < x < 1220, 2520 < y < 2540\nD.pcd: [1240, 2520] # -> 1240 < x < 1260, 2520 < y < 2540\n
where,
x_resolution and y_resolution are the dimensions of each grid cell [m].
A.pcd, B.pcd, etc, are the names of the PCD files.
[1200, 2500] are values indicating that, for this PCD file, the x coordinates lie between 1200 and 1220 (x_coordinate + x_resolution) and the y coordinates lie between 2500 and 2520 (y_coordinate + y_resolution); see the sketch after this list.
You may use pointcloud_divider from MAP IV for dividing a pointcloud map as well as generating the compatible metadata.yaml.
sample-map-rosbag\n\u251c\u2500\u2500 lanelet2_map.osm\n\u251c\u2500\u2500 pointcloud_map.pcd\n
If you have multiple rosbags, an example directory structure would be as follows. Note that you need to have a metadata when you have multiple pointcloud map files.
sample-map-rosbag\n\u251c\u2500\u2500 lanelet2_map.osm\n\u251c\u2500\u2500 pointcloud_map.pcd\n\u2502 \u251c\u2500\u2500 A.pcd\n\u2502 \u251c\u2500\u2500 B.pcd\n\u2502 \u251c\u2500\u2500 C.pcd\n\u2502 \u2514\u2500\u2500 ...\n\u251c\u2500\u2500 map_projector_info.yaml\n\u2514\u2500\u2500 pointcloud_map_metadata.yaml\n
"},{"location":"map/map_loader/#specific-features","title":"Specific features","text":""},{"location":"map/map_loader/#publish-raw-pointcloud-map-ros-2-topic","title":"Publish raw pointcloud map (ROS 2 topic)","text":"The node publishes the raw pointcloud map loaded from the .pcd
file(s).
The node publishes the downsampled pointcloud map loaded from the .pcd
file(s). You can specify the downsample resolution by changing the leaf_size
parameter.
The node publishes the pointcloud metadata attached with an ID. Metadata is loaded from the .yaml
file. Please see the description of PointCloudMapMetaData.msg
for details.
Here, we assume that the pointcloud maps are divided into grids.
Given a query from a client node, the node sends a set of pointcloud maps that overlaps with the queried area. Please see the description of GetPartialPointCloudMap.srv
for details.
Here, we assume that the pointcloud maps are divided into grids.
Given a query and set of map IDs, the node sends a set of pointcloud maps that overlap with the queried area and are not included in the set of map IDs. Please see the description of GetDifferentialPointCloudMap.srv
for details.
Here, we assume that the pointcloud maps are divided into grids.
Given IDs query from a client node, the node sends a set of pointcloud maps (each of which attached with unique ID) specified by query. Please see the description of GetSelectedPointCloudMap.srv
for details.
output/pointcloud_map
(sensor_msgs/msg/PointCloud2) : Raw pointcloud mapoutput/pointcloud_map_metadata
(autoware_map_msgs/msg/PointCloudMapMetaData) : Metadata of pointcloud mapoutput/debug/downsampled_pointcloud_map
(sensor_msgs/msg/PointCloud2) : Downsampled pointcloud mapservice/get_partial_pcd_map
(autoware_map_msgs/srv/GetPartialPointCloudMap) : Partial pointcloud mapservice/get_differential_pcd_map
(autoware_map_msgs/srv/GetDifferentialPointCloudMap) : Differential pointcloud mapservice/get_selected_pcd_map
(autoware_map_msgs/srv/GetSelectedPointCloudMap) : Selected pointcloud maplanelet2_map_loader loads Lanelet2 file and publishes the map data as autoware_auto_mapping_msgs/HADMapBin message. The node projects lan/lon coordinates into arbitrary coordinates defined in /map/map_projector_info
from map_projection_loader
. Please see tier4_autoware_msgs/msg/MapProjectorInfo.msg for supported projector types.
ros2 run map_loader lanelet2_map_loader --ros-args -p lanelet2_map_path:=path/to/map.osm
lanelet2_map_visualization visualizes autoware_auto_mapping_msgs/HADMapBin messages into visualization_msgs/MarkerArray.
"},{"location":"map/map_loader/#how-to-run_1","title":"How to Run","text":"ros2 run map_loader lanelet2_map_visualization
map_projection_loader
is responsible for publishing map_projector_info
that defines in which kind of coordinate Autoware is operating. This is necessary information especially when you want to convert from global (geoid) to local coordinate or the other way around.
map_projector_info_path
DOES exist, this node loads it and publishes the map projection information accordingly.map_projector_info_path
does NOT exist, the node assumes that you are using the MGRS
projection type, and loads the lanelet2 map instead to extract the MGRS grid.You need to provide a YAML file, namely map_projector_info.yaml
under the map_path
directory. For pointcloud_map_metadata.yaml
, please refer to the Readme of map_loader
.
sample-map-rosbag\n\u251c\u2500\u2500 lanelet2_map.osm\n\u251c\u2500\u2500 pointcloud_map.pcd\n\u251c\u2500\u2500 map_projector_info.yaml\n\u2514\u2500\u2500 pointcloud_map_metadata.yaml\n
"},{"location":"map/map_projection_loader/#using-local-coordinate","title":"Using local coordinate","text":"# map_projector_info.yaml\nprojector_type: local\n
"},{"location":"map/map_projection_loader/#limitation","title":"Limitation","text":"The functionality that requires latitude and longitude will become unavailable.
The currently identified unavailable functionalities are:
If you want to use MGRS, please specify the MGRS grid as well.
# map_projector_info.yaml\nprojector_type: MGRS\nvertical_datum: WGS84\nmgrs_grid: 54SUE\n
"},{"location":"map/map_projection_loader/#limitation_1","title":"Limitation","text":"It cannot be used with maps that span across two or more MGRS grids. Please use it only when it falls within the scope of a single MGRS grid.
"},{"location":"map/map_projection_loader/#using-localcartesianutm","title":"Using LocalCartesianUTM","text":"If you want to use local cartesian UTM, please specify the map origin as well.
# map_projector_info.yaml\nprojector_type: LocalCartesianUTM\nvertical_datum: WGS84\nmap_origin:\nlatitude: 35.6762 # [deg]\nlongitude: 139.6503 # [deg]\naltitude: 0.0 # [m]\n
"},{"location":"map/map_projection_loader/#using-transversemercator","title":"Using TransverseMercator","text":"If you want to use Transverse Mercator projection, please specify the map origin as well.
# map_projector_info.yaml\nprojector_type: TransverseMercator\nvertical_datum: WGS84\nmap_origin:\nlatitude: 35.6762 # [deg]\nlongitude: 139.6503 # [deg]\naltitude: 0.0 # [m]\n
"},{"location":"map/map_projection_loader/#published-topics","title":"Published Topics","text":"map_projector_info_path
does not exist)"},{"location":"map/map_tf_generator/Readme/","title":"map_tf_generator","text":""},{"location":"map/map_tf_generator/Readme/#map_tf_generator","title":"map_tf_generator","text":""},{"location":"map/map_tf_generator/Readme/#purpose","title":"Purpose","text":"The nodes in this package broadcast the viewer
frame for visualization of the map in RViz.
Note that there is no module to need the viewer
frame and this is used only for visualization.
The following are the supported methods to calculate the position of the viewer
frame:
pcd_map_tf_generator_node
outputs the geometric center of all points in the PCD.vector_map_tf_generator_node
outputs the geometric center of all points in the point layer./map/pointcloud_map
sensor_msgs::msg::PointCloud2
Subscribe pointcloud map to calculate position of viewer
frames"},{"location":"map/map_tf_generator/Readme/#vector_map_tf_generator","title":"vector_map_tf_generator","text":"Name Type Description /map/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
Subscribe vector map to calculate position of viewer
frames"},{"location":"map/map_tf_generator/Readme/#output","title":"Output","text":"Name Type Description /tf_static
tf2_msgs/msg/TFMessage
Broadcast viewer
frames"},{"location":"map/map_tf_generator/Readme/#parameters","title":"Parameters","text":""},{"location":"map/map_tf_generator/Readme/#node-parameters","title":"Node Parameters","text":"None
"},{"location":"map/map_tf_generator/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanationviewer_frame
string viewer Name of viewer
frame map_frame
string map The parent frame name of viewer frame"},{"location":"map/map_tf_generator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"perception/bytetrack/","title":"bytetrack","text":""},{"location":"perception/bytetrack/#bytetrack","title":"bytetrack","text":""},{"location":"perception/bytetrack/#purpose","title":"Purpose","text":"The core algorithm, named ByteTrack
, mainly aims to perform multi-object tracking. Because the algorithm associates almost every detection box including ones with low detection scores, the number of false negatives is expected to decrease by using it.
demo video
"},{"location":"perception/bytetrack/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/bytetrack/#cite","title":"Cite","text":"The paper just says that the 2d tracking algorithm is a simple Kalman filter. Original codes use the top-left-corner
and aspect ratio
and size
as the state vector.
This is sometimes unstable because the aspectratio can be changed by the occlusion. So, we use the top-left
and size
as the state vector.
Kalman filter settings can be controlled by the parameters in config/bytetrack_node.param.yaml
.
in/rect
tier4_perception_msgs/DetectedObjectsWithFeature
The detected objects with 2D bounding boxes"},{"location":"perception/bytetrack/#output","title":"Output","text":"Name Type Description out/objects
tier4_perception_msgs/DetectedObjectsWithFeature
The detected objects with 2D bounding boxes out/objects/debug/uuid
tier4_perception_msgs/DynamicObjectArray
The universally unique identifiers (UUID) for each object"},{"location":"perception/bytetrack/#bytetrack_visualizer","title":"bytetrack_visualizer","text":""},{"location":"perception/bytetrack/#input_1","title":"Input","text":"Name Type Description in/image
sensor_msgs/Image
or sensor_msgs/CompressedImage
The input image on which object detection is performed in/rect
tier4_perception_msgs/DetectedObjectsWithFeature
The detected objects with 2D bounding boxes in/uuid
tier4_perception_msgs/DynamicObjectArray
The universally unique identifiers (UUID) for each object"},{"location":"perception/bytetrack/#output_1","title":"Output","text":"Name Type Description out/image
sensor_msgs/Image
The image that detection bounding boxes and their UUIDs are drawn"},{"location":"perception/bytetrack/#parameters","title":"Parameters","text":""},{"location":"perception/bytetrack/#bytetrack_node_1","title":"bytetrack_node","text":"Name Type Default Value Description track_buffer_length
int 30 The frame count that a tracklet is considered to be lost"},{"location":"perception/bytetrack/#bytetrack_visualizer_1","title":"bytetrack_visualizer","text":"Name Type Default Value Description use_raw
bool false The flag for the node to switch sensor_msgs/Image
or sensor_msgs/CompressedImage
as input"},{"location":"perception/bytetrack/#assumptionsknown-limits","title":"Assumptions/Known limits","text":""},{"location":"perception/bytetrack/#reference-repositories","title":"Reference repositories","text":"The codes under the lib
directory are copied from the original codes and modified. The original codes belong to the MIT license stated as follows, while this ported packages are provided with Apache License 2.0:
MIT License
Copyright (c) 2021 Yifu Zhang
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"},{"location":"perception/cluster_merger/","title":"cluster merger","text":""},{"location":"perception/cluster_merger/#cluster-merger","title":"cluster merger","text":""},{"location":"perception/cluster_merger/#purpose","title":"Purpose","text":"cluster_merger is a package for merging pointcloud clusters as detected objects with feature type.
"},{"location":"perception/cluster_merger/#inner-working-algorithms","title":"Inner-working / Algorithms","text":"The clusters of merged topics are simply concatenated from clusters of input topics.
"},{"location":"perception/cluster_merger/#input-output","title":"Input / Output","text":""},{"location":"perception/cluster_merger/#input","title":"Input","text":"Name Type Descriptioninput/cluster0
tier4_perception_msgs::msg::DetectedObjectsWithFeature
pointcloud clusters input/cluster1
tier4_perception_msgs::msg::DetectedObjectsWithFeature
pointcloud clusters"},{"location":"perception/cluster_merger/#output","title":"Output","text":"Name Type Description output/clusters
autoware_auto_perception_msgs::msg::DetectedObjects
merged clusters"},{"location":"perception/cluster_merger/#parameters","title":"Parameters","text":"Name Type Description Default value output_frame_id
string The header frame_id of output topic. base_link"},{"location":"perception/cluster_merger/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/cluster_merger/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/cluster_merger/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/cluster_merger/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/cluster_merger/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/compare_map_segmentation/","title":"compare_map_segmentation","text":""},{"location":"perception/compare_map_segmentation/#compare_map_segmentation","title":"compare_map_segmentation","text":""},{"location":"perception/compare_map_segmentation/#purpose","title":"Purpose","text":"The compare_map_segmentation
is a node that filters the ground points from the input pointcloud by using map info (e.g. pcd, elevation map or split map pointcloud from map_loader interface).
Compare the z of the input points with the value of elevation_map. The height difference is calculated by the binary integration of neighboring cells. Remove points whose height difference is below the height_diff_thresh
.
"},{"location":"perception/compare_map_segmentation/#distance-based-compare-map-filter","title":"Distance Based Compare Map Filter","text":"
This filter compares the input pointcloud with the map pointcloud using the nearestKSearch
function of kdtree
and removes points that are close to the map point cloud. The map pointcloud can be loaded statically at once at the beginning or dynamically as the vehicle moves.
The filter loads the map point cloud, which can be loaded statically at the beginning or dynamically during vehicle movement, and creates a voxel grid of the map point cloud. The filter uses the getCentroidIndexAt function in combination with the getGridCoordinates function from the VoxelGrid class to find input points that are inside the voxel grid and removes them.
"},{"location":"perception/compare_map_segmentation/#voxel-based-compare-map-filter","title":"Voxel Based Compare Map Filter","text":"The filter loads the map pointcloud (static loading whole map at once at beginning or dynamic loading during vehicle moving) and utilizes VoxelGrid to downsample map pointcloud.
For each point of input pointcloud, the filter use getCentroidIndexAt
combine with getGridCoordinates
function from VoxelGrid class to check if the downsampled map point existing surrounding input points. Remove the input point which has downsampled map point in voxels containing or being close to the point.
This filter is a combination of the distance_based_compare_map_filter and voxel_based_approximate_compare_map_filter. The filter loads the map point cloud, which can be loaded statically at the beginning or dynamically during vehicle movement, and creates a voxel grid and a k-d tree of the map point cloud. The filter uses the getCentroidIndexAt function in combination with the getGridCoordinates function from the VoxelGrid class to find input points that are inside the voxel grid and removes them. For points that do not belong to any voxel grid, they are compared again with the map point cloud using the radiusSearch function of the k-d tree and are removed if they are close enough to the map.
"},{"location":"perception/compare_map_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/compare_map_segmentation/#compare-elevation-map-filter_1","title":"Compare Elevation Map Filter","text":""},{"location":"perception/compare_map_segmentation/#input","title":"Input","text":"Name Type Description~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/elevation_map
grid_map::msg::GridMap
elevation map"},{"location":"perception/compare_map_segmentation/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"perception/compare_map_segmentation/#parameters","title":"Parameters","text":"Name Type Description Default value map_layer_name
string elevation map layer name elevation map_frame
float frame_id of the map that is temporarily used before elevation_map is subscribed map height_diff_thresh
float Remove points whose height difference is below this value [m] 0.15"},{"location":"perception/compare_map_segmentation/#other-filters","title":"Other Filters","text":""},{"location":"perception/compare_map_segmentation/#input_1","title":"Input","text":"Name Type Description ~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/map
sensor_msgs::msg::PointCloud2
map (in case static map loading) /localization/kinematic_state
nav_msgs::msg::Odometry
current ego-vehicle pose (in case dynamic map loading)"},{"location":"perception/compare_map_segmentation/#output_1","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"perception/compare_map_segmentation/#parameters_1","title":"Parameters","text":"Name Type Description Default value use_dynamic_map_loading
bool map loading mode selection, true
for dynamic map loading, false
for static map loading, recommended for no-split map pointcloud true distance_threshold
float Threshold distance to compare input points with map points [m] 0.5 map_update_distance_threshold
float Threshold of vehicle movement distance when map update is necessary (in dynamic map loading) [m] 10.0 map_loader_radius
float Radius of map need to be loaded (in dynamic map loading) [m] 150.0 timer_interval_ms
int Timer interval to check if the map update is necessary (in dynamic map loading) [ms] 100 publish_debug_pcd
bool Enable to publish voxelized updated map in debug/downsampled_map/pointcloud
for debugging. It might cause additional computation cost false downsize_ratio_z_axis
double Positive ratio to reduce voxel_leaf_size and neighbor point distance threshold in z axis 0.5"},{"location":"perception/compare_map_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/compare_map_segmentation/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/compare_map_segmentation/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/compare_map_segmentation/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/compare_map_segmentation/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/crosswalk_traffic_light_estimator/","title":"crosswalk_traffic_light_estimator","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#crosswalk_traffic_light_estimator","title":"crosswalk_traffic_light_estimator","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#purpose","title":"Purpose","text":"crosswalk_traffic_light_estimator
is a module that estimates pedestrian traffic signals from HDMap and detected vehicle traffic signals.
~/input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map ~/input/route
autoware_planning_msgs::msg::LaneletRoute
route ~/input/classified/traffic_signals
tier4_perception_msgs::msg::TrafficSignalArray
classified signals"},{"location":"perception/crosswalk_traffic_light_estimator/#output","title":"Output","text":"Name Type Description ~/output/traffic_signals
tier4_perception_msgs::msg::TrafficSignalArray
output that contains estimated pedestrian traffic signals"},{"location":"perception/crosswalk_traffic_light_estimator/#parameters","title":"Parameters","text":"Name Type Description Default value use_last_detect_color
bool
If this parameter is true
, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is false
, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) true
last_detect_color_hold_time
double
The time threshold to hold for last detect color. 2.0
"},{"location":"perception/crosswalk_traffic_light_estimator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"If traffic between pedestrians and vehicles is controlled by traffic signals, the crosswalk traffic signal maybe RED in order to prevent pedestrian from crossing when the following conditions are satisfied.
"},{"location":"perception/crosswalk_traffic_light_estimator/#situation1","title":"Situation1","text":"The detected_object_feature_remover
is a package to convert topic-type from DetectedObjectWithFeatureArray
to DetectedObjects
.
~/input
tier4_perception_msgs::msg::DetectedObjectWithFeatureArray
detected objects with feature field"},{"location":"perception/detected_object_feature_remover/#output","title":"Output","text":"Name Type Description ~/output
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects"},{"location":"perception/detected_object_feature_remover/#parameters","title":"Parameters","text":"None
"},{"location":"perception/detected_object_feature_remover/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/detected_object_validation/","title":"detected_object_validation","text":""},{"location":"perception/detected_object_validation/#detected_object_validation","title":"detected_object_validation","text":""},{"location":"perception/detected_object_validation/#purpose","title":"Purpose","text":"The purpose of this package is to eliminate obvious false positives of DetectedObjects.
"},{"location":"perception/detected_object_validation/#referencesexternal-links","title":"References/External links","text":"The object_lanelet_filter
is a node that filters detected object by using vector map. The objects only inside of the vector map will be published.
input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map input/object
autoware_auto_perception_msgs::msg::DetectedObjects
input detected objects"},{"location":"perception/detected_object_validation/object-lanelet-filter/#output","title":"Output","text":"Name Type Description output/object
autoware_auto_perception_msgs::msg::DetectedObjects
filtered detected objects"},{"location":"perception/detected_object_validation/object-lanelet-filter/#parameters","title":"Parameters","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN
bool false If true, unknown objects are filtered. filter_target_label.CAR
bool false If true, car objects are filtered. filter_target_label.TRUCK
bool false If true, truck objects are filtered. filter_target_label.BUS
bool false If true, bus objects are filtered. filter_target_label.TRAILER
bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE
bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE
bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN
bool false If true, pedestrian objects are filtered."},{"location":"perception/detected_object_validation/object-lanelet-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The lanelet filter is performed based on the shape polygon and bounding box of the objects.
"},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/detected_object_validation/object-position-filter/","title":"object_position_filter","text":""},{"location":"perception/detected_object_validation/object-position-filter/#object_position_filter","title":"object_position_filter","text":""},{"location":"perception/detected_object_validation/object-position-filter/#purpose","title":"Purpose","text":"The object_position_filter
is a node that filters detected object based on x,y values. The objects only inside of the x, y bound will be published.
input/object
autoware_auto_perception_msgs::msg::DetectedObjects
input detected objects"},{"location":"perception/detected_object_validation/object-position-filter/#output","title":"Output","text":"Name Type Description output/object
autoware_auto_perception_msgs::msg::DetectedObjects
filtered detected objects"},{"location":"perception/detected_object_validation/object-position-filter/#parameters","title":"Parameters","text":""},{"location":"perception/detected_object_validation/object-position-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN
bool false If true, unknown objects are filtered. filter_target_label.CAR
bool false If true, car objects are filtered. filter_target_label.TRUCK
bool false If true, truck objects are filtered. filter_target_label.BUS
bool false If true, bus objects are filtered. filter_target_label.TRAILER
bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE
bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE
bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN
bool false If true, pedestrian objects are filtered. upper_bound_x
float 100.00 Bound for filtering. Only used if filter_by_xy_position is true lower_bound_x
float 0.00 Bound for filtering. Only used if filter_by_xy_position is true upper_bound_y
float 50.00 Bound for filtering. Only used if filter_by_xy_position is true lower_bound_y
float -50.00 Bound for filtering. Only used if filter_by_xy_position is true"},{"location":"perception/detected_object_validation/object-position-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Filtering is performed based on the center position of the object.
"},{"location":"perception/detected_object_validation/object-position-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/detected_object_validation/object-position-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/detected_object_validation/object-position-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/detected_object_validation/object-position-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/","title":"obstacle pointcloud based validator","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#obstacle-pointcloud-based-validator","title":"obstacle pointcloud based validator","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"If the number of obstacle point groups in the DetectedObjects is small, it is considered a false positive and removed. The obstacle point cloud can be a point cloud after compare map filtering or a ground filtered point cloud.
In the debug image above, the red DetectedObject is the validated object. The blue object is the deleted object.
"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#input","title":"Input","text":"Name Type Description~/input/detected_objects
autoware_auto_perception_msgs::msg::DetectedObjects
DetectedObjects ~/input/obstacle_pointcloud
sensor_msgs::msg::PointCloud2
Obstacle point cloud of dynamic objects"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs::msg::DetectedObjects
validated DetectedObjects"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#parameters","title":"Parameters","text":"Name Type Description using_2d_validator
bool The xy-plane projected (2D) obstacle point clouds will be used for validation min_points_num
int The minimum number of obstacle point clouds in DetectedObjects max_points_num
int The max number of obstacle point clouds in DetectedObjects min_points_and_distance_ratio
float Threshold value of the number of point clouds per object when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink. enable_debugger
bool Whether to create debug topics or not?"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Currently, only represented objects as BoundingBox or Cylinder are supported.
"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/","title":"occupancy grid based validator","text":""},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#occupancy-grid-based-validator","title":"occupancy grid based validator","text":""},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Compare the occupancy grid map with the DetectedObject, and if a larger percentage of obstacles are in freespace, delete them.
Basically, it takes an occupancy grid map as input and generates a binary image of freespace or other.
A mask image is generated for each DetectedObject and the average value (percentage) in the mask image is calculated. If the percentage is low, it is deleted.
"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#input","title":"Input","text":"Name Type Description~/input/detected_objects
autoware_auto_perception_msgs::msg::DetectedObjects
DetectedObjects ~/input/occupancy_grid_map
nav_msgs::msg::OccupancyGrid
OccupancyGrid with no time series calculation is preferred."},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs::msg::DetectedObjects
validated DetectedObjects"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#parameters","title":"Parameters","text":"Name Type Description mean_threshold
float The percentage threshold of allowed non-freespace. enable_debug
bool Whether to display debug images or not?"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Currently, only vehicle represented as BoundingBox are supported.
"},{"location":"perception/detection_by_tracker/","title":"detection_by_tracker","text":""},{"location":"perception/detection_by_tracker/#detection_by_tracker","title":"detection_by_tracker","text":""},{"location":"perception/detection_by_tracker/#purpose","title":"Purpose","text":"This package feeds back the tracked objects to the detection module to keep it stable and keep detecting objects.
The detection by tracker takes as input an unknown object containing a cluster of points and a tracker. The unknown object is optimized to fit the size of the tracker so that it can continue to be detected.
"},{"location":"perception/detection_by_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The detection by tracker receives an unknown object containing a point cloud and a tracker, where the unknown object is mainly shape-fitted using euclidean clustering. Shape fitting using euclidean clustering and other methods has a problem called under segmentation and over segmentation.
Adapted from [3]
Simply looking at the overlap between the unknown object and the tracker does not work. We need to take measures for under segmentation and over segmentation.
"},{"location":"perception/detection_by_tracker/#policy-for-dealing-with-over-segmentation","title":"Policy for dealing with over segmentation","text":"~/input/initial_objects
tier4_perception_msgs::msg::DetectedObjectsWithFeature
unknown objects ~/input/tracked_objects
tier4_perception_msgs::msg::TrackedObjects
trackers"},{"location":"perception/detection_by_tracker/#output","title":"Output","text":"Name Type Description ~/output
autoware_auto_perception_msgs::msg::DetectedObjects
objects"},{"location":"perception/detection_by_tracker/#parameters","title":"Parameters","text":"Name Type Description Default value tracker_ignore_label.UNKNOWN
bool
If true, the node will ignore the tracker if its label is unknown. true
tracker_ignore_label.CAR
bool
If true, the node will ignore the tracker if its label is CAR. false
tracker_ignore_label.PEDESTRIAN
bool
If true, the node will ignore the tracker if its label is pedestrian. false
tracker_ignore_label.BICYCLE
bool
If true, the node will ignore the tracker if its label is bicycle. false
tracker_ignore_label.MOTORCYCLE
bool
If true, the node will ignore the tracker if its label is MOTORCYCLE. false
tracker_ignore_label.BUS
bool
If true, the node will ignore the tracker if its label is bus. false
tracker_ignore_label.TRUCK
bool
If true, the node will ignore the tracker if its label is truck. false
tracker_ignore_label.TRAILER
bool
If true, the node will ignore the tracker if its label is TRAILER. false
"},{"location":"perception/detection_by_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/detection_by_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/detection_by_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/detection_by_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"[1] M. Himmelsbach, et al. \"Tracking and classification of arbitrary objects with bottom-up/top-down detection.\" (2012).
[2] Arya Senna Abdul Rachman, Arya. \"3D-LIDAR Multi Object Tracking for Autonomous Driving: Multi-target Detection and Tracking under Urban Road Uncertainties.\" (2017).
[3] David Held, et al. \"A Probabilistic Framework for Real-time 3D Segmentation using Spatial, Temporal, and Semantic Cues.\" (2016).
"},{"location":"perception/detection_by_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/elevation_map_loader/","title":"elevation_map_loader","text":""},{"location":"perception/elevation_map_loader/#elevation_map_loader","title":"elevation_map_loader","text":""},{"location":"perception/elevation_map_loader/#purpose","title":"Purpose","text":"This package provides elevation map for compare_map_segmentation.
"},{"location":"perception/elevation_map_loader/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Generate elevation_map from subscribed pointcloud_map and vector_map and publish it. Save the generated elevation_map locally and load it from next time.
The elevation value of each cell is the average value of z of the points of the lowest cluster. Cells with No elevation value can be inpainted using the values of neighboring cells.
"},{"location":"perception/elevation_map_loader/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/elevation_map_loader/#input","title":"Input","text":"Name Type Description
input/pointcloud_map
sensor_msgs::msg::PointCloud2
The point cloud map input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
(Optional) The binary data of lanelet2 map input/pointcloud_map_metadata
autoware_map_msgs::msg::PointCloudMapMetaData
(Optional) The metadata of point cloud map"},{"location":"perception/elevation_map_loader/#output","title":"Output","text":"Name Type Description output/elevation_map
grid_map_msgs::msg::GridMap
The elevation map output/elevation_map_cloud
sensor_msgs::msg::PointCloud2
(Optional) The point cloud generated from the value of elevation map"},{"location":"perception/elevation_map_loader/#service","title":"Service","text":"Name Type Description service/get_selected_pcd_map
autoware_map_msgs::srv::GetSelectedPointCloudMap
(Optional) service to request point cloud map. If pointcloud_map_loader uses selected pointcloud map loading via ROS 2 service, use this."},{"location":"perception/elevation_map_loader/#parameters","title":"Parameters","text":""},{"location":"perception/elevation_map_loader/#node-parameters","title":"Node parameters","text":"Name Type Description Default value map_layer_name std::string elevation_map layer name elevation param_file_path std::string GridMap parameters config path_default elevation_map_directory std::string elevation_map file (bag2) path_default map_frame std::string map_frame when loading elevation_map file map use_inpaint bool Whether to inpaint empty cells true inpaint_radius float Radius of a circular neighborhood of each point inpainted that is considered by the algorithm [m] 0.3 use_elevation_map_cloud_publisher bool Whether to publish output/elevation_map_cloud
false use_lane_filter bool Whether to filter elevation_map with vector_map false lane_margin float Margin distance from the lane polygon of the area to be included in the inpainting mask [m]. Used only when use_lane_filter=True. 0.0 use_sequential_load bool Whether to get point cloud map by service false sequential_map_load_num int The number of point cloud maps to load at once (only used when use_sequential_load is set true). This should not be larger than number of all point cloud map cells. 1"},{"location":"perception/elevation_map_loader/#gridmap-parameters","title":"GridMap parameters","text":"The parameters are described on config/elevation_map_parameters.yaml
.
See: https://github.com/ANYbotics/grid_map/tree/ros2/grid_map_pcl
Resulting grid map parameters.
Name Type Description Default value pcl_grid_map_extraction/grid_map/min_num_points_per_cell int Minimum number of points in the point cloud that have to fall within any of the grid map cells. Otherwise the cell elevation will be set to NaN. 3 pcl_grid_map_extraction/grid_map/resolution float Resolution of the grid map. Width and length are computed automatically. 0.3 pcl_grid_map_extraction/grid_map/height_type int The parameter that determine the elevation of a cell0: Smallest value among the average values of each cluster
, 1: Mean value of the cluster with the most points
1 pcl_grid_map_extraction/grid_map/height_thresh float Height range from the smallest cluster (Only for height_type 1) 1.0"},{"location":"perception/elevation_map_loader/#point-cloud-pre-processing-parameters","title":"Point Cloud Pre-processing Parameters","text":""},{"location":"perception/elevation_map_loader/#rigid-body-transform-parameters","title":"Rigid body transform parameters","text":"Rigid body transform that is applied to the point cloud before computing elevation.
Name Type Description Default value pcl_grid_map_extraction/cloud_transform/translation float Translation (xyz) that is applied to the input point cloud before computing elevation. 0.0 pcl_grid_map_extraction/cloud_transform/rotation float Rotation (intrinsic rotation, convention X-Y'-Z'') that is applied to the input point cloud before computing elevation. 0.0"},{"location":"perception/elevation_map_loader/#cluster-extraction-parameters","title":"Cluster extraction parameters","text":"Cluster extraction is based on pcl algorithms. See https://pointclouds.org/documentation/tutorials/cluster_extraction.html for more details.
Name Type Description Default value pcl_grid_map_extraction/cluster_extraction/cluster_tolerance float Distance between points below which they will still be considered part of one cluster. 0.2 pcl_grid_map_extraction/cluster_extraction/min_num_points int Min number of points that a cluster needs to have (otherwise it will be discarded). 3 pcl_grid_map_extraction/cluster_extraction/max_num_points int Max number of points that a cluster can have (otherwise it will be discarded). 1000000"},{"location":"perception/elevation_map_loader/#outlier-removal-parameters","title":"Outlier removal parameters","text":"See https://pointclouds.org/documentation/tutorials/statistical_outlier.html for more explanation on outlier removal.
Name Type Description Default value pcl_grid_map_extraction/outlier_removal/is_remove_outliers float Whether to perform statistical outlier removal. false pcl_grid_map_extraction/outlier_removal/mean_K float Number of neighbors to analyze for estimating statistics of a point. 10 pcl_grid_map_extraction/outlier_removal/stddev_threshold float Number of standard deviations under which points are considered to be inliers. 1.0"},{"location":"perception/elevation_map_loader/#subsampling-parameters","title":"Subsampling parameters","text":"See https://pointclouds.org/documentation/tutorials/voxel_grid.html for more explanation on point cloud downsampling.
Name Type Description Default value pcl_grid_map_extraction/downsampling/is_downsample_cloud bool Whether to perform downsampling or not. false pcl_grid_map_extraction/downsampling/voxel_size float Voxel sizes (xyz) in meters. 0.02"},{"location":"perception/euclidean_cluster/","title":"euclidean_cluster","text":""},{"location":"perception/euclidean_cluster/#euclidean_cluster","title":"euclidean_cluster","text":""},{"location":"perception/euclidean_cluster/#purpose","title":"Purpose","text":"euclidean_cluster is a package for clustering points into smaller parts to classify objects.
This package has two clustering methods: euclidean_cluster
and voxel_grid_based_euclidean_cluster
.
pcl::EuclideanClusterExtraction
is applied to points. See official document for details.
pcl::VoxelGrid
.pcl::EuclideanClusterExtraction
.input
sensor_msgs::msg::PointCloud2
input pointcloud"},{"location":"perception/euclidean_cluster/#output","title":"Output","text":"Name Type Description output
tier4_perception_msgs::msg::DetectedObjectsWithFeature
cluster pointcloud debug/clusters
sensor_msgs::msg::PointCloud2
colored cluster pointcloud for visualization"},{"location":"perception/euclidean_cluster/#parameters","title":"Parameters","text":""},{"location":"perception/euclidean_cluster/#core-parameters","title":"Core Parameters","text":""},{"location":"perception/euclidean_cluster/#euclidean_cluster_2","title":"euclidean_cluster","text":"Name Type Description use_height
bool use point.z for clustering min_cluster_size
int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size
int the maximum number of points that a cluster needs to contain in order to be considered valid tolerance
float the spatial cluster tolerance as a measure in the L2 Euclidean space"},{"location":"perception/euclidean_cluster/#voxel_grid_based_euclidean_cluster_1","title":"voxel_grid_based_euclidean_cluster","text":"Name Type Description use_height
bool use point.z for clustering min_cluster_size
int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size
int the maximum number of points that a cluster needs to contain in order to be considered valid tolerance
float the spatial cluster tolerance as a measure in the L2 Euclidean space voxel_leaf_size
float the voxel leaf size of x and y min_points_number_per_voxel
int the minimum number of points for a voxel"},{"location":"perception/euclidean_cluster/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/euclidean_cluster/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/euclidean_cluster/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/euclidean_cluster/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/euclidean_cluster/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":"The use_height
option of voxel_grid_based_euclidean_cluster
isn't implemented yet.
This package contains a front vehicle velocity estimation for offline perception module analysis. This package can:
~/input/objects
autoware_auto_perception_msgs/msg/DetectedObject.msg 3D detected objects. ~/input/pointcloud
sensor_msgs/msg/PointCloud2.msg LiDAR pointcloud. ~/input/odometry
nav_msgs::msg::Odometry.msg Odometry data."},{"location":"perception/front_vehicle_velocity_estimator/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected object with twist. ~/debug/nearest_neighbor_pointcloud
sensor_msgs/msg/PointCloud2.msg The pointcloud msg of nearest neighbor point."},{"location":"perception/front_vehicle_velocity_estimator/#node-parameter","title":"Node parameter","text":"Name Type Description Default value update_rate_hz
double The update rate [hz]. 10.0"},{"location":"perception/front_vehicle_velocity_estimator/#core-parameter","title":"Core parameter","text":"Name Type Description Default value moving_average_num
int The moving average number for velocity estimation. 1 threshold_pointcloud_z_high
float The threshold for z position value of point when choosing nearest neighbor point within front vehicle [m]. If z > threshold_pointcloud_z_high
, the point is considered to be noise. 1.0f threshold_pointcloud_z_low
float The threshold for z position value of point when choosing nearest neighbor point within front vehicle [m]. If z < threshold_pointcloud_z_low
, the point is considered to be noise such as the ground. 0.6f threshold_relative_velocity
double The threshold for min and max of estimated relative velocity (\\(v_{re}\\)) [m/s]. If \\(v_{re}\\) < - threshold_relative_velocity
, then \\(v_{re}\\) = - threshold_relative_velocity
. If \\(v_{re}\\) > threshold_relative_velocity
, then \\(v_{re}\\) = threshold_relative_velocity
. 10.0 threshold_absolute_velocity
double The threshold for max of estimated absolute velocity (\\(v_{ae}\\)) [m/s]. If \\(v_{ae}\\) > threshold_absolute_velocity
, then \\(v_{ae}\\) = threshold_absolute_velocity
. 20.0"},{"location":"perception/ground_segmentation/","title":"ground_segmentation","text":""},{"location":"perception/ground_segmentation/#ground_segmentation","title":"ground_segmentation","text":""},{"location":"perception/ground_segmentation/#purpose","title":"Purpose","text":"The ground_segmentation
is a node that removes the ground points from the input pointcloud.
A detailed description of each ground segmentation algorithm is given in the following links.
Filter Name Description Detail ray_ground_filter A method of removing the ground based on the geometrical relationship between points lined up along radial rays link scan_ground_filter Almost the same method as ray_ground_filter
, but with slightly improved performance link ransac_ground_filter A method of removing the ground by approximating the ground to a plane link"},{"location":"perception/ground_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/ground_segmentation/#input","title":"Input","text":"Name Type Description ~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/indices
pcl_msgs::msg::Indices
reference indices"},{"location":"perception/ground_segmentation/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"perception/ground_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/ground_segmentation/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description input_frame
string \" \" input frame id output_frame
string \" \" output frame id max_queue_size
int 5 max queue size of input/output topics use_indices
bool false flag to use pointcloud indices latched_indices
bool false flag to latch pointcloud indices approximate_sync
bool false flag to use approximate sync option"},{"location":"perception/ground_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"pointcloud_preprocessor::Filter
is implemented based on pcl_perception [1] because of this issue.
[1] https://github.com/ros-perception/perception_pcl/blob/ros2/pcl_ros/src/pcl_ros/filters/filter.cpp
"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/","title":"RANSAC Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#ransac-ground-filter","title":"RANSAC Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#purpose","title":"Purpose","text":"The purpose of this node is that remove the ground points from the input pointcloud.
"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Apply the input points to the plane, and set the points at a certain distance from the plane as points other than the ground. Normally, whn using this method, the input points is filtered so that it is almost flat before use. Since the drivable area is often flat, there are methods such as filtering by lane.
"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
base_frame
string base_link frame unit_axis
string The axis which we need to search ground plane max_iterations
int The maximum number of iterations outlier_threshold
double The distance threshold to the model [m] plane_slope_threshold
double The slope threshold to prevent mis-fitting [deg] voxel_size_x
double voxel size x [m] voxel_size_y
double voxel size y [m] voxel_size_z
double voxel size z [m] height_threshold
double The height threshold from ground plane for no ground points [m] debug
bool whether to output debug information"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"https://pcl.readthedocs.io/projects/tutorials/en/latest/planar_segmentation.html
"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/","title":"Ray Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#ray-ground-filter","title":"Ray Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#purpose","title":"Purpose","text":"The purpose of this node is that remove the ground points from the input pointcloud.
"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The points is separated radially (Ray), and the ground is classified for each Ray sequentially from the point close to ego-vehicle based on the geometric information such as the distance and angle between the points.
"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
input_frame
string frame id of input pointcloud output_frame
string frame id of output pointcloud general_max_slope
double The triangle created by general_max_slope
is called the global cone. If the point is outside the global cone, it is judged to be a point that is not on the ground initial_max_slope
double Generally, the point where the object first hits is far from ego-vehicle because of sensor blind spot, so resolution is different from that point and thereafter, so this parameter exists to set a separate local_max_slope
local_max_slope
double The triangle created by local_max_slope
is called the local cone. This parameter is used for classification based on the continuity of points min_height_threshold
double This parameter is used instead of height_threshold
because it's difficult to determine continuity in the local cone when the points are too close to each other. radial_divider_angle
double The angle of ray concentric_divider_distance
double Only check points whose radius is larger than concentric_divider_distance
reclass_distance_threshold
double To check whether the point is too far from the previous one and, if so, classify it again min_x
double The parameter to set vehicle footprint manually max_x
double The parameter to set vehicle footprint manually min_y
double The parameter to set vehicle footprint manually max_y
double The parameter to set vehicle footprint manually"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The input_frame is set as parameter but it must be fixed as base_link for the current algorithm.
"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/","title":"Scan Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#scan-ground-filter","title":"Scan Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#purpose","title":"Purpose","text":"The purpose of this node is that remove the ground points from the input pointcloud.
"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"This algorithm works by following steps,
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
input_frame
string \"base_link\" frame id of input pointcloud output_frame
string \"base_link\" frame id of output pointcloud global_slope_max_angle_deg
double 8.0 The global angle to classify as the ground or object [deg]. A large threshold may reduce false positives for high-slope roads, but it may increase false negatives in non-ground classification, particularly for small objects. local_slope_max_angle_deg
double 10.0 The local angle to classify as the ground or object [deg] when comparing with an adjacent point. A small value enhances the classification accuracy for objects with inclined surfaces. This should be considered together with split_points_distance_tolerance
value. radial_divider_angle_deg
double 1.0 The angle which divides the whole pointcloud into sliced groups [deg] split_points_distance_tolerance
double 0.2 The xy-distance threshold to distinguish far and near [m] split_height_distance
double 0.2 The height threshold to distinguish ground and non-ground pointcloud when comparing with adjacent points [m]. A small threshold improves classification of non-ground point, especially for high elevation resolution pointcloud lidar. However, it might cause false positive for small step-like road surface or misaligned multiple lidar configuration. use_virtual_ground_point
bool true whether to use the ground center of front wheels as the virtual ground point. detection_range_z_max
float 2.5 Maximum height of detection range [m], applied only for elevation_grid_mode center_pcl_shift
float 0.0 The x-axis offset of additional LiDARs from the vehicle center of mass [m], recommended to use only for additional LiDARs in elevation_grid_mode non_ground_height_threshold
float 0.2 Height threshold of non ground objects [m] as split_height_distance
and applied only for elevation_grid_mode grid_mode_switch_radius
float 20.0 The distance where grid division mode change from by distance to by vertical angle [m], applied only for elevation_grid_mode grid_size_m
float 0.5 The first grid size [m], applied only for elevation_grid_mode. A large value enhances the prediction stability of the ground surface, suitable for rough surfaces or multiple-lidar configurations. gnd_grid_buffer_size
uint16 4 Number of grids used to estimate the local ground slope, applied only for elevation_grid_mode low_priority_region_x
float -20.0 The non-zero x threshold in back side from which small objects detection is low priority [m] elevation_grid_mode
bool true Elevation grid scan mode option use_recheck_ground_cluster
bool true Enable recheck ground cluster"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The input_frame is set as a parameter, but it must be fixed to base_link for the current algorithm.
"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"The elevation grid idea is referred from \"Shen Z, Liang H, Lin L, Wang Z, Huang W, Yu J. Fast Ground Segmentation for 3D LiDAR Point Cloud Based on Jump-Convolution-Process. Remote Sensing. 2021; 13(16):3239. https://doi.org/10.3390/rs13163239\"
"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":"heatmap_visualizer is a package for visualizing heatmap of detected 3D objects' positions on the BEV space.
This package is used for qualitative evaluation and trend analysis of a detector; for instance, the heatmap can show that the detector performs well in the near range of the vehicle but poorly in the far range.
"},{"location":"perception/heatmap_visualizer/#how-to-run","title":"How to run","text":"ros2 launch heatmap_visualizer heatmap_visualizer.launch.xml input/objects:=<DETECTED_OBJECTS_TOPIC>\n
"},{"location":"perception/heatmap_visualizer/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"In this implementation, create heatmap of the center position of detected objects for each classes, for instance, CAR, PEDESTRIAN, etc, and publish them as occupancy grid maps.
In the above figure, the pink represents high detection frequency area and blue one is low, or black represents there is no detection.
As inner-workings, add center positions of detected objects to index of each corresponding grid map cell in a buffer. The created heatmap will be published by each specific frame, which can be specified with frame_count
. Note that the buffer to be add the positions is not reset per publishing. When publishing, firstly these values are normalized to [0, 1] using maximum and minimum values in the buffer. Secondly, they are scaled to integer in [0, 100] because nav_msgs::msg::OccupancyGrid
only allow the value in [0, 100].
~/input/objects
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects"},{"location":"perception/heatmap_visualizer/#output","title":"Output","text":"Name Type Description ~/output/objects/<CLASS_NAME>
nav_msgs::msg::OccupancyGrid
visualized heatmap"},{"location":"perception/heatmap_visualizer/#parameters","title":"Parameters","text":""},{"location":"perception/heatmap_visualizer/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description publish_frame_count
int 50
The number of frames to publish heatmap heatmap_frame_id
string base_link
The frame ID of heatmap to be respected heatmap_length
float 200.0
A length of map in meter heatmap_resolution
float 0.8
A resolution of map use_confidence
bool false
A flag if use confidence score as heatmap value class_names
array [\"UNKNOWN\", \"CAR\", \"TRUCK\", \"BUS\", \"TRAILER\", \"BICYCLE\", \"MOTORBIKE\", \"PEDESTRIAN\"]
An array of class names to be published rename_to_car
bool true
A flag if rename car like vehicle to car"},{"location":"perception/heatmap_visualizer/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The heatmap depends on the data to be used, so if the objects in data are sparse the heatmap will be sparse.
"},{"location":"perception/heatmap_visualizer/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/heatmap_visualizer/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/heatmap_visualizer/#referencesexternal-links","title":"References/External links","text":""},{"location":"perception/heatmap_visualizer/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/image_projection_based_fusion/","title":"image_projection_based_fusion","text":""},{"location":"perception/image_projection_based_fusion/#image_projection_based_fusion","title":"image_projection_based_fusion","text":""},{"location":"perception/image_projection_based_fusion/#purpose","title":"Purpose","text":"The image_projection_based_fusion
is a package to fuse detected obstacles (bounding box or segmentation) from image and 3d pointcloud or obstacles (bounding box, cluster or segmentation).
The offset between each camera and the lidar is set according to their shutter timing. After applying the offset to the timestamp, if the interval between the timestamp of pointcloud topic and the roi message is less than the match threshold, the two messages are matched.
current default value at autoware.universe for TIER IV Robotaxi are: - input_offset_ms: [61.67, 111.67, 45.0, 28.33, 78.33, 95.0] - match_threshold_ms: 30.0
"},{"location":"perception/image_projection_based_fusion/#fusion-and-timer","title":"fusion and timer","text":"The subscription status of the message is signed with 'O'.
1.if a pointcloud message is subscribed under the below condition:
pointcloud roi msg 1 roi msg 2 roi msg 3 subscription status O O OIf the roi msgs can be matched, fuse them and postprocess the pointcloud message. Otherwise, fuse the matched roi msgs and cache the pointcloud.
2.if a pointcloud message is subscribed under the below condition:
pointcloud roi msg 1 roi msg 2 roi msg 3 subscription status O Oif the roi msgs can be matched, fuse them and cache the pointcloud.
3.if a pointcloud message is subscribed under the below condition:
pointcloud roi msg 1 roi msg 2 roi msg 3 subscription status O O OIf the roi msg 3 is subscribed before the next pointcloud message coming or timeout, fuse it if matched, otherwise wait for the next roi msg 3. If the roi msg 3 is not subscribed before the next pointcloud message coming or timeout, postprocess the pointcloud message as it is.
The timeout threshold should be set according to the postprocessing time. E.g, if the postprocessing time is around 50ms, the timeout threshold should be set smaller than 50ms, so that the whole processing time could be less than 100ms. current default value at autoware.universe for XX1: - timeout_ms: 50.0
"},{"location":"perception/image_projection_based_fusion/#known-limits","title":"Known Limits","text":"The rclcpp::TimerBase timer could not break a for loop, therefore even if time is out when fusing a roi msg at the middle, the program will run until all msgs are fused.
"},{"location":"perception/image_projection_based_fusion/#detail-description-of-each-fusions-algorithm-is-in-the-following-links","title":"Detail description of each fusion's algorithm is in the following links","text":"Fusion Name Description Detail roi_cluster_fusion Overwrite a classification label of clusters by that of ROIs from a 2D object detector. link roi_detected_object_fusion Overwrite a classification label of detected objects by that of ROIs from a 2D object detector. link pointpainting_fusion Paint the point cloud with the ROIs from a 2D object detector and feed to a 3D object detector. link roi_pointcloud_fusion Matching pointcloud with ROIs from a 2D object detector to detect unknown-labeled objects link"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/","title":"pointpainting_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#pointpainting_fusion","title":"pointpainting_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#purpose","title":"Purpose","text":"The pointpainting_fusion
is a package for utilizing the class information detected by a 2D object detection in 3D object detection.
The lidar points are projected onto the output of an image-only 2d object detection network and the class scores are appended to each point. The painted point cloud can then be fed to the centerpoint network.
"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#input","title":"Input","text":"Name Type Descriptioninput
sensor_msgs::msg::PointCloud2
pointcloud input/camera_info[0-7]
sensor_msgs::msg::CameraInfo
camera information to project 3d points onto image planes input/rois[0-7]
tier4_perception_msgs::msg::DetectedObjectsWithFeature
ROIs from each image input/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#output","title":"Output","text":"Name Type Description output
sensor_msgs::msg::PointCloud2
painted pointcloud ~/output/objects
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects ~/debug/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold
float 0.4
detected objects with score less than threshold are ignored densification_world_frame_id
string map
the world frame id to fuse multi-frame pointcloud densification_num_past_frames
int 0
the number of past frames to fuse with the current frame trt_precision
string fp16
TensorRT inference precision: fp32
or fp16
encoder_onnx_path
string \"\"
path to VoxelFeatureEncoder ONNX file encoder_engine_path
string \"\"
path to VoxelFeatureEncoder TensorRT Engine file head_onnx_path
string \"\"
path to DetectionHead ONNX file head_engine_path
string \"\"
path to DetectionHead TensorRT Engine file build_only
bool false
shutdown the node after TensorRT engine file is built"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":"[1] Vora, Sourabh, et al. \"PointPainting: Sequential fusion for 3d object detection.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[2] CVPR'20 Workshop on Scalability in Autonomous Driving] Waymo Open Dataset Challenge: https://youtu.be/9g9GsI33ol8?t=535 Ding, Zhuangzhuang, et al. \"1st Place Solution for Waymo Open Dataset Challenge--3D Detection and Domain Adaptation.\" arXiv preprint arXiv:2006.15505 (2020).
"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/","title":"roi_cluster_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#roi_cluster_fusion","title":"roi_cluster_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#purpose","title":"Purpose","text":"The roi_cluster_fusion
is a package for filtering clusters that are less likely to be objects and overwriting labels of clusters with that of Region Of Interests (ROIs) by a 2D object detector.
The clusters are projected onto image planes, and then if the ROIs of clusters and ROIs by a detector are overlapped, the labels of clusters are overwritten with that of ROIs by detector. Intersection over Union (IoU) is used to determine if there are overlaps between them.
"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#input","title":"Input","text":"Name Type Descriptioninput
tier4_perception_msgs::msg::DetectedObjectsWithFeature
clustered pointcloud input/camera_info[0-7]
sensor_msgs::msg::CameraInfo
camera information to project 3d points onto image planes input/rois[0-7]
tier4_perception_msgs::msg::DetectedObjectsWithFeature
ROIs from each image input/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#output","title":"Output","text":"Name Type Description output
tier4_perception_msgs::msg::DetectedObjectsWithFeature
labeled cluster pointcloud ~/debug/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#parameters","title":"Parameters","text":"The following figure is an inner pipeline overview of RoI cluster fusion node. Please refer to it for your parameter settings.
"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Descriptionfusion_distance
double If the detected object's distance to frame_id is less than the threshold, the fusion will be processed trust_object_distance
double if the detected object's distance is less than the trust_object_distance
, trust_object_iou_mode
will be used, otherwise non_trust_object_iou_mode
will be used trust_object_iou_mode
string select mode from 3 options {iou
, iou_x
, iou_y
} to calculate IoU in range of [0
, trust_distance
]. iou
: IoU along x-axis and y-axis iou_x
: IoU along x-axis iou_y
: IoU along y-axis non_trust_object_iou_mode
string the IOU mode using in range of [trust_distance
, fusion_distance
] if trust_distance
< fusion_distance
use_cluster_semantic_type
bool if false
, the labels of clusters are overwritten by UNKNOWN
before fusion only_allow_inside_cluster
bool if true
, the only clusters contained inside RoIs by a detector roi_scale_factor
double the scale factor for offset of detector RoIs if only_allow_inside_cluster=true
iou_threshold
double the IoU threshold to overwrite a label of clusters with a label of roi unknown_iou_threshold
double the IoU threshold to fuse cluster with unknown label of roi remove_unknown
bool if true
, remove all UNKNOWN
labeled objects from output rois_number
int the number of input rois debug_mode
bool If true
, subscribe and publish images for visualization."},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/","title":"roi_detected_object_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#roi_detected_object_fusion","title":"roi_detected_object_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#purpose","title":"Purpose","text":"The roi_detected_object_fusion
is a package to overwrite labels of detected objects with that of Region Of Interests (ROIs) by a 2D object detector.
In what follows, we describe the algorithm utilized by roi_detected_object_fusion
(the meaning of each parameter can be found in the Parameters
section):
existence_probability
of a detected object is greater than the threshold, it is accepted without any further processing and published in output
.output
. The Intersection over Union (IoU) is used to determine if there are overlaps between the detections from input
and the ROIs from input/rois
.The DetectedObject has three possible shape choices/implementations, where the polygon's vertices for each case are defined as follows:
BOUNDING_BOX
: The 8 corners of a bounding box.CYLINDER
: The circle is approximated by a hexagon.POLYGON
: Not implemented yet.input
autoware_auto_perception_msgs::msg::DetectedObjects
input detected objects input/camera_info[0-7]
sensor_msgs::msg::CameraInfo
camera information to project 3d points onto image planes. input/rois[0-7]
tier4_perception_msgs::msg::DetectedObjectsWithFeature
ROIs from each image. input/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization."},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#output","title":"Output","text":"Name Type Description output
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects ~/debug/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization, ~/debug/fused_objects
autoware_auto_perception_msgs::msg::DetectedObjects
fused detected objects ~/debug/ignored_objects
autoware_auto_perception_msgs::msg::DetectedObjects
not fused detected objects"},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description rois_number
int the number of input rois debug_mode
bool If set to true
, the node subscribes to the image topic and publishes an image with debug drawings. passthrough_lower_bound_probability_thresholds
vector[double] If the existence_probability
of a detected object is greater than the threshold, it is published in output. trust_distances
vector[double] If the distance of a detected object from the origin of frame_id is greater than the threshold, it is published in output. min_iou_threshold
double If the iou between detected objects and rois is greater than min_iou_threshold
, the objects are classified as fused. use_roi_probability
float If set to true
, the algorithm uses existence_probability
of ROIs to match with the that of detected objects. roi_probability_threshold
double If the existence_probability
of ROIs is greater than the threshold, matched detected objects are published in output
. can_assign_matrix
vector[int] association matrix between rois and detected_objects to check that two rois on images can be match"},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":"POLYGON
, which is a shape of a detected object, isn't supported yet.
The node roi_pointcloud_fusion
clusters the pointcloud based on Regions Of Interest (ROIs) detected by a 2D object detector, specifically for unknown-labeled ROIs.
input
sensor_msgs::msg::PointCloud2
input pointcloud input/camera_info[0-7]
sensor_msgs::msg::CameraInfo
camera information to project 3d points onto image planes input/rois[0-7]
tier4_perception_msgs::msg::DetectedObjectsWithFeature
ROIs from each image input/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization"},{"location":"perception/image_projection_based_fusion/docs/roi-pointcloud-fusion/#output","title":"Output","text":"Name Type Description output
sensor_msgs::msg::PointCloud2
output pointcloud as default of interface output_clusters
tier4_perception_msgs::msg::DetectedObjectsWithFeature
output clusters debug/clusters
sensor_msgs/msg/PointCloud2
colored cluster pointcloud for visualization"},{"location":"perception/image_projection_based_fusion/docs/roi-pointcloud-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-pointcloud-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description min_cluster_size
int the minimum number of points that a cluster needs to contain in order to be considered valid cluster_2d_tolerance
double cluster tolerance measured in radial direction rois_number
int the number of input rois"},{"location":"perception/image_projection_based_fusion/docs/roi-pointcloud-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The node segmentation_pointcloud_fusion
is a package for filtering pointcloud that are belong to less interesting region which is defined by semantic or instance segmentation by 2D image segmentation model.
input
sensor_msgs::msg::PointCloud2
input pointcloud input/camera_info[0-7]
sensor_msgs::msg::CameraInfo
camera information to project 3d points onto image planes input/rois[0-7]
tier4_perception_msgs::msg::Image
semantic segmentation mask image input/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization"},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#output","title":"Output","text":"Name Type Description output
sensor_msgs::msg::PointCloud2
output filtered pointcloud"},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description rois_number
int the number of input rois"},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/lidar_apollo_instance_segmentation/","title":"lidar_apollo_instance_segmentation","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#lidar_apollo_instance_segmentation","title":"lidar_apollo_instance_segmentation","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#purpose","title":"Purpose","text":"This node segments 3D pointcloud data from lidar sensors into obstacles, e.g., cars, trucks, bicycles, and pedestrians based on CNN based model and obstacle clustering method.
"},{"location":"perception/lidar_apollo_instance_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"See the original design by Apollo.
"},{"location":"perception/lidar_apollo_instance_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#input","title":"Input","text":"Name Type Descriptioninput/pointcloud
sensor_msgs/PointCloud2
Pointcloud data from lidar sensors"},{"location":"perception/lidar_apollo_instance_segmentation/#output","title":"Output","text":"Name Type Description output/labeled_clusters
tier4_perception_msgs/DetectedObjectsWithFeature
Detected objects with labeled pointcloud cluster. debug/instance_pointcloud
sensor_msgs/PointCloud2
Segmented pointcloud for visualization."},{"location":"perception/lidar_apollo_instance_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#node-parameters","title":"Node Parameters","text":"None
"},{"location":"perception/lidar_apollo_instance_segmentation/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Descriptionscore_threshold
double 0.8 If the score of a detected object is lower than this value, the object is ignored. range
int 60 Half of the length of feature map sides. [m] width
int 640 The grid width of feature map. height
int 640 The grid height of feature map. engine_file
string \"vls-128.engine\" The name of TensorRT engine file for CNN model. prototxt_file
string \"vls-128.prototxt\" The name of prototxt file for CNN model. caffemodel_file
string \"vls-128.caffemodel\" The name of caffemodel file for CNN model. use_intensity_feature
bool true The flag to use intensity feature of pointcloud. use_constant_feature
bool false The flag to use direction and distance feature of pointcloud. target_frame
string \"base_link\" Pointcloud data is transformed into this frame. z_offset
int 2 z offset from target frame. [m]"},{"location":"perception/lidar_apollo_instance_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"There is no training code for CNN model.
"},{"location":"perception/lidar_apollo_instance_segmentation/#note","title":"Note","text":"This package makes use of three external codes. The trained files are provided by apollo. The trained files are automatically downloaded when you build.
Original URL
Supported lidars are velodyne 16, 64 and 128, but you can also use velodyne 32 and other lidars with good accuracy.
apollo 3D Obstacle Perception description
/******************************************************************************\n* Copyright 2017 The Apollo Authors. All Rights Reserved.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n* http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*****************************************************************************/\n
tensorRTWrapper : It is used under the lib directory.
MIT License\n\nCopyright (c) 2018 lewes6369\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n
autoware_perception description
/*\n* Copyright 2018-2019 Autoware Foundation. All rights reserved.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n* http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n
This package will not run without a neural network for its inference. The network is provided by ansible script during the installation of Autoware or can be downloaded manually according to Manual Downloading. This package uses 'get_neural_network' function from tvm_utility package to create and provide proper dependency. See its design page for more information on how to handle user-compiled networks.
"},{"location":"perception/lidar_apollo_segmentation_tvm/#backend","title":"Backend","text":"The backend used for the inference can be selected by setting the lidar_apollo_segmentation_tvm_BACKEND
cmake variable. The current available options are llvm
for a CPU backend, and vulkan
for a GPU backend. It defaults to llvm
.
See the original design by Apollo. The paragraph of interest goes up to, but excluding, the \"MinBox Builder\" paragraph. This package instead relies on further processing by a dedicated shape estimator.
Note: the parameters described in the original design have been modified and are out of date.
"},{"location":"perception/lidar_apollo_segmentation_tvm/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"The package exports a boolean lidar_apollo_segmentation_tvm_BUILT
cmake variable.
Lidar segmentation is based off a core algorithm by Apollo, with modifications from [TIER IV] (https://github.com/tier4/lidar_instance_segmentation_tvm) for the TVM backend.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/","title":"Index","text":""},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#lidar_apollo_segmentation_tvm_nodes","title":"lidar_apollo_segmentation_tvm_nodes","text":""},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#purpose-use-cases","title":"Purpose / Use cases","text":"An alternative to Euclidean clustering. This node detects and labels foreground obstacles (e.g. cars, motorcycles, pedestrians) from a point cloud, using a neural network.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#design","title":"Design","text":"See the design of the algorithm in the core (lidar_apollo_segmentation_tvm) package's design documents.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#usage","title":"Usage","text":"lidar_apollo_segmentation_tvm
and lidar_apollo_segmentation_tvm_nodes
will not work without a neural network. See the lidar_apollo_segmentation_tvm usage for more information.
The original node from Apollo has a Region Of Interest (ROI) filter. This has the benefit of working with a filtered point cloud that includes only the points inside the ROI (i.e., the drivable road and junction areas) with most of the background obstacles removed (such as buildings and trees around the road region). Not having this filter may negatively impact performance.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#inputs","title":"Inputs","text":"The input are non-ground points as a PointCloud2 message from the sensor_msgs package.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#outputs","title":"Outputs","text":"The output is a DetectedObjectsWithFeature.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#parameters","title":"Parameters","text":"Name Type Description Default Range range integer The range of the 2D grid with respect to the origin. 90 >0 score_threshold float The detection confidence score threshold for filtering out the candidate clusters in the post-processing step. 0.1 \u22650.0\u22641.0 use_intensity_feature boolean Enable input channel intensity feature. false N/A use_constant_feature boolean Enable input channel constant feature. false N/A z_offset float Vertical translation of the pointcloud before inference. 0.0 N/A min_height float The minimum height with respect to the origin -5.0 N/A max_height float The maximum height with respect to the origin. 5.0 N/A objectness_thresh float The threshold of objectness for filtering out non-object cells in the obstacle clustering step. 0.5 \u22650.0\u22641.0 min_pts_num integer In the post-processing step, the candidate clusters with less than min_pts_num points are removed. 3 \u22650 height_thresh float If it is non-negative, the points that are higher than the predicted object height by height_thresh are filtered out in the post-processing step. 0.5 N/A data_path string Packages data and artifacts directory path. $(env HOME)/autoware_data N/A"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#error-detection-and-handling","title":"Error detection and handling","text":"Abort and warn when the input frame can't be converted to base_link
.
Both the input and output are controlled by the same actor, so the following security concerns are out-of-scope:
Leaking data to another actor would require a flaw in TVM or the host operating system that allows arbitrary memory to be read, a significant security flaw in itself. This is also true for an external actor operating the pipeline early: only the object that initiated the pipeline can run the methods to receive its output.
A Denial-of-Service attack could make the target hardware unusable for other pipelines but would require being able to run code on the CPU, which would already allow a more severe Denial-of-Service attack.
No elevation of privilege is required for this package.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#related-issues","title":"Related issues","text":""},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#226-autowareauto-neural-networks-inference-architecture-design","title":"226: Autoware.Auto Neural Networks Inference Architecture Design","text":""},{"location":"perception/lidar_centerpoint/","title":"lidar_centerpoint","text":""},{"location":"perception/lidar_centerpoint/#lidar_centerpoint","title":"lidar_centerpoint","text":""},{"location":"perception/lidar_centerpoint/#purpose","title":"Purpose","text":"lidar_centerpoint is a package for detecting dynamic 3D objects.
"},{"location":"perception/lidar_centerpoint/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"In this implementation, CenterPoint [1] uses a PointPillars-based [2] network to inference with TensorRT.
We trained the models using https://github.com/open-mmlab/mmdetection3d.
"},{"location":"perception/lidar_centerpoint/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/lidar_centerpoint/#input","title":"Input","text":"Name Type Description~/input/pointcloud
sensor_msgs::msg::PointCloud2
input pointcloud"},{"location":"perception/lidar_centerpoint/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects debug/cyclic_time_ms
tier4_debug_msgs::msg::Float64Stamped
cyclic time (msg) debug/processing_time_ms
tier4_debug_msgs::msg::Float64Stamped
processing time (ms)"},{"location":"perception/lidar_centerpoint/#parameters","title":"Parameters","text":""},{"location":"perception/lidar_centerpoint/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold
float 0.4
detected objects with score less than threshold are ignored densification_world_frame_id
string map
the world frame id to fuse multi-frame pointcloud densification_num_past_frames
int 1
the number of past frames to fuse with the current frame trt_precision
string fp16
TensorRT inference precision: fp32
or fp16
encoder_onnx_path
string \"\"
path to VoxelFeatureEncoder ONNX file encoder_engine_path
string \"\"
path to VoxelFeatureEncoder TensorRT Engine file head_onnx_path
string \"\"
path to DetectionHead ONNX file head_engine_path
string \"\"
path to DetectionHead TensorRT Engine file nms_iou_target_class_names
list[string] - target classes for IoU-based Non Maximum Suppression nms_iou_search_distance_2d
double - If two objects are farther than the value, NMS isn't applied. nms_iou_threshold
double - IoU threshold for the IoU-based Non Maximum Suppression build_only
bool false
shutdown the node after TensorRT engine file is built"},{"location":"perception/lidar_centerpoint/#assumptions-known-limits","title":"Assumptions / Known limits","text":"object.existence_probability
is stored the value of classification confidence of a DNN, not probability.You can download the onnx format of trained models by clicking on the links below.
Centerpoint
was trained in nuScenes
(~28k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs. Centerpoint tiny
was trained in Argoverse 2
(~110k lidar frames) [9] and TIER IV's internal database (~11k lidar frames) for 20 epochs.
In addition to its use as a standard ROS node, lidar_centerpoint
can also be used to perform inferences in an isolated manner. To do so, execute the following launcher, where pcd_path
is the path of the pointcloud to be used for inference.
ros2 launch lidar_centerpoint single_inference_lidar_centerpoint.launch.xml pcd_path:=test_pointcloud.pcd detections_path:=test_detections.ply\n
lidar_centerpoint
generates a ply
file in the provided detections_path
, which contains the detections as triangle meshes. These detections can be visualized by most 3D tools, but we also integrate a visualization UI using Open3D
which is launched alongside lidar_centerpoint
.
centerpoint
pts_voxel_encoder pts_backbone_neck_head There is a single change due to the limitation in the implementation of this package. num_filters=[32, 32]
of PillarFeatureNet
centerpoint_tiny
pts_voxel_encoder pts_backbone_neck_head The same model as default
of v0
. These changes are compared with this configuration.
"},{"location":"perception/lidar_centerpoint/#v0-20211203","title":"v0 (2021/12/03)","text":"Name URLs Descriptiondefault
pts_voxel_encoder pts_backbone_neck_head There are two changes from the original CenterPoint architecture. num_filters=[32]
of PillarFeatureNet
and ds_layer_strides=[2, 2, 2]
of RPN
"},{"location":"perception/lidar_centerpoint/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/lidar_centerpoint/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/lidar_centerpoint/#referencesexternal-links","title":"References/External links","text":"[1] Yin, Tianwei, Xingyi Zhou, and Philipp Kr\u00e4henb\u00fchl. \"Center-based 3d object detection and tracking.\" arXiv preprint arXiv:2006.11275 (2020).
[2] Lang, Alex H., et al. \"PointPillars: Fast encoders for object detection from point clouds.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[3] https://github.com/tianweiy/CenterPoint
[4] https://github.com/open-mmlab/mmdetection3d
[5] https://github.com/open-mmlab/OpenPCDet
[6] https://github.com/yukkysaito/autoware_perception
[7] https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars
[8] https://www.nuscenes.org/nuscenes
[9] https://www.argoverse.org/av2.html
"},{"location":"perception/lidar_centerpoint/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/","title":"Run lidar_centerpoint and lidar_centerpoint-tiny simultaneously","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#run-lidar_centerpoint-and-lidar_centerpoint-tiny-simultaneously","title":"Run lidar_centerpoint and lidar_centerpoint-tiny simultaneously","text":"This tutorial is for showing centerpoint
and centerpoint_tiny
models\u2019 results simultaneously, making it easier to visualize and compare the performance.
Follow the steps in the Source Installation (link) in Autoware doc.
If you fail to build autoware environment according to lack of memory, then it is recommended to build autoware sequentially.
Source the ROS 2 Galactic setup script.
source /opt/ros/galactic/setup.bash\n
Build the entire autoware repository.
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --parallel-workers=1\n
Or you can use a constrained number of CPU to build only one package.
export MAKEFLAGS=\"-j 4\" && MAKE_JOBS=4 colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --parallel-workers 1 --packages-select PACKAGE_NAME\n
Source the package.
source install/setup.bash\n
"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#data-preparation","title":"Data Preparation","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#using-rosbag-dataset","title":"Using rosbag dataset","text":"ros2 bag play /YOUR/ROSBAG/PATH/ --clock 100\n
Don't forget to add clock
in order to sync between two rviz display.
You can also use the sample rosbag provided by autoware here.
If you want to merge several rosbags into one, you can refer to this tool.
"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#using-realtime-lidar-dataset","title":"Using realtime LiDAR dataset","text":"Set up your Ethernet connection according to 1.1 - 1.3 in this website.
Download Velodyne ROS driver
git clone -b ros2 https://github.com/ros-drivers/velodyne.git\n
Source the ROS 2 Galactic setup script.
source /opt/ros/galactic/setup.bash\n
Compile Velodyne driver
cd velodyne\nrosdep install -y --from-paths . --ignore-src --rosdistro $ROS_DISTRO\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
Edit the configuration file. Specify the LiDAR device IP address in ./velodyne_driver/config/VLP32C-velodyne_driver_node-params.yaml
velodyne_driver_node:\nros__parameters:\ndevice_ip: 192.168.1.201 //change to your LiDAR device IP address\ngps_time: false\ntime_offset: 0.0\nenabled: true\nread_once: false\nread_fast: false\nrepeat_delay: 0.0\nframe_id: velodyne\nmodel: 32C\nrpm: 600.0\nport: 2368\n
Launch the velodyne driver.
# Terminal 1\nros2 launch velodyne_driver velodyne_driver_node-VLP32C-launch.py\n
Launch the velodyne_pointcloud.
# Terminal 2\nros2 launch velodyne_pointcloud velodyne_convert_node-VLP32C-launch.py\n
Point Cloud data will be available on topic /velodyne_points
. You can check with ros2 topic echo /velodyne_points
.
Check this website if there is any unexpected issue.
"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#launch-file-setting","title":"Launch file setting","text":"Several fields to check in centerpoint_vs_centerpoint-tiny.launch.xml
before running lidar centerpoint.
input/pointcloud
: set to the topic with input data you want to subscribe.model_path
: set to the path of the model.model_param_path
: set to the path of model's config file.Run
ros2 launch lidar_centerpoint centerpoint_vs_centerpoint-tiny.launch.xml\n
Then you will see two rviz window show immediately. On the left is the result for lidar centerpoint tiny, and on the right is the result for lidar centerpoint.
"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#troubleshooting","title":"Troubleshooting","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#bounding-box-blink-on-rviz","title":"Bounding Box blink on rviz","text":"To avoid Bounding Boxes blinking on rviz, you can extend bbox marker lifetime.
Set marker_ptr->lifetime
and marker.lifetime
to a longer lifetime.
marker_ptr->lifetime
is set in PATH/autoware/src/universe/autoware.universe/common/autoware_auto_perception_rviz_plugin/src/object_detection/object_polygon_detail.cpp
marker.lifetime
is set in PATH/autoware/src/universe/autoware.universe/common/tier4_autoware_utils/include/tier4_autoware_utils/ros/marker_helper.hpp
Make sure to rebuild packages after any change.
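For reference, a minimal sketch of what such a change looks like (the helper function and the duration value are illustrative, not the exact code in those files):

#include <rclcpp/duration.hpp>
#include <visualization_msgs/msg/marker.hpp>

// Illustrative sketch: give a marker a longer lifetime so the bounding box
// persists between detection updates instead of blinking.
void extend_marker_lifetime(visualization_msgs::msg::Marker & marker, double seconds)
{
  marker.lifetime = rclcpp::Duration::from_seconds(seconds);  // e.g. 0.5 s
}

A lifetime slightly longer than the detection period is usually enough; an overly long lifetime makes stale boxes linger.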
"},{"location":"perception/lidar_centerpoint_tvm/","title":"lidar_centerpoint_tvm","text":""},{"location":"perception/lidar_centerpoint_tvm/#lidar_centerpoint_tvm","title":"lidar_centerpoint_tvm","text":""},{"location":"perception/lidar_centerpoint_tvm/#design","title":"Design","text":""},{"location":"perception/lidar_centerpoint_tvm/#usage","title":"Usage","text":"lidar_centerpoint_tvm is a package for detecting dynamic 3D objects using TVM compiled centerpoint module for different backends. To use this package, replace lidar_centerpoint
with lidar_centerpoint_tvm
in perception launch files (for example, lidar_based_detection.launch.xml
if lidar-based detection is chosen).
This package will not build without a neural network for its inference. The network is provided by the tvm_utility
package. See its design page for more information on how to enable downloading pre-compiled networks (by setting the DOWNLOAD_ARTIFACTS
cmake variable), or how to handle user-compiled networks.
The backend used for the inference can be selected by setting the lidar_centerpoint_tvm_BACKEND
cmake variable. The current available options are llvm
for a CPU backend, and vulkan
or opencl
for a GPU backend. It defaults to llvm
.
~/input/pointcloud
sensor_msgs::msg::PointCloud2
input pointcloud"},{"location":"perception/lidar_centerpoint_tvm/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects debug/cyclic_time_ms
tier4_debug_msgs::msg::Float64Stamped
cyclic time (ms) debug/processing_time_ms
tier4_debug_msgs::msg::Float64Stamped
processing time (ms)"},{"location":"perception/lidar_centerpoint_tvm/#parameters","title":"Parameters","text":""},{"location":"perception/lidar_centerpoint_tvm/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold
float 0.1
detected objects with score less than threshold are ignored densification_world_frame_id
string map
the world frame id to fuse multi-frame pointcloud densification_num_past_frames
int 1
the number of past frames to fuse with the current frame"},{"location":"perception/lidar_centerpoint_tvm/#bounding-box","title":"Bounding Box","text":"The lidar segmentation node establishes a bounding box for the detected obstacles. The L-fit
method of fitting a bounding box to a cluster is used for that.
Due to an accuracy issue of centerpoint
model, vulkan
cannot be used at the moment. As for 'llvm' backend, real-time performance cannot be achieved.
The scatter function can be implemented using either TVMScript or C++. For the C++ implementation, please refer to https://github.com/angry-crab/autoware.universe/blob/c020419fe52e359287eccb1b77e93bdc1a681e24/perception/lidar_centerpoint_tvm/lib/network/scatter.cpp#L65
"},{"location":"perception/lidar_centerpoint_tvm/#reference","title":"Reference","text":"[1] Yin, Tianwei, Xingyi Zhou, and Philipp Kr\u00e4henb\u00fchl. \"Center-based 3d object detection and tracking.\" arXiv preprint arXiv:2006.11275 (2020).
[2] Lang, Alex H., et al. \"PointPillars: Fast encoders for object detection from point clouds.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[3] https://github.com/tianweiy/CenterPoint
[4] https://github.com/Abraham423/CenterPoint
[5] https://github.com/open-mmlab/OpenPCDet
"},{"location":"perception/lidar_centerpoint_tvm/#related-issues","title":"Related issues","text":""},{"location":"perception/lidar_centerpoint_tvm/#908-run-lidar-centerpoint-with-tvm","title":"908: Run Lidar Centerpoint with TVM","text":""},{"location":"perception/map_based_prediction/","title":"map_based_prediction","text":""},{"location":"perception/map_based_prediction/#map_based_prediction","title":"map_based_prediction","text":""},{"location":"perception/map_based_prediction/#role","title":"Role","text":"map_based_prediction
is a module to predict the future paths (and their probabilities) of other vehicles and pedestrians according to the shape of the map and the surrounding environment.
Time-series data of objects is stored to determine each vehicle's route and to detect lane changes over several durations. The object data contains the object's position, speed, and time information.
"},{"location":"perception/map_based_prediction/#get-current-lanelet-and-update-object-history","title":"Get current lanelet and update Object history","text":"Search one or more lanelets satisfying the following conditions for each target object and store them in the ObjectData.
diff_yaw < threshold or diff_yaw > pi - threshold
.Lane Follow
, Left Lane Change
, and Right Lane Change
based on the object history and the reference path obtained in the first step.The conditions for left lane change detection are:
dist_threshold_to_bound_
.time_threshold_to_bound_
.The lane change logic is illustrated in the figure below. An example of how to tune the parameters is described later.
Currently we provide three parameters to tune lane change detection:
dist_threshold_to_bound_
: maximum distance from lane boundary allowed for lane changing vehicletime_threshold_to_bound_
: maximum time allowed for lane change vehicle to reach the boundarycutoff_freq_of_velocity_lpf_
: cutoff frequency of low pass filter for lateral velocityYou can change these parameters in rosparam in the table below.
param name default valuedist_threshold_for_lane_change_detection
1.0
[m] time_threshold_for_lane_change_detection
5.0
[s] cutoff_freq_of_velocity_for_lane_change_detection
0.1
[Hz]"},{"location":"perception/map_based_prediction/#tuning-threshold-parameters","title":"Tuning threshold parameters","text":"Increasing these two parameters will slow down and stabilize the lane change estimation.
Normally, we recommend tuning only time_threshold_for_lane_change_detection
because it is the more important factor for lane change decision.
Lateral velocity calculation is also a very important factor for lane change decision because it is used in the time domain decision.
The predicted time to reach the lane boundary is calculated by
\\[ t_{predicted} = \\dfrac{d_{lat}}{v_{lat}} \\]where \\(d_{lat}\\) and \\(v_{lat}\\) represent the lateral distance to the lane boundary and the lateral velocity, respectively.
Lowering the cutoff frequency of the low-pass filter for lateral velocity will make the lane change decision more stable but slower. Our setting is very conservative, so you may increase this parameter if you want to make the lane change decision faster.
For the additional information, here we show how we calculate lateral velocity.
lateral velocity calculation method equation description [applied] time derivative of lateral distance \\(\\dfrac{\\Delta d_{lat}}{\\Delta t}\\) Currently, we use this method to deal with winding roads. Since this time differentiation easily becomes noisy, we also use a low-pass filter to get smoothed velocity. [not applied] Object Velocity Projection to Lateral Direction \\(v_{obj} \\sin(\\theta)\\) Normally, object velocities are less noisy than the time derivative of lateral distance. But the yaw difference \\(\\theta\\) between the lane and object directions sometimes becomes discontinuous, so we did not adopt this method.Currently, we use the upper method with a low-pass filter to calculate lateral velocity.
"},{"location":"perception/map_based_prediction/#path-generation","title":"Path generation","text":"Path generation is generated on the frenet frame. The path is generated by the following steps:
See paper [2] for more details.
"},{"location":"perception/map_based_prediction/#tuning-lateral-path-shape","title":"Tuning lateral path shape","text":"lateral_control_time_horizon
parameter supports the tuning of the lateral path shape. This parameter is used to calculate the time to reach the reference path. The smaller the value, the more the path will be generated to reach the reference path quickly. (Mostly the center of the lane.)
It is possible to apply a maximum lateral acceleration constraint to generated vehicle paths. This check verifies if it is possible for the vehicle to perform the predicted path without surpassing a lateral acceleration threshold max_lateral_accel
when taking a curve. If it is not possible, it checks if the vehicle can slow down on time to take the curve with a deceleration of min_acceleration_before_curve
and comply with the constraint. If that is also not possible, the path is eliminated.
Currently we provide three parameters to tune the lateral acceleration constraint:
check_lateral_acceleration_constraints_
: to enable the constraint check.max_lateral_accel_
: max acceptable lateral acceleration for predicted paths (absolute value).min_acceleration_before_curve_
: the minimum acceleration the vehicle would theoretically use to slow down before a curve is taken (must be negative).You can change these parameters in rosparam in the table below.
param name default valuecheck_lateral_acceleration_constraints
false
[bool] max_lateral_accel
2.0
[m/s^2] min_acceleration_before_curve
-2.0
[m/s^2]"},{"location":"perception/map_based_prediction/#using-vehicle-acceleration-for-path-prediction-for-vehicle-obstacles","title":"Using Vehicle Acceleration for Path Prediction (for Vehicle Obstacles)","text":"By default, the map_based_prediction
module uses the current obstacle's velocity to compute its predicted path length. However, it is possible to use the obstacle's current acceleration to calculate its predicted path's length.
Since this module tries to predict the vehicle's path several seconds into the future, it is not practical to consider the current vehicle's acceleration as constant (it is not assumed the vehicle will be accelerating for prediction_time_horizon
seconds after detection). Instead, a decaying acceleration model is used. With the decaying acceleration model, a vehicle's acceleration is modeled as:
$\\ a(t) = a_{t0} \\cdot e^{-\\lambda \\cdot t} $
where $\\ a_{t0} $ is the vehicle acceleration at the time of detection $\\ t0 $, and $\\ \\lambda $ is the decay constant $\\ \\lambda = \\ln(2) / hl $ and $\\ hl $ is the exponential's half life.
Furthermore, the integration of $\\ a(t) $ over time gives us equations for velocity, $\\ v(t) $ and distance $\\ x(t) $ as:
$\\ v(t) = v{t0} + a * (1/\\lambda) \\cdot (1 - e^{-\\lambda \\cdot t}) $
and
$\\ x(t) = x{t0} + (v + a{t0} * (1/\\lambda)) \\cdot t + a(1/\u03bb^2)(e^{-\\lambda \\cdot t} - 1) $
With this model, the influence of the vehicle's detected instantaneous acceleration on the predicted path's length is diminished but still considered. This feature also considers that the obstacle might not accelerate past its road's speed limit (multiplied by a tunable factor).
Currently, we provide three parameters to tune the use of obstacle acceleration for path prediction:
use_vehicle_acceleration
: to enable the feature.acceleration_exponential_half_life
: The decaying acceleration model considers that the current vehicle acceleration will be halved after this many seconds.speed_limit_multiplier
: Set the vehicle type obstacle's maximum predicted speed as the legal speed limit in that lanelet times this value. This value should be at least equal or greater than 1.0.You can change these parameters in rosparam
in the table below.
use_vehicle_acceleration
false
[bool] acceleration_exponential_half_life
2.5
[s] speed_limit_multiplier
1.5
[]"},{"location":"perception/map_based_prediction/#path-prediction-for-crosswalk-users","title":"Path prediction for crosswalk users","text":"This module treats Pedestrians and Bicycles as objects using the crosswalk, and outputs prediction path based on map and estimated object's velocity, assuming the object has intention to cross the crosswalk, if the objects satisfies at least one of the following conditions:
If there are a reachable crosswalk entry points within the prediction_time_horizon
and the objects satisfies above condition, this module outputs additional predicted path to cross the opposite side via the crosswalk entry point.
If the target object is inside the road or crosswalk, this module outputs one or two additional prediction path(s) to reach exit point of the crosswalk. The number of prediction paths are depend on whether object is moving or not. If the object is moving, this module outputs one prediction path toward an exit point that existed in the direction of object's movement. One the other hand, if the object has stopped, it is impossible to infer which exit points the object want to go, so this module outputs two prediction paths toward both side exit point.
"},{"location":"perception/map_based_prediction/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/map_based_prediction/#input","title":"Input","text":"Name Type Description~/perception/object_recognition/tracking/objects
autoware_auto_perception_msgs::msg::TrackedObjects
tracking objects without predicted path. ~/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
binary data of Lanelet2 Map."},{"location":"perception/map_based_prediction/#output","title":"Output","text":"Name Type Description ~/input/objects
autoware_auto_perception_msgs::msg::TrackedObjects
tracking objects. Default is set to /perception/object_recognition/tracking/objects
~/output/objects
autoware_auto_perception_msgs::msg::PredictedObjects
tracking objects with predicted path. ~/objects_path_markers
visualization_msgs::msg::MarkerArray
marker for visualization."},{"location":"perception/map_based_prediction/#parameters","title":"Parameters","text":"Parameter Unit Type Description enable_delay_compensation
[-] bool flag to enable the time delay compensation for the position of the object prediction_time_horizon
[s] double predict time duration for predicted path lateral_control_time_horizon
[s] double time duration for predicted path will reach the reference path (mostly center of the lane) prediction_sampling_delta_time
[s] double sampling time for points in predicted path min_velocity_for_map_based_prediction
[m/s] double apply map-based prediction to the objects with higher velocity than this value min_crosswalk_user_velocity
[m/s] double minimum velocity used when crosswalk user's velocity is calculated max_crosswalk_user_delta_yaw_threshold_for_lanelet
[rad] double maximum yaw difference between crosswalk user and lanelet to use in path prediction for crosswalk users dist_threshold_for_searching_lanelet
[m] double The threshold of the angle used when searching for the lane to which the object belongs delta_yaw_threshold_for_searching_lanelet
[rad] double The threshold of the angle used when searching for the lane to which the object belongs sigma_lateral_offset
[m] double Standard deviation for lateral position of objects sigma_yaw_angle_deg
[deg] double Standard deviation yaw angle of objects object_buffer_time_length
[s] double Time span of object history to store the information history_time_length
[s] double Time span of object information used for prediction prediction_time_horizon_rate_for_validate_shoulder_lane_length
[-] double prediction path will disabled when the estimated path length exceeds lanelet length. This parameter control the estimated path length"},{"location":"perception/map_based_prediction/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The results of the detection are processed by a time series. The main purpose is to give ID and estimate velocity.
"},{"location":"perception/multi_object_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"This multi object tracker consists of data association and EKF.
"},{"location":"perception/multi_object_tracker/#data-association","title":"Data association","text":"The data association performs maximum score matching, called min cost max flow problem. In this package, mussp[1] is used as solver. In addition, when associating observations to tracers, data association have gates such as the area of the object from the BEV, Mahalanobis distance, and maximum distance, depending on the class label.
"},{"location":"perception/multi_object_tracker/#ekf-tracker","title":"EKF Tracker","text":"Models for pedestrians, bicycles (motorcycles), cars and unknown are available. The pedestrian or bicycle tracker is running at the same time as the respective EKF model in order to enable the transition between pedestrian and bicycle tracking. For big vehicles such as trucks and buses, we have separate models for passenger cars and large vehicles because they are difficult to distinguish from passenger cars and are not stable. Therefore, separate models are prepared for passenger cars and big vehicles, and these models are run at the same time as the respective EKF models to ensure stability.
"},{"location":"perception/multi_object_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/multi_object_tracker/#input","title":"Input","text":"Name Type Description~/input
autoware_auto_perception_msgs::msg::DetectedObjects
obstacles"},{"location":"perception/multi_object_tracker/#output","title":"Output","text":"Name Type Description ~/output
autoware_auto_perception_msgs::msg::TrackedObjects
modified obstacles"},{"location":"perception/multi_object_tracker/#parameters","title":"Parameters","text":""},{"location":"perception/multi_object_tracker/#core-parameters","title":"Core Parameters","text":"Node parameters are defined in multi_object_tracker.param.yaml and association parameters are defined in data_association.param.yaml.
"},{"location":"perception/multi_object_tracker/#node-parameters","title":"Node parameters","text":"Name Type Description***_tracker
string EKF tracker name for each class world_frame_id
string object kinematics definition frame enable_delay_compensation
bool If true, the tracker uses timers to schedule publishers and uses the prediction step to extrapolate the object state at the desired timestamp publish_rate
double Timer frequency to output with delay compensation"},{"location":"perception/multi_object_tracker/#association-parameters","title":"Association parameters","text":"Name Type Description can_assign_matrix
double Assignment table for data association max_dist_matrix
double Maximum distance table for data association max_area_matrix
double Maximum area table for data association min_area_matrix
double Minimum area table for data association max_rad_matrix
double Maximum angle table for data association"},{"location":"perception/multi_object_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"See the model explanations.
"},{"location":"perception/multi_object_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/multi_object_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/multi_object_tracker/#evaluation-of-mussp","title":"Evaluation of muSSP","text":"According to our evaluation, muSSP is faster than normal SSP when the matrix size is more than 100.
Execution time for varying matrix size at 95% sparsity. In real data, the sparsity was often around 95%.
Execution time for varying the sparsity with matrix size 100.
"},{"location":"perception/multi_object_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"This package makes use of external code.
Name License Original Repository muSSP Apache-2.0 https://github.com/yu-lab-vt/muSSP[1] C. Wang, Y. Wang, Y. Wang, C.-t. Wu, and G. Yu, \u201cmuSSP: Efficient Min-cost Flow Algorithm for Multi-object Tracking,\u201d NeurIPS, 2019
"},{"location":"perception/multi_object_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/multi_object_tracker/models/","title":"Models used in this module","text":""},{"location":"perception/multi_object_tracker/models/#models-used-in-this-module","title":"Models used in this module","text":""},{"location":"perception/multi_object_tracker/models/#tracking-model","title":"Tracking model","text":""},{"location":"perception/multi_object_tracker/models/#ctrv-model-1","title":"CTRV model [1]","text":"CTRV model is a model that assumes constant turn rate and velocity magnitude.
Kinematic bicycle model uses slip angle \\(\\beta\\) and velocity \\(v\\) to calculate yaw update. The merit of using this model is that it can prevent unintended yaw rotation when the vehicle is stopped.
Remarks that the velocity \\(v_{k}\\) is the norm of velocity of vehicle, not the longitudinal velocity. So the output twist in the object coordinate \\((x,y)\\) is calculated as follows.
\\[ \\begin{aligned} v_{x} &= v_{k} \\cos \\left(\\beta_{k}\\right) \\\\ v_{y} &= v_{k} \\sin \\left(\\beta_{k}\\right) \\end{aligned} \\]"},{"location":"perception/multi_object_tracker/models/#anchor-point-based-estimation","title":"Anchor point based estimation","text":"To separate the estimation of the position and the shape, we use anchor point based position estimation.
"},{"location":"perception/multi_object_tracker/models/#anchor-point-and-tracking-relationships","title":"Anchor point and tracking relationships","text":"Anchor point is set when the tracking is initialized. Its position is equal to the center of the bounding box of the first tracking bounding box.
Here show how anchor point is used in tracking.
Raw detection is converted to anchor point coordinate, and tracking
"},{"location":"perception/multi_object_tracker/models/#manage-anchor-point-offset","title":"Manage anchor point offset","text":"Anchor point should be kept in the same position of the object. In other words, the offset value must be adjusted so that the input BBOX and the output BBOX's closest plane to the ego vehicle are at the same position.
"},{"location":"perception/multi_object_tracker/models/#known-limits-drawbacks","title":"Known limits, drawbacks","text":"[1] Schubert, Robin & Richter, Eric & Wanielik, Gerd. (2008). Comparison and evaluation of advanced motion models for vehicle tracking. 1 - 6. 10.1109/ICIF.2008.4632283.
[2] Kong, Jason & Pfeiffer, Mark & Schildbach, Georg & Borrelli, Francesco. (2015). Kinematic and dynamic vehicle models for autonomous driving control design. 1094-1099. 10.1109/IVS.2015.7225830.
"},{"location":"perception/object_merger/","title":"object_merger","text":""},{"location":"perception/object_merger/#object_merger","title":"object_merger","text":""},{"location":"perception/object_merger/#purpose","title":"Purpose","text":"object_merger is a package for merging detected objects from two methods by data association.
"},{"location":"perception/object_merger/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The successive shortest path algorithm is used to solve the data association problem (the minimum-cost flow problem). The cost is calculated by the distance between two objects and gate functions are applied to reset cost, s.t. the maximum distance, the maximum area and the minimum area.
"},{"location":"perception/object_merger/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/object_merger/#input","title":"Input","text":"Name Type Descriptioninput/object0
autoware_auto_perception_msgs::msg::DetectedObjects
detection objects input/object1
autoware_auto_perception_msgs::msg::DetectedObjects
detection objects"},{"location":"perception/object_merger/#output","title":"Output","text":"Name Type Description output/object
autoware_auto_perception_msgs::msg::DetectedObjects
modified Objects"},{"location":"perception/object_merger/#parameters","title":"Parameters","text":"Name Type Description can_assign_matrix
double Assignment table for data association max_dist_matrix
double Maximum distance table for data association max_area_matrix
double Maximum area table for data association min_area_matrix
double Minimum area table for data association max_rad_matrix
double Maximum angle table for data association base_link_frame_id
double association frame distance_threshold_list
std::vector<double>
Distance threshold for each class used in judging overlap. The class order depends on ObjectClassification. generalized_iou_threshold
std::vector<double>
Generalized IoU threshold for each class"},{"location":"perception/object_merger/#tips","title":"Tips","text":"distance_threshold_list
precision_threshold_to_judge_overlapped
generalized_iou_threshold
Data association algorithm was the same as that of multi_object_tracker, but the algorithm of multi_object_tracker was already updated.
"},{"location":"perception/object_range_splitter/","title":"object_range_splitter","text":""},{"location":"perception/object_range_splitter/#object_range_splitter","title":"object_range_splitter","text":""},{"location":"perception/object_range_splitter/#purpose","title":"Purpose","text":"object_range_splitter is a package to divide detected objects into two messages by the distance from the origin.
"},{"location":"perception/object_range_splitter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/object_range_splitter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/object_range_splitter/#input","title":"Input","text":"Name Type Descriptioninput/object
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects"},{"location":"perception/object_range_splitter/#output","title":"Output","text":"Name Type Description output/long_range_object
autoware_auto_perception_msgs::msg::DetectedObjects
long range detected objects output/short_range_object
autoware_auto_perception_msgs::msg::DetectedObjects
short range detected objects"},{"location":"perception/object_range_splitter/#parameters","title":"Parameters","text":"Name Type Description split_range
float the distance boundary to divide detected objects [m]"},{"location":"perception/object_range_splitter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/object_range_splitter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/object_range_splitter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/object_range_splitter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/object_range_splitter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/object_velocity_splitter/","title":"object_velocity_splitter","text":""},{"location":"perception/object_velocity_splitter/#object_velocity_splitter","title":"object_velocity_splitter","text":"This package contains a object filter module for autoware_auto_perception_msgs/msg/DetectedObject. This package can split DetectedObjects into two messages by object's speed.
"},{"location":"perception/object_velocity_splitter/#input","title":"Input","text":"Name Type Description~/input/objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected objects."},{"location":"perception/object_velocity_splitter/#output","title":"Output","text":"Name Type Description ~/output/low_speed_objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Objects with low speed ~/output/high_speed_objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Objects with high speed"},{"location":"perception/object_velocity_splitter/#parameters","title":"Parameters","text":"Name Type Description Default value velocity_threshold
double Velocity threshold parameter to split objects [m/s] 3.0"},{"location":"perception/occupancy_grid_map_outlier_filter/","title":"occupancy_grid_map_outlier_filter","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#occupancy_grid_map_outlier_filter","title":"occupancy_grid_map_outlier_filter","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#purpose","title":"Purpose","text":"This node is an outlier filter based on a occupancy grid map. Depending on the implementation of occupancy grid map, it can be called an outlier filter in time series, since the occupancy grid map expresses the occupancy probabilities in time series.
"},{"location":"perception/occupancy_grid_map_outlier_filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Use the occupancy grid map to separate point clouds into those with low occupancy probability and those with high occupancy probability.
The point clouds that belong to the low occupancy probability are not necessarily outliers. In particular, the top of the moving object tends to belong to the low occupancy probability. Therefore, if use_radius_search_2d_filter
is true, then apply an radius search 2d outlier filter to the point cloud that is determined to have a low occupancy probability.
radius_search_2d_filter/search_radius
) and the number of point clouds. In this case, the point cloud to be referenced is not only low occupancy probability points, but all point cloud including high occupancy probability points.radius_search_2d_filter/min_points_and_distance_ratio
and distance from base link. However, the minimum and maximum number of point clouds is limited.The following video is a sample. Yellow points are high occupancy probability, green points are low occupancy probability which is not an outlier, and red points are outliers. At around 0:15 and 1:16 in the first video, a bird crosses the road, but it is considered as an outlier.
~/input/pointcloud
sensor_msgs/PointCloud2
Obstacle point cloud with ground removed. ~/input/occupancy_grid_map
nav_msgs/OccupancyGrid
A map in which the probability of the presence of an obstacle is occupancy probability map"},{"location":"perception/occupancy_grid_map_outlier_filter/#output","title":"Output","text":"Name Type Description ~/output/pointcloud
sensor_msgs/PointCloud2
Point cloud with outliers removed. trajectory ~/output/debug/outlier/pointcloud
sensor_msgs/PointCloud2
Point clouds removed as outliers. ~/output/debug/low_confidence/pointcloud
sensor_msgs/PointCloud2
Point clouds that had a low probability of occupancy in the occupancy grid map. However, it is not considered as an outlier. ~/output/debug/high_confidence/pointcloud
sensor_msgs/PointCloud2
Point clouds that had a high probability of occupancy in the occupancy grid map. trajectory"},{"location":"perception/occupancy_grid_map_outlier_filter/#parameters","title":"Parameters","text":"Name Type Description map_frame
string map frame id base_link_frame
string base link frame id cost_threshold
int Cost threshold of occupancy grid map (0~100). 100 means 100% probability that there is an obstacle, close to 50 means that it is indistinguishable whether it is an obstacle or free space, 0 means that there is no obstacle. enable_debugger
bool Whether to output the point cloud for debugging. use_radius_search_2d_filter
bool Whether or not to apply density-based outlier filters to objects that are judged to have low probability of occupancy on the occupancy grid map. radius_search_2d_filter/search_radius
float Radius when calculating the density radius_search_2d_filter/min_points_and_distance_ratio
float Threshold value of the number of point clouds per radius when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink. radius_search_2d_filter/min_points
int Minimum number of point clouds per radius radius_search_2d_filter/max_points
int Maximum number of point clouds per radius"},{"location":"perception/occupancy_grid_map_outlier_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/probabilistic_occupancy_grid_map/","title":"probabilistic_occupancy_grid_map","text":""},{"location":"perception/probabilistic_occupancy_grid_map/#probabilistic_occupancy_grid_map","title":"probabilistic_occupancy_grid_map","text":""},{"location":"perception/probabilistic_occupancy_grid_map/#purpose","title":"Purpose","text":"This package outputs the probability of having an obstacle as occupancy grid map.
"},{"location":"perception/probabilistic_occupancy_grid_map/#referencesexternal-links","title":"References/External links","text":"Occupancy grid map is generated on map_frame
, and grid orientation is fixed.
You may need to choose scan_origin_frame
and gridmap_origin_frame
which means sensor origin and gridmap origin respectively. Especially, set your main LiDAR sensor frame (e.g. velodyne_top
in sample_vehicle) as a scan_origin_frame
would result in better performance.
Config parameters are managed in config/*.yaml
and here shows its outline.
Additional argument is shown below:
Name Default Descriptionuse_multithread
false
whether to use multithread use_intra_process
false
map_origin
`` parameter to override map_origin_frame
which means grid map origin scan_origin
`` parameter to override scan_origin_frame
which means scanning center output
occupancy_grid
output name use_pointcloud_container
false
container_name
occupancy_grid_map_container
input_obstacle_pointcloud
false
only for laserscan based method. If true, the node subscribe obstacle pointcloud input_obstacle_and_raw_pointcloud
true
only for laserscan based method. If true, the node subscribe both obstacle and raw pointcloud"},{"location":"perception/probabilistic_occupancy_grid_map/#test","title":"Test","text":"This package provides unit tests using gtest
. You can run the test by the following command.
colcon test --packages-select probabilistic_occupancy_grid_map --event-handlers console_direct+\n
Test contains the following.
The basic idea is to take a 2D laserscan and ray trace it to create a time-series processed occupancy grid map.
Optionally, obstacle point clouds and raw point clouds can be received and reflected in the occupancy grid map. The reason is that laserscan only uses the most foreground point in the polar coordinate system, so it throws away a lot of information. As a result, the occupancy grid map is almost an UNKNOWN cell. Therefore, the obstacle point cloud and the raw point cloud are used to reflect what is judged to be the ground and what is judged to be an obstacle in the occupancy grid map. The black and red dots represent raw point clouds, and the red dots represent obstacle point clouds. In other words, the black points are determined as the ground, and the red point cloud is the points determined as obstacles. The gray cells are represented as UNKNOWN cells.
Using the previous occupancy grid map, update the existence probability using a binary Bayesian filter (1). Also, the unobserved cells are time-decayed like the system noise of the Kalman filter (2).
~/input/laserscan
sensor_msgs::LaserScan
laserscan ~/input/obstacle_pointcloud
sensor_msgs::PointCloud2
obstacle pointcloud ~/input/raw_pointcloud
sensor_msgs::PointCloud2
The overall point cloud used to input the obstacle point cloud"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#output","title":"Output","text":"Name Type Description ~/output/occupancy_grid_map
nav_msgs::OccupancyGrid
occupancy grid map"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#parameters","title":"Parameters","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#node-parameters","title":"Node Parameters","text":"Name Type Description map_frame
string map frame base_link_frame
string base_link frame input_obstacle_pointcloud
bool whether to use the optional obstacle point cloud? If this is true, ~/input/obstacle_pointcloud
topics will be received. input_obstacle_and_raw_pointcloud
bool whether to use the optional obstacle and raw point cloud? If this is true, ~/input/obstacle_pointcloud
and ~/input/raw_pointcloud
topics will be received. use_height_filter
bool whether to height filter for ~/input/obstacle_pointcloud
and ~/input/raw_pointcloud
? By default, the height is set to -1~2m. map_length
double The length of the map [m]; e.g., a value of 100 covers the range -50~50 [m] map_resolution
double The map cell resolution [m]"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#assumptions-known-limits","title":"Assumptions / Known limits","text":"In several places we have modified the external code written in BSD3 license.
Bresenham's_line_algorithm
First of all, input obstacle/raw pointcloud are transformed into the polar coordinate centered around scan_origin
and divided int circular bins per angle_increment respectively. At this time, each point belonging to each bin is stored as range data. In addition, the x,y information in the map coordinate is also stored for ray-tracing on the map coordinate. The bin contains the following information for each point
The following figure shows each of the bins from side view.
"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#2nd-step","title":"2nd step","text":"The ray trace is performed in three steps for each cell. The ray trace is done by Bresenham's line algorithm.
Initialize freespace to the farthest point of each bin.
Fill in the unknown cells. Based on the assumption that UNKNOWN
is behind the obstacle, the cells that are more than a distance margin from each obstacle point are filled with UNKNOWN
There are three reasons for setting a distance margin.
When the parameter grid_map_type
is \"OccupancyGridMapProjectiveBlindSpot\" and the scan_origin
is a sensor frame like velodyne_top
for instance, for each obstacle pointcloud, if there are no visible raw pointclouds that are located above the projected ray from the scan_origin
to that obstacle pointcloud, the cells between the obstacle pointcloud and the projected point
are filled with UNKNOWN
. Note that the scan_origin
should not be base_link
if this flag is true because otherwise all the cells behind the obstacle point clouds would be filled with UNKNOWN
.
Fill in the occupied cells. Fill in the point where the obstacle point is located with occupied. In addition, If the distance between obstacle points is less than or equal to the distance margin, that interval is filled with OCCUPIED
because the input may be inaccurate and obstacle points may not be determined as obstacles.
Using the previous occupancy grid map, update the existence probability using a binary Bayesian filter (1). Also, the unobserved cells are time-decayed like the system noise of the Kalman filter (2).
\\[ \\hat{P_{o}} = \\frac{(P_{o} *P_{z})}{(P_{o}* P_{z} + (1 - P_{o}) * \\bar{P_{z}})} \\tag{1} \\] \\[ \\hat{P_{o}} = \\frac{(P_{o} + 0.5 * \\frac{1}{ratio})}{(\\frac{1}{ratio} + 1)} \\tag{2} \\]"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#input","title":"Input","text":"Name Type Description~/input/obstacle_pointcloud
sensor_msgs::PointCloud2
obstacle pointcloud ~/input/raw_pointcloud
sensor_msgs::PointCloud2
The overall point cloud used to input the obstacle point cloud"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#output","title":"Output","text":"Name Type Description ~/output/occupancy_grid_map
nav_msgs::OccupancyGrid
occupancy grid map"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#parameters","title":"Parameters","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#node-parameters","title":"Node Parameters","text":"Name Type Description map_frame
string map frame base_link_frame
string base_link frame use_height_filter
bool whether to height filter for ~/input/obstacle_pointcloud
and ~/input/raw_pointcloud
? By default, the height is set to -1~2m. map_length
double The length of the map [m]; e.g., a value of 100 covers the range -50~50 [m] map_resolution
double The map cell resolution [m] grid_map_type
string The type of grid map for estimating UNKNOWN
region behind obstacle point clouds"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#assumptions-known-limits","title":"Assumptions / Known limits","text":"In several places we have modified the external code written in BSD3 license.
If grid_map_type
is \"OccupancyGridMapProjectiveBlindSpot\" and pub_debug_grid
is true
, it is possible to check the each process of grid map generation by running
ros2 launch probabilistic_occupancy_grid_map debug.launch.xml\n
and visualizing the following occupancy grid map topics (which are listed in config/grid_map_param.yaml):
/perception/occupancy_grid_map/grid_1st_step
: FREE
cells are filled/perception/occupancy_grid_map/grid_2nd_step
: UNKNOWN
cells are filled/perception/occupancy_grid_map/grid_3rd_step
: OCCUPIED
cells are filledFor simplicity, we use OGM as the meaning of the occupancy grid map.
This package is used to fuse the OGMs from synchronized sensors. Especially for the lidar.
Here shows the example OGM for the this synchronized OGM fusion.
left lidar OGM right lidar OGM top lidar OGMOGM fusion with asynchronous sensor outputs is not suitable for this package. Asynchronous OGM fusion is under construction.
"},{"location":"perception/probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#processing-flow","title":"Processing flow","text":"The processing flow of this package is shown in the following figure.
input_ogm_topics
list of nav_msgs::msg::OccupancyGrid List of input topics for Occupancy Grid Maps. This parameter is given in list, so Output topic name Type Description ~/output/occupancy_grid_map
nav_msgs::msg::OccupancyGrid Output topic name of the fused Occupancy Grid Map. ~/debug/single_frame_map
nav_msgs::msg::OccupancyGrid (debug topic) Output topic name of the single frame fused Occupancy Grid Map."},{"location":"perception/probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#parameters","title":"Parameters","text":"Synchronized OGM fusion node parameters are shown in the following table. Main parameters to be considered in the fusion node is shown as bold.
Ros param name Sample value Description input_ogm_topics [\"topic1\", \"topic2\"] List of input topics for Occupancy Grid Maps input_ogm_reliabilities [0.8, 0.2] Weights for the reliability of each input topic fusion_method \"overwrite\" Method of fusion (\"overwrite\", \"log-odds\", \"dempster-shafer\") match_threshold_sec 0.01 Matching threshold in milliseconds timeout_sec 0.1 Timeout duration in seconds input_offset_sec [0.0, 0.0] Offset time in seconds for each input topic mapframe \"map\" Frame name for the fused map baselink_frame \"base_link\" Frame name for the base link gridmap_origin_frame \"base_link\" Frame name for the origin of the grid map fusion_map_length_x 100.0 Length of the fused map along the X-axis fusion_map_length_y 100.0 Length of the fused map along the Y-axis fusion_map_resolution 0.5 Resolution of the fused mapSince this node assumes that the OGMs from synchronized sensors are generated in the same time, we need to tune the match_threshold_sec
, timeout_sec
and input_offset_sec
parameters to successfully fuse the OGMs.
For the single frame fusion, the following fusion methods are supported.
Fusion Method in parameter Descriptionoverwrite
The value of the cell in the fused OGM is overwritten by the value of the cell in the OGM with the highest priority. We set priority as Occupied
> Free
> Unknown
. log-odds
The value of the cell in the fused OGM is calculated by the log-odds ratio method, which is known as a Bayesian fusion method. The log-odds of a probability \\(p\\) can be written as \\(l_p = \\log(\\frac{p}{1-p})\\). And the fused log-odds is calculated by the sum of log-odds. \\(l_f = \\Sigma l_p\\) dempster-shafer
The value of the cell in the fused OGM is calculated by the Dempster-Shafer theory[1]. This is also popular method to handle multiple evidences. This package applied conflict escape logic in [2] for the performance. See references for the algorithm details. For the multi frame fusion, currently only supporting log-odds
fusion method.
The minimum node launch will be like the following.
<?xml version=\"1.0\"?>\n<launch>\n<arg name=\"output_topic\" default=\"~/output/occupancy_grid_map\"/>\n<arg name=\"fusion_node_param_path\" default=\"$(find-pkg-share probabilistic_occupancy_grid_map)/config/synchronized_grid_map_fusion_node.param.yaml\"/>\n\n<node name=\"synchronized_grid_map_fusion_node\" exec=\"synchronized_grid_map_fusion_node\" pkg=\"probabilistic_occupancy_grid_map\" output=\"screen\">\n<remap from=\"~/output/occupancy_grid_map\" to=\"$(var output_topic)\"/>\n<param from=\"$(var fusion_node_param_path)\"/>\n</node>\n</launch>\n
"},{"location":"perception/probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#optional-generate-ogms-in-each-sensor-frame","title":"(Optional) Generate OGMs in each sensor frame","text":"You need to generate OGMs in each sensor frame before achieving grid map fusion.
probabilistic_occupancy_grid_map
package supports to generate OGMs for the each from the point cloud data.
<include file=\"$(find-pkg-share tier4_perception_launch)/launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml\">\n<arg name=\"input/obstacle_pointcloud\" value=\"/perception/obstacle_segmentation/single_frame/pointcloud_raw\"/>\n<arg name=\"input/raw_pointcloud\" value=\"/sensing/lidar/right/outlier_filtered/pointcloud_synchronized\"/>\n<arg name=\"output\" value=\"/perception/occupancy_grid_map/right_lidar/map\"/>\n<arg name=\"map_frame\" value=\"base_link\"/>\n<arg name=\"scan_origin\" value=\"velodyne_right\"/>\n<arg name=\"use_intra_process\" value=\"true\"/>\n<arg name=\"use_multithread\" value=\"true\"/>\n<arg name=\"use_pointcloud_container\" value=\"$(var use_pointcloud_container)\"/>\n<arg name=\"pointcloud_container_name\" value=\"$(var pointcloud_container_name)\"/>\n<arg name=\"method\" value=\"pointcloud_based_occupancy_grid_map\"/>\n<arg name=\"param_file\" value=\"$(find-pkg-share probabilistic_occupancy_grid_map)/config/pointcloud_based_occupancy_grid_map_fusion.param.yaml\"/>\n</include>\n\n\nThe minimum parameter for the OGM generation in each frame is shown in the following table.\n\n|Parameter|Description|\n|--|--|\n|`input/obstacle_pointcloud`| The input point cloud data for the OGM generation. This point cloud data should be the point cloud data which is segmented as the obstacle.|\n|`input/raw_pointcloud`| The input point cloud data for the OGM generation. This point cloud data should be the point cloud data which is not segmented as the obstacle. |\n|`output`| The output topic of the OGM. |\n|`map_frame`| The tf frame for the OGM center origin. |\n|`scan_origin`| The tf frame for the sensor origin. |\n|`method`| The method for the OGM generation. Currently we support `pointcloud_based_occupancy_grid_map` and `laser_scan_based_occupancy_grid_map`. The pointcloud based method is recommended. |\n|`param_file`| The parameter file for the OGM generation. See [example parameter file](config/pointcloud_based_occupancy_grid_map_for_fusion.param.yaml) |\n
We recommend to use same map_frame
, size and resolutions for the OGMs from synchronized sensors. Also, remember to set enable_single_frame_mode
and filter_obstacle_pointcloud_by_raw_pointcloud
to true
in the probabilistic_occupancy_grid_map
package (you do not need to set these parameters if you use the above example config file).
We prepared the launch file to run both OGM generation node and fusion node in grid_map_fusion_with_synchronized_pointclouds.launch.py
You can include this launch file like the following.
<include file=\"$(find-pkg-share probabilistic_occupancy_grid_map)/launch/grid_map_fusion_with_synchronized_pointclouds.launch.py\">\n<arg name=\"output\" value=\"/perception/occupancy_grid_map/fusion/map\"/>\n<arg name=\"use_intra_process\" value=\"true\"/>\n<arg name=\"use_multithread\" value=\"true\"/>\n<arg name=\"use_pointcloud_container\" value=\"$(var use_pointcloud_container)\"/>\n<arg name=\"pointcloud_container_name\" value=\"$(var pointcloud_container_name)\"/>\n<arg name=\"method\" value=\"pointcloud_based_occupancy_grid_map\"/>\n<arg name=\"fusion_config_file\" value=\"$(var fusion_config_file)\"/>\n<arg name=\"ogm_config_file\" value=\"$(var ogm_config_file)\"/>\n</include>\n
The minimum parameter for the launch file is shown in the following table.
Parameter Descriptionoutput
The output topic of the finally fused OGM. method
The method for the OGM generation. Currently we support pointcloud_based_occupancy_grid_map
and laser_scan_based_occupancy_grid_map
. The pointcloud based method is recommended. fusion_config_file
The parameter file for the grid map fusion. See example parameter file ogm_config_file
The parameter file for the OGM generation. See example parameter file"},{"location":"perception/probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#references","title":"References","text":"This package contains a radar noise filter module for autoware_auto_perception_msgs/msg/DetectedObject. This package can filter the noise objects which cross to the ego vehicle.
"},{"location":"perception/radar_crossing_objects_noise_filter/#algorithm","title":"Algorithm","text":""},{"location":"perception/radar_crossing_objects_noise_filter/#background","title":"Background","text":"This package aim to filter the noise objects which cross from the ego vehicle. The reason why these objects are noise is as below.
Radars can get velocity information of objects as doppler velocity, but cannot get vertical velocity to doppler velocity directory. Some radars can output the objects with not only doppler velocity but also vertical velocity by estimation. If the vertical velocity estimation is poor, it leads to output noise objects. In other words, the above situation is that the objects which has vertical twist viewed from ego vehicle can tend to be noise objects.
The example is below figure. Velocity estimation fails on static objects, resulting in ghost objects crossing in front of ego vehicles.
When the ego vehicle turns around, the radars outputting at the object level sometimes fail to estimate the twist of objects correctly even if radar_tracks_msgs_converter compensates by the ego vehicle twist. So if an object detected by radars has circular motion viewing from base_link, it is likely that the speed is estimated incorrectly and that the object is a static object.
The example is below figure. When the ego vehicle turn right, the surrounding objects have left circular motion.
"},{"location":"perception/radar_crossing_objects_noise_filter/#detail-algorithm","title":"Detail Algorithm","text":"To filter the objects crossing to ego vehicle, this package filter the objects as below algorithm.
// If velocity of an object is rather than the velocity_threshold,\n// and crossing_yaw is near to vertical\n// angle_threshold < crossing_yaw < pi - angle_threshold\nif (\nvelocity > node_param_.velocity_threshold &&\nabs(std::cos(crossing_yaw)) < abs(std::cos(node_param_.angle_threshold))) {\n// Object is noise object;\n} else {\n// Object is not noise object;\n}\n
"},{"location":"perception/radar_crossing_objects_noise_filter/#input","title":"Input","text":"Name Type Description ~/input/objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Radar objects."},{"location":"perception/radar_crossing_objects_noise_filter/#output","title":"Output","text":"Name Type Description ~/output/noise_objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Noise objects ~/output/filtered_objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Filtered objects"},{"location":"perception/radar_crossing_objects_noise_filter/#parameters","title":"Parameters","text":"Name Type Description Default value angle_threshold
double The angle threshold parameter to filter [rad]. This parameter has condition that 0 < angle_threshold
< pi / 2. See algorithm chapter for details. 1.0472 velocity_threshold
double The velocity threshold parameter to filter [m/s]. See algorithm chapter for details. 3.0"},{"location":"perception/radar_fusion_to_detected_object/","title":"radar_fusion_to_detected_object","text":""},{"location":"perception/radar_fusion_to_detected_object/#radar_fusion_to_detected_object","title":"radar_fusion_to_detected_object","text":"This package contains a sensor fusion module for radar-detected objects and 3D detected objects. The fusion node can:
The document of core algorithm is here
"},{"location":"perception/radar_fusion_to_detected_object/#parameters-for-sensor-fusion","title":"Parameters for sensor fusion","text":"Name Type Description Default value bounding_box_margin double The distance to extend the 2D bird's-eye view Bounding Box on each side. This distance is used as a threshold to find radar centroids falling inside the extended box. [m] 2.0 split_threshold_velocity double The object's velocity threshold to decide to split for two objects from radar information (currently not implemented) [m/s] 5.0 threshold_yaw_diff double The yaw orientation threshold. If \u2223 \u03b8_ob \u2212 \u03b8_ra \u2223 < threshold \u00d7 yaw_diff attached to radar information include estimated velocity, where\u03b8obis yaw angle from 3d detected object,*\u03b8_ra is yaw angle from radar object. [rad] 0.35"},{"location":"perception/radar_fusion_to_detected_object/#weight-parameters-for-velocity-estimation","title":"Weight parameters for velocity estimation","text":"To tune these weight parameters, please see document in detail.
Name Type Description Default value velocity_weight_average double The twist coefficient of average twist of radar data in velocity estimation. 0.0 velocity_weight_median double The twist coefficient of median twist of radar data in velocity estimation. 0.0 velocity_weight_min_distance double The twist coefficient of radar data nearest to the center of bounding box in velocity estimation. 1.0 velocity_weight_target_value_average double The twist coefficient of target value weighted average in velocity estimation. Target value is amplitude if using radar pointcloud. Target value is probability if using radar objects. 0.0 velocity_weight_target_value_top double The twist coefficient of top target value radar data in velocity estimation. Target value is amplitude if using radar pointcloud. Target value is probability if using radar objects. 0.0"},{"location":"perception/radar_fusion_to_detected_object/#parameters-for-fixed-object-information","title":"Parameters for fixed object information","text":"Name Type Description Default value convert_doppler_to_twist bool Convert doppler velocity to twist using the yaw information of a detected object. false threshold_probability float If the probability of an output object is lower than this parameter, and the output object does not have radar points/objects, then delete the object. 0.4 compensate_probability bool If this parameter is true, compensate probability of objects to threshold probability. false"},{"location":"perception/radar_fusion_to_detected_object/#radar_object_fusion_to_detected_object","title":"radar_object_fusion_to_detected_object","text":"Sensor fusion with radar objects and a detected object.
ros2 launch radar_fusion_to_detected_object radar_object_to_detected_object.launch.xml\n
"},{"location":"perception/radar_fusion_to_detected_object/#input","title":"Input","text":"Name Type Description ~/input/objects
autoware_auto_perception_msgs/msg/DetectedObject.msg 3D detected objects. ~/input/radar_objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Radar objects. Note that the frame_id needs to be the same as ~/input/objects
"},{"location":"perception/radar_fusion_to_detected_object/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected object with twist. ~/debug/low_confidence_objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected objects that are not output as ~/output/objects
because of low confidence"},{"location":"perception/radar_fusion_to_detected_object/#parameters","title":"Parameters","text":"Name Type Description Default value update_rate_hz double The update rate [hz]. 20.0"},{"location":"perception/radar_fusion_to_detected_object/#radar_scan_fusion_to_detected_object-tbd","title":"radar_scan_fusion_to_detected_object (TBD)","text":"TBD
"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/","title":"Algorithm","text":""},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#common-algorithm","title":"Common Algorithm","text":""},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#1-link-between-3d-bounding-box-and-radar-data","title":"1. Link between 3d bounding box and radar data","text":"Choose radar pointcloud/objects within 3D bounding box from lidar-base detection with margin space from bird's-eye view.
"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#2-feature-support-split-the-object-going-in-a-different-direction","title":"2. [Feature support] Split the object going in a different direction","text":"Estimate twist from chosen radar pointcloud/objects using twist and target value (Target value is amplitude if using radar pointcloud. Target value is probability if using radar objects). First, the estimation function calculate
Second, the estimation function calculate weighted average of these list. Third, twist information of estimated twist is attached to an object.
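A minimal sketch of this weighted-average step, assuming hypothetical Twist2D and estimateTwist names (the real implementation lives in the package source); the five candidates correspond to the velocity_weight_* parameters listed above:

#include <array>
#include <cstddef>

struct Twist2D { double vx; double vy; };

// Candidate twists in the order: average, median, nearest to the bounding box
// center, target-value-weighted average, and top target value.
Twist2D estimateTwist(
  const std::array<Twist2D, 5> & candidates, const std::array<double, 5> & weights)
{
  Twist2D result{0.0, 0.0};
  double weight_sum = 0.0;
  for (std::size_t i = 0; i < candidates.size(); ++i) {
    result.vx += weights[i] * candidates[i].vx;
    result.vy += weights[i] * candidates[i].vy;
    weight_sum += weights[i];
  }
  if (weight_sum > 0.0) {  // avoid dividing by zero when all weights are 0
    result.vx /= weight_sum;
    result.vy /= weight_sum;
  }
  return result;
}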
"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#4-feature-support-option-convert-doppler-velocity-to-twist","title":"4. [Feature support] [Option] Convert doppler velocity to twist","text":"If the twist information of radars is doppler velocity, convert from doppler velocity to twist using yaw angle of DetectedObject. Because radar pointcloud has only doppler velocity information, radar pointcloud fusion should use this feature. On the other hand, because radar objects have twist information, radar object fusion should not use this feature.
"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#5-delete-objects-with-low-probability","title":"5. Delete objects with low probability","text":"This package contains a radar object clustering for autoware_auto_perception_msgs/msg/DetectedObject input.
This package can make clustered objects from radar DetectedObjects, i.e., objects converted from RadarTracks by radar_tracks_msgs_converter and processed by a noise filter. In other words, this package can combine multiple radar detections of one object into one and adjust the class and size.
"},{"location":"perception/radar_object_clustering/#algorithm","title":"Algorithm","text":""},{"location":"perception/radar_object_clustering/#background","title":"Background","text":"In radars with object output, there are cases that multiple detection results are obtained from one object, especially for large vehicles such as trucks and trailers. Its multiple detection results cause separation of objects in tracking module. Therefore, by this package the multiple detection results are clustered into one object in advance.
"},{"location":"perception/radar_object_clustering/#detail-algorithm","title":"Detail Algorithm","text":"base_link
At first, to prevent changing the result from depending on the order of objects in DetectedObjects, input objects are sorted by distance from base_link
. In addition, to apply matching in closeness order considering occlusion, objects are sorted in order of short distance in advance.
If two radar objects are near each other, and their yaw directions and velocities are similar (the degree of similarity is defined by parameters), then they are clustered, as sketched below. Note that radar characteristics affect the parameters for this matching. For example, if the range or angle resolution is low and the velocity accuracy is high, then the distance_threshold
parameter should be larger, and the matching should be configured to rely more strongly on velocity similarity.
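A minimal sketch of the pairwise grouping test, assuming illustrative RadarObject and isSameObject names; the thresholds correspond to the distance_threshold, angle_threshold, and velocity_threshold parameters in the table below:

#include <cmath>

struct RadarObject { double x; double y; double yaw; double speed; };

// Two radar objects are clustered when their positions, yaw directions, and
// speeds are all similar.
bool isSameObject(
  const RadarObject & a, const RadarObject & b, double distance_threshold,
  double angle_threshold, double velocity_threshold)
{
  constexpr double kPi = 3.14159265358979323846;
  const double distance = std::hypot(a.x - b.x, a.y - b.y);
  double yaw_diff = std::fmod(std::abs(a.yaw - b.yaw), 2.0 * kPi);
  if (yaw_diff > kPi) {
    yaw_diff = 2.0 * kPi - yaw_diff;  // wrap to [0, pi]
  }
  const double velocity_diff = std::abs(a.speed - b.speed);
  return distance < distance_threshold && yaw_diff < angle_threshold &&
         velocity_diff < velocity_threshold;
}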
After grouping all radar objects, if multiple radar objects are grouped together, the kinematics of the new clustered object is calculated as their average, and the label and shape of the new clustered object are taken from the radar object with the highest confidence.
When the label information from radar output lacks accuracy, it is recommended to set the is_fixed_label parameter to true. If the parameter is true, the label of a clustered object is overwritten by the label set in the fixed_label parameter. If this package is used for faraway dynamic object detection with radar, it is recommended to set the label to VEHICLE.
When the size information from radar output lacks accuracy, it is recommended to set the is_fixed_size parameter to true. If the parameter is true, the size of a clustered object is overwritten by the values set in the size_x, size_y, and size_z parameters. If this package is used for faraway dynamic object detection with radar, it is recommended to set size_x, size_y, and size_z to the average size of vehicles. Note that to use the output with multi_object_tracker, the size parameters need to exceed its min_area_matrix parameters.
For now, size estimation for the clustered object is not implemented, so it is recommended to set the is_fixed_size parameter to true and to set the size parameters to values near the average size of vehicles."},{"location":"perception/radar_object_clustering/#input","title":"Input","text":"Name Type Description
~/input/objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Radar objects."},{"location":"perception/radar_object_clustering/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Output objects"},{"location":"perception/radar_object_clustering/#parameters","title":"Parameters","text":"Name Type Description Default value angle_threshold
double Angle threshold to judge whether radar detections come from one object. [rad] 0.174 distance_threshold
double Distance threshold to judge whether radar detections come from one object. [m] 4.0 velocity_threshold
double Velocity threshold to judge whether radar detections come from one object. [m/s] 2.0 is_fixed_label
bool If this parameter is true, the label of a clustered object is overwritten by the label set by fixed_label
parameter. false fixed_label
string If is_fixed_label
is true, the label of a clustered object is overwritten by this parameter. \"UNKNOWN\" is_fixed_size
bool If this parameter is true, the size of a clustered object is overwritten by the sizes set by size_x
, size_y
, and size_z
parameters. false size_x
double If is_fixed_size
is true, the x-axis size of a clustered object is overwritten by this parameter. [m] 4.0 size_y
double If is_fixed_size
is true, the y-axis size of a clustered object is overwritten by this parameter. [m] 1.5 size_z
double If is_fixed_size
is true, the z-axis size of a clustered object is overwritten by this parameter. [m] 1.5"},{"location":"perception/radar_object_tracker/","title":"Radar Object Tracker","text":""},{"location":"perception/radar_object_tracker/#radar-object-tracker","title":"Radar Object Tracker","text":""},{"location":"perception/radar_object_tracker/#purpose","title":"Purpose","text":"This package provides a radar object tracking node that processes sequences of detected objects to assign consistent identities to them and estimate their velocities.
"},{"location":"perception/radar_object_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"This radar object tracker is a combination of data association and tracking algorithms.
"},{"location":"perception/radar_object_tracker/#data-association","title":"Data Association","text":"The data association algorithm matches detected objects to existing tracks.
"},{"location":"perception/radar_object_tracker/#tracker-models","title":"Tracker Models","text":"The tracker models used in this package vary based on the class of the detected object. See more details in the models.md.
"},{"location":"perception/radar_object_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/radar_object_tracker/#input","title":"Input","text":"Name Type Description~/input
autoware_auto_perception_msgs::msg::DetectedObjects
Detected objects /vector/map
autoware_auto_msgs::msg::HADMapBin
Map data"},{"location":"perception/radar_object_tracker/#output","title":"Output","text":"Name Type Description ~/output
autoware_auto_perception_msgs::msg::TrackedObjects
Tracked objects"},{"location":"perception/radar_object_tracker/#parameters","title":"Parameters","text":""},{"location":"perception/radar_object_tracker/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description publish_rate
double 10.0 The rate at which to publish the output messages world_frame_id
string \"map\" The frame ID of the world coordinate system enable_delay_compensation
bool false Whether to enable delay compensation. If set to true
, output topic is published by timer with publish_rate
. tracking_config_directory
string \"./config/tracking/\" The directory containing the tracking configuration files enable_logging
bool false Whether to enable logging logging_file_path
string \"/tmp/association_log.json\" The path to the file where logs should be written tracker_lifetime
double 1.0 The lifetime of the tracker in seconds use_distance_based_noise_filtering
bool true Whether to use distance based filtering minimum_range_threshold
double 70.0 Minimum distance threshold for filtering in meters use_map_based_noise_filtering
bool true Whether to use map based filtering max_distance_from_lane
double 5.0 Maximum distance from lane for filtering in meters max_angle_diff_from_lane
double 0.785398 Maximum angle difference from lane for filtering in radians max_lateral_velocity
double 5.0 Maximum lateral velocity for filtering in m/s can_assign_matrix
array An array of integers used in the data association algorithm max_dist_matrix
array An array of doubles used in the data association algorithm max_area_matrix
array An array of doubles used in the data association algorithm min_area_matrix
array An array of doubles used in the data association algorithm max_rad_matrix
array An array of doubles used in the data association algorithm min_iou_matrix
array An array of doubles used in the data association algorithm See more details in the models.md.
"},{"location":"perception/radar_object_tracker/#tracker-parameters","title":"Tracker parameters","text":"Currently, this package supports the following trackers:
linear_motion_tracker
constant_turn_rate_motion_tracker
Default settings for each tracker are defined in the ./config/tracking/, and described in models.md.
"},{"location":"perception/radar_object_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/radar_object_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/radar_object_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/radar_object_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/radar_object_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/radar_object_tracker/models/","title":"models","text":""},{"location":"perception/radar_object_tracker/models/#models","title":"models","text":"Tracking models can be chosen from the ros parameter ~tracking_model
:
Each model has its own parameters, which can be set in the ros parameter server.
noise model
Just idea, not implemented yet.
\\[ \\begin{align} x_{k+1} &= x_k + \\frac{v_k}{\\omega_k} (sin(\\theta_k + \\omega_k dt) - sin(\\theta_k)) \\\\ y_{k+1} &= y_k + \\frac{v_k}{\\omega_k} (cos(\\theta_k) - cos(\\theta_k + \\omega_k dt)) \\\\ v_{k+1} &= v_k \\\\ \\theta_{k+1} &= \\theta_k + \\omega_k dt \\\\ \\omega_{k+1} &= \\omega_k \\end{align} \\]"},{"location":"perception/radar_object_tracker/models/#noise-filtering","title":"Noise filtering","text":"Radar sensors often have noisy measurement. So we use the following filter to reduce the false positive objects.
The figure below shows the current noise filtering process.
"},{"location":"perception/radar_object_tracker/models/#minimum-range-filter","title":"minimum range filter","text":"In most cases, Radar sensors are used with other sensors such as LiDAR and Camera, and Radar sensors are used to detect objects far away. So we can filter out objects that are too close to the sensor.
use_distance_based_noise_filtering
parameter is used to enable/disable this filter, and minimum_range_threshold
parameter is used to set the threshold.
With lanelet map information, We can filter out false positive objects that are not likely important obstacles.
We filter out objects that satisfy the following conditions:
Each condition can be set by the following parameters:
max_distance_from_lane
max_angle_diff_from_lane
max_lateral_velocity
This package converts from radar_msgs/msg/RadarTracks into autoware_auto_perception_msgs/msg/DetectedObject and autoware_auto_perception_msgs/msg/TrackedObject.
~/input/radar_objects
(radar_msgs/msg/RadarTracks.msg): Input radar topic~/input/odometry
(nav_msgs/msg/Odometry.msg): Ego vehicle odometry topic~/output/radar_detected_objects
(autoware_auto_perception_msgs/msg/DetectedObject.idl): The topic converted to Autoware's message. This is used for radar sensor fusion detection and radar detection.~/output/radar_tracked_objects
(autoware_auto_perception_msgs/msg/TrackedObject.idl): The topic converted to Autoware's message. This is used for tracking layer sensor fusion.update_rate_hz
(double): The update rate [hz].new_frame_id
(string): The header frame of the output topic.use_twist_compensation
(bool): If the parameter is true, then the twist of the output objects' topic is compensated by ego vehicle motion.use_twist_yaw_compensation
(bool): If the parameter is true, then the ego motion compensation will also consider yaw motion of the ego vehicle.static_object_speed_threshold
(float): Specify the threshold for static object speed which determines the flag is_stationary
[m/s].This package convert the label from radar_msgs/msg/RadarTrack.msg
to Autoware label. Label id is defined as below.
This node calculates a refined object shape (bounding box, cylinder, convex hull) in which a pointcloud cluster fits according to a label.
"},{"location":"perception/shape_estimation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/shape_estimation/#fitting-algorithms","title":"Fitting algorithms","text":"bounding box
L-shape fitting. See reference below for details.
cylinder
cv::minEnclosingCircle
convex hull
cv::convexHull
input
tier4_perception_msgs::msg::DetectedObjectsWithFeature
detected objects with labeled cluster"},{"location":"perception/shape_estimation/#output","title":"Output","text":"Name Type Description output/objects
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects with refined shape"},{"location":"perception/shape_estimation/#parameters","title":"Parameters","text":"Name Type Description Default Range use_corrector boolean The flag to apply rule-based corrector. true N/A use_filter boolean The flag to apply rule-based filter true N/A use_vehicle_reference_yaw boolean The flag to use vehicle reference yaw for corrector false N/A use_vehicle_reference_shape_size boolean The flag to use vehicle reference shape size false N/A use_boost_bbox_optimizer boolean The flag to use boost bbox optimizer false N/A"},{"location":"perception/shape_estimation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD
"},{"location":"perception/shape_estimation/#referencesexternal-links","title":"References/External links","text":"L-shape fitting implementation of the paper:
@conference{Zhang-2017-26536,\nauthor = {Xiao Zhang and Wenda Xu and Chiyu Dong and John M. Dolan},\ntitle = {Efficient L-Shape Fitting for Vehicle Detection Using Laser Scanners},\nbooktitle = {2017 IEEE Intelligent Vehicles Symposium},\nyear = {2017},\nmonth = {June},\nkeywords = {autonomous driving, laser scanner, perception, segmentation},\n}\n
"},{"location":"perception/simple_object_merger/","title":"simple_object_merger","text":""},{"location":"perception/simple_object_merger/#simple_object_merger","title":"simple_object_merger","text":"This package can merge multiple topics of autoware_auto_perception_msgs/msg/DetectedObject with low calculation cost.
"},{"location":"perception/simple_object_merger/#design","title":"Design","text":""},{"location":"perception/simple_object_merger/#background","title":"Background","text":"Object_merger is mainly used for merge process with DetectedObjects. There are 2 characteristics in Object_merger
. First, object_merger
solve data association algorithm like Hungarian algorithm for matching problem, but it needs computational cost. Second, object_merger
can handle only 2 DetectedObjects topics and cannot handle more than 2 topics in one node. To merge 6 DetectedObjects topics, 6 object_merger
nodes need to stand for now.
Therefore, simple_object_merger
aim to merge multiple DetectedObjects with low calculation cost. The package do not use data association algorithm to reduce the computational cost, and it can handle more than 2 topics in one node to prevent launching a large number of nodes.
Simple_object_merger
can be used for multiple radar detection. By combining them into one topic from multiple radar topics, the pipeline for faraway detection with radar can be simpler.
Merged objects will not be published until all topic data is received when initializing. In addition, to care sensor data drops and delayed, this package has a parameter to judge timeout. When the latest time of the data of a topic is older than the timeout parameter, it is not merged for output objects. For now specification of this package, if all topic data is received at first and after that the data drops, and the merged objects are published without objects which is judged as timeout.The timeout parameter should be determined by sensor cycle time.
Because this package does not have matching processing, there are overlapping objects depending on the input objects. So output objects can be used only when post-processing is used. For now, clustering processing can be used as post-processing.
"},{"location":"perception/simple_object_merger/#interface","title":"Interface","text":""},{"location":"perception/simple_object_merger/#input","title":"Input","text":"Input topics is defined by the parameter of input_topics
(List[string]). The type of input topics is std::vector<autoware_auto_perception_msgs/msg/DetectedObjects.msg>
.
~/output/objects
(autoware_auto_perception_msgs/msg/DetectedObjects.msg
)update_rate_hz
(double) [hz]This parameter is update rate for the onTimer
function. This parameter should be same as the frame rate of input topics.
new_frame_id
(string)This parameter is the header frame_id of the output topic. If output topics use for perception module, it should be set for \"base_link\"
timeout_threshold
(double) [s]This parameter is the threshold for timeout judgement. If the time difference between the first topic of input_topics
and an input topic is exceeded to this parameter, then the objects of topic is not merged to output objects.
for (size_t i = 0; i < input_topic_size; i++) {\ndouble time_diff = rclcpp::Time(objects_data_.at(i)->header.stamp).seconds() -\nrclcpp::Time(objects_data_.at(0)->header.stamp).seconds();\nif (std::abs(time_diff) < node_param_.timeout_threshold) {\n// merge objects\n}\n}\n
input_topics
(List[string])This parameter is the name of input topics. For example, when this packages use for radar objects, \"[/sensing/radar/front_center/detected_objects, /sensing/radar/front_left/detected_objects, /sensing/radar/rear_left/detected_objects, /sensing/radar/rear_center/detected_objects, /sensing/radar/rear_right/detected_objects, /sensing/radar/front_right/detected_objects]\"
can be set. For now, the time difference is calculated by the header time between the first topic of input_topics
and the input topics, so the most important objects to detect should be set in the first of input_topics
list.
This package classifies arbitrary categories using TensorRT for efficient and faster inference. Specifically, this optimizes preprocessing for efficient inference on embedded platform. Moreover, we support dynamic batched inference in GPUs and DLAs.
"},{"location":"perception/tensorrt_yolo/","title":"tensorrt_yolo","text":""},{"location":"perception/tensorrt_yolo/#tensorrt_yolo","title":"tensorrt_yolo","text":""},{"location":"perception/tensorrt_yolo/#purpose","title":"Purpose","text":"This package detects 2D bounding boxes for target objects e.g., cars, trucks, bicycles, and pedestrians on a image based on YOLO(You only look once) model.
"},{"location":"perception/tensorrt_yolo/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/tensorrt_yolo/#cite","title":"Cite","text":"yolov3
Redmon, J., & Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.
yolov4
Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
yolov5
Jocher, G., et al. (2021). ultralytics/yolov5: v6.0 - YOLOv5n 'Nano' models, Roboflow integration, TensorFlow export, OpenCV DNN support (v6.0). Zenodo. https://doi.org/10.5281/zenodo.5563715
"},{"location":"perception/tensorrt_yolo/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/tensorrt_yolo/#input","title":"Input","text":"Name Type Descriptionin/image
sensor_msgs/Image
The input image"},{"location":"perception/tensorrt_yolo/#output","title":"Output","text":"Name Type Description out/objects
tier4_perception_msgs/DetectedObjectsWithFeature
The detected objects with 2D bounding boxes out/image
sensor_msgs/Image
The image with 2D bounding boxes for visualization"},{"location":"perception/tensorrt_yolo/#parameters","title":"Parameters","text":""},{"location":"perception/tensorrt_yolo/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description anchors
double array [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0] The anchors to create bounding box candidates scale_x_y
double array [1.0, 1.0, 1.0] The scale parameter to eliminate grid sensitivity score_thresh
double 0.1 If the objectness score is less than this value, the object is ignored in yolo layer. iou_thresh
double 0.45 The iou threshold for NMS method detections_per_im
int 100 The maximum detection number for one frame use_darknet_layer
bool true The flag to use yolo layer in darknet ignore_thresh
double 0.5 If the output score is less than this value, the object is ignored."},{"location":"perception/tensorrt_yolo/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description
string \"\" Packages data and artifacts directory path onnx_file
string \"\" The onnx file name for yolo model engine_file
string \"\" The tensorrt engine file name for yolo model label_file
string \"\" The label file with label names for detected objects written on it calib_image_directory
string \"\" The directory name including calibration images for int8 inference calib_cache_file
string \"\" The calibration cache file for int8 inference mode
string \"FP32\" The inference mode: \"FP32\", \"FP16\", \"INT8\" gpu_id
int 0 GPU device ID that runs the model"},{"location":"perception/tensorrt_yolo/#assumptions-known-limits","title":"Assumptions / Known limits","text":"This package includes multiple licenses.
"},{"location":"perception/tensorrt_yolo/#onnx-model","title":"Onnx model","text":"All YOLO ONNX models are converted from the officially trained model. If you need information about training datasets and conditions, please refer to the official repositories.
All models are downloaded during env preparation by ansible (as mention in installation). It is also possible to download them manually, see Manual downloading of artifacts . When launching the node with a model for the first time, the model is automatically converted to TensorRT, although this may take some time.
"},{"location":"perception/tensorrt_yolo/#yolov3","title":"YOLOv3","text":"YOLOv3: Converted from darknet weight file and conf file.
YOLOv4: Converted from darknet weight file and conf file.
YOLOv4-tiny: Converted from darknet weight file and conf file.
Refer to this guide
This package detects target objects e.g., cars, trucks, bicycles, and pedestrians on a image based on YOLOX model.
"},{"location":"perception/tensorrt_yolox/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/tensorrt_yolox/#cite","title":"Cite","text":"Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, \"YOLOX: Exceeding YOLO Series in 2021\", arXiv preprint arXiv:2107.08430, 2021 [ref]
"},{"location":"perception/tensorrt_yolox/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/tensorrt_yolox/#input","title":"Input","text":"Name Type Descriptionin/image
sensor_msgs/Image
The input image"},{"location":"perception/tensorrt_yolox/#output","title":"Output","text":"Name Type Description out/objects
tier4_perception_msgs/DetectedObjectsWithFeature
The detected objects with 2D bounding boxes out/image
sensor_msgs/Image
The image with 2D bounding boxes for visualization"},{"location":"perception/tensorrt_yolox/#parameters","title":"Parameters","text":""},{"location":"perception/tensorrt_yolox/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold
float 0.3 If the objectness score is less than this value, the object is ignored in yolox layer. nms_threshold
float 0.7 The IoU threshold for NMS method NOTE: These two parameters are only valid for \"plain\" model (described later).
"},{"location":"perception/tensorrt_yolox/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Descriptionmodel_path
string \"\" The onnx file name for yolox model label_path
string \"\" The label file with label names for detected objects written on it precision
string \"fp16\" The inference mode: \"fp32\", \"fp16\", \"int8\" build_only
bool false shutdown node after TensorRT engine file is built calibration_algorithm
string \"MinMax\" Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: Entropy\",(\"Legacy\" | \"Percentile\"), \"MinMax\"] dla_core_id
int -1 If positive ID value is specified, the node assign inference task to the DLA core quantize_first_layer
bool false If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8 quantize_last_layer
bool false If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8 profile_per_layer
bool false If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. clip_value
double 0.0 If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration preprocess_on_gpu
bool true If true, pre-processing is performed on GPU calibration_image_list_path
string \"\" Path to a file which contains path to images. Those images will be used for int8 quantization."},{"location":"perception/tensorrt_yolox/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The label contained in detected 2D bounding boxes (i.e., out/objects
) will be one of the following: CAR, PEDESTRIAN, BUS, TRUCK, BICYCLE, MOTORCYCLE.
If other labels (case insensitive) are contained in the file specified via the label_file
parameter, those are labeled as UNKNOWN
, while detected rectangles are drawn in the visualization result (out/image
).
A sample model (named yolox-tiny.onnx
) is downloaded by ansible script on env preparation stage, if not, please, follow Manual downloading of artifacts. To accelerate Non-maximum-suppression (NMS), which is one of the common post-process after object detection inference, EfficientNMS_TRT
module is attached after the ordinal YOLOX (tiny) network. The EfficientNMS_TRT
module contains fixed values for score_threshold
and nms_threshold
in it, hence these parameters are ignored when users specify ONNX models including this module.
This package accepts both EfficientNMS_TRT
attached ONNXs and models published from the official YOLOX repository (we referred to them as \"plain\" models).
In addition to yolox-tiny.onnx
, a custom model named yolox-sPlus-opt.onnx
is either available. This model is based on YOLOX-s and tuned to perform more accurate detection with almost comparable execution speed with yolox-tiny
. To get better results with this model, users are recommended to use some specific running arguments such as precision:=int8
, calibration_algorithm:=Entropy
, clip_value:=6.0
. Users can refer launch/yolox_sPlus_opt.launch.xml
to see how this model can be used.
All models are automatically converted to TensorRT format. These converted files will be saved in the same directory as specified ONNX files with .engine
filename extension and reused from the next run. The conversion process may take a while (typically 10 to 20 minutes) and the inference process is blocked until complete the conversion, so it will take some time until detection results are published (even until appearing in the topic list) on the first run
To convert users' own model that saved in PyTorch's pth
format into ONNX, users can exploit the converter offered by the official repository. For the convenience, only procedures are described below. Please refer the official document for more detail.
Install dependency
git clone git@github.com:Megvii-BaseDetection/YOLOX.git\ncd YOLOX\npython3 setup.py develop --user\n
Convert pth into ONNX
python3 tools/export_onnx.py \\\n--output-name YOUR_YOLOX.onnx \\\n-f YOUR_YOLOX.py \\\n-c YOUR_YOLOX.pth\n
Install dependency
git clone git@github.com:Megvii-BaseDetection/YOLOX.git\ncd YOLOX\npython3 setup.py develop --user\npip3 install git+ssh://git@github.com/wep21/yolox_onnx_modifier.git --user\n
Convert pth into ONNX
python3 tools/export_onnx.py \\\n--output-name YOUR_YOLOX.onnx \\\n-f YOUR_YOLOX.py \\\n-c YOUR_YOLOX.pth\n --decode_in_inference\n
Embed EfficientNMS_TRT
to the end of YOLOX
yolox_onnx_modifier YOUR_YOLOX.onnx -o YOUR_YOLOX_WITH_NMS.onnx\n
A sample label file (named label.txt
)is also downloaded automatically during env preparation process (NOTE: This file is incompatible with models that output labels for the COCO dataset (e.g., models from the official YOLOX repository)).
This file represents the correspondence between class index (integer outputted from YOLOX network) and class label (strings making understanding easier). This package maps class IDs (incremented from 0) with labels according to the order in this file.
"},{"location":"perception/tensorrt_yolox/#reference-repositories","title":"Reference repositories","text":"This package try to merge two tracking objects from different sensor.
"},{"location":"perception/tracking_object_merger/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Merging tracking objects from different sensor is a combination of data association and state fusion algorithms.
Detailed process depends on the merger policy.
"},{"location":"perception/tracking_object_merger/#decorative_tracker_merger","title":"decorative_tracker_merger","text":"In decorative_tracker_merger, we assume there are dominant tracking objects and sub tracking objects. The name decorative
means that sub tracking objects are used to complement the main objects.
Usually the dominant tracking objects are from LiDAR and sub tracking objects are from Radar or Camera.
Here show the processing pipeline.
"},{"location":"perception/tracking_object_merger/#time-sync","title":"time sync","text":"Sub object(Radar or Camera) often has higher frequency than dominant object(LiDAR). So we need to sync the time of sub object to dominant object.
"},{"location":"perception/tracking_object_merger/#data-association","title":"data association","text":"In the data association, we use the following rules to determine whether two tracking objects are the same object.
distance gate
: distance between two tracking objectsangle gate
: angle between two tracking objectsmahalanobis_distance_gate
: Mahalanobis distance between two tracking objectsmin_iou_gate
: minimum IoU between two tracking objectsmax_velocity_gate
: maximum velocity difference between two tracking objectsSub tracking objects are merged into dominant tracking objects.
Depends on the tracklet input sensor state, we update the tracklet state with different rules.
state\\priority 1st 2nd 3rd Kinematics except velocity LiDAR Radar Camera Forward velocity Radar LiDAR Camera Object classification Camera LiDAR Radar"},{"location":"perception/tracking_object_merger/#tracklet-management","title":"tracklet management","text":"We use the existence_probability
to manage tracklet.
existence_probability
to \\(p_{sensor}\\) value.existence_probability
to \\(p_{sensor}\\) value.existence_probability
by decay_rate
existence_probability
is larger than publish_probability_threshold
existence_probability
is smaller than remove_probability_threshold
These parameter can be set in config/decorative_tracker_merger.param.yaml
.
tracker_state_parameter:\nremove_probability_threshold: 0.3\npublish_probability_threshold: 0.6\ndefault_lidar_existence_probability: 0.7\ndefault_radar_existence_probability: 0.6\ndefault_camera_existence_probability: 0.6\ndecay_rate: 0.1\nmax_dt: 1.0\n
"},{"location":"perception/tracking_object_merger/#inputparameters","title":"input/parameters","text":"topic name message type description ~input/main_object
autoware_auto_perception_msgs::TrackedObjects
Dominant tracking objects. Output will be published with this dominant object stamps. ~input/sub_object
autoware_auto_perception_msgs::TrackedObjects
Sub tracking objects. output/object
autoware_auto_perception_msgs::TrackedObjects
Merged tracking objects. debug/interpolated_sub_object
autoware_auto_perception_msgs::TrackedObjects
Interpolated sub tracking objects. Default parameters are set in config/decorative_tracker_merger.param.yaml.
parameter name description default valuebase_link_frame_id
base link frame id. This is used to transform the tracking object. \"base_link\" time_sync_threshold
time sync threshold. If the time difference between two tracking objects is smaller than this value, we consider these two tracking objects are the same object. 0.05 sub_object_timeout_sec
sub object timeout. If the sub object is not updated for this time, we consider that this object does not exist. 0.5 main_sensor_type
main sensor type. This is used to determine the dominant tracking object. \"lidar\" sub_sensor_type
sub sensor type. This is used to determine the sub tracking object. \"radar\" tracker_state_parameter
tracker state parameter. This is used to manage the tracklet. tracker_state_parameter
is described in tracklet managementAs explained in tracklet management, this tracker merger tend to maintain the both input tracking objects.
If there are many false positive tracking objects,
default_<sensor>_existence_probability
of that sensordecay_rate
publish_probability_threshold
to publish only reliable tracking objectsThis is future work.
"},{"location":"perception/traffic_light_arbiter/","title":"traffic_light_arbiter","text":""},{"location":"perception/traffic_light_arbiter/#traffic_light_arbiter","title":"traffic_light_arbiter","text":""},{"location":"perception/traffic_light_arbiter/#purpose","title":"Purpose","text":"This package receives traffic signals from perception and external (e.g., V2X) components and combines them using either a confidence-based or a external-preference based approach.
"},{"location":"perception/traffic_light_arbiter/#trafficlightarbiter","title":"TrafficLightArbiter","text":"A node that merges traffic light/signal state from image recognition and external (e.g., V2X) systems to provide to a planning component.
"},{"location":"perception/traffic_light_arbiter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_arbiter/#input","title":"Input","text":"Name Type Description ~/sub/vector_map autoware_auto_mapping_msgs::msg::HADMapBin The vector map to get valid traffic signal ids. ~/sub/perception_traffic_signals autoware_perception_msgs::msg::TrafficSignalArray The traffic signals from the image recognition pipeline. ~/sub/external_traffic_signals autoware_perception_msgs::msg::TrafficSignalArray The traffic signals from an external system."},{"location":"perception/traffic_light_arbiter/#output","title":"Output","text":"Name Type Description ~/pub/traffic_signals autoware_perception_msgs::msg::TrafficSignalArray The merged traffic signal state."},{"location":"perception/traffic_light_arbiter/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_arbiter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Descriptionexternal_time_tolerance
double 5.0 The duration in seconds an external message is considered valid for merging perception_time_tolerance
double 1.0 The duration in seconds a perception message is considered valid for merging external_priority
bool false Whether or not externals signals take precedence over perception-based ones. If false, the merging uses confidence as a criteria"},{"location":"perception/traffic_light_classifier/","title":"traffic_light_classifier","text":""},{"location":"perception/traffic_light_classifier/#traffic_light_classifier","title":"traffic_light_classifier","text":""},{"location":"perception/traffic_light_classifier/#purpose","title":"Purpose","text":"traffic_light_classifier is a package for classifying traffic light labels using cropped image around a traffic light. This package has two classifier models: cnn_classifier
and hsv_classifier
.
Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. Totally 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here:
Name Input Size Test Accuracy EfficientNet-b1 128 x 128 99.76% MobileNet-v2 224 x 224 99.81%"},{"location":"perception/traffic_light_classifier/#hsv_classifier","title":"hsv_classifier","text":"Traffic light colors (green, yellow and red) are classified in HSV model.
"},{"location":"perception/traffic_light_classifier/#about-label","title":"About Label","text":"The message type is designed to comply with the unified road signs proposed at the Vienna Convention. This idea has been also proposed in Autoware.Auto.
There are rules for naming labels that nodes receive. One traffic light is represented by the following character string separated by commas. color1-shape1, color2-shape2
.
For example, the simple red and red cross traffic light label must be expressed as \"red-circle, red-cross\".
These colors and shapes are assigned to the message as follows:
"},{"location":"perception/traffic_light_classifier/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_classifier/#input","title":"Input","text":"Name Type Description~/input/image
sensor_msgs::msg::Image
input image ~/input/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
rois of traffic lights"},{"location":"perception/traffic_light_classifier/#output","title":"Output","text":"Name Type Description ~/output/traffic_signals
tier4_perception_msgs::msg::TrafficSignalArray
classified signals ~/output/debug/image
sensor_msgs::msg::Image
image for debugging"},{"location":"perception/traffic_light_classifier/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_classifier/#node-parameters","title":"Node Parameters","text":"Name Type Description classifier_type
int if the value is 1
, cnn_classifier is used data_path
str packages data and artifacts directory path backlight_threshold
float If the intensity gets greater than this value, the signal in the corresponding RoI is overwritten with UNKNOWN. Note that the higher the value is, the harsher the backlight situations in which the node overwrites. Therefore, if you do not want to use this feature, set this value to 1.0
. The value can be [0.0, 1.0]
. The confidence of the overwritten signal is set to 0.0
."},{"location":"perception/traffic_light_classifier/#core-parameters","title":"Core Parameters","text":""},{"location":"perception/traffic_light_classifier/#cnn_classifier_1","title":"cnn_classifier","text":"Name Type Description classifier_label_path
str path to the model file classifier_model_path
str path to the label file classifier_precision
str TensorRT precision, fp16
or int8
classifier_mean
vector\\ 3-channel input image mean classifier_std
vector\\ 3-channel input image std apply_softmax
bool whether or not apply softmax"},{"location":"perception/traffic_light_classifier/#hsv_classifier_1","title":"hsv_classifier","text":"Name Type Description green_min_h
int the minimum hue of green color green_min_s
int the minimum saturation of green color green_min_v
int the minimum value (brightness) of green color green_max_h
int the maximum hue of green color green_max_s
int the maximum saturation of green color green_max_v
int the maximum value (brightness) of green color yellow_min_h
int the minimum hue of yellow color yellow_min_s
int the minimum saturation of yellow color yellow_min_v
int the minimum value (brightness) of yellow color yellow_max_h
int the maximum hue of yellow color yellow_max_s
int the maximum saturation of yellow color yellow_max_v
int the maximum value (brightness) of yellow color red_min_h
int the minimum hue of red color red_min_s
int the minimum saturation of red color red_min_v
int the minimum value (brightness) of red color red_max_h
int the maximum hue of red color red_max_s
int the maximum saturation of red color red_max_v
int the maximum value (brightness) of red color"},{"location":"perception/traffic_light_classifier/#training-traffic-light-classifier-model","title":"Training Traffic Light Classifier Model","text":""},{"location":"perception/traffic_light_classifier/#overview","title":"Overview","text":"This guide provides detailed instructions on training a traffic light classifier model using the mmlab/mmpretrain repository and deploying it using mmlab/mmdeploy. If you wish to create a custom traffic light classifier model with your own dataset, please follow the steps outlined below.
"},{"location":"perception/traffic_light_classifier/#data-preparation","title":"Data Preparation","text":""},{"location":"perception/traffic_light_classifier/#use-sample-dataset","title":"Use Sample Dataset","text":"Autoware offers a sample dataset that illustrates the training procedures for traffic light classification. This dataset comprises 1045 images categorized into red, green, and yellow labels. To utilize this sample dataset, please download it from link and extract it to a designated folder of your choice.
"},{"location":"perception/traffic_light_classifier/#use-your-custom-dataset","title":"Use Your Custom Dataset","text":"To train a traffic light classifier, adopt a structured subfolder format where each subfolder represents a distinct class. Below is an illustrative dataset structure example;
DATASET_ROOT\n \u251c\u2500\u2500 TRAIN\n \u2502 \u251c\u2500\u2500 RED\n \u2502 \u2502 \u251c\u2500\u2500 001.png\n \u2502 \u2502 \u251c\u2500\u2500 002.png\n \u2502 \u2502 \u2514\u2500\u2500 ...\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500 GREEN\n \u2502 \u2502 \u251c\u2500\u2500 001.png\n \u2502 \u2502 \u251c\u2500\u2500 002.png\n \u2502 \u2502 \u2514\u2500\u2500...\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500 YELLOW\n \u2502 \u2502 \u251c\u2500\u2500 001.png\n \u2502 \u2502 \u251c\u2500\u2500 002.png\n \u2502 \u2502 \u2514\u2500\u2500...\n \u2502 \u2514\u2500\u2500 ...\n \u2502\n \u251c\u2500\u2500 VAL\n \u2502 \u2514\u2500\u2500...\n \u2502\n \u2502\n \u2514\u2500\u2500 TEST\n \u2514\u2500\u2500 ...\n
"},{"location":"perception/traffic_light_classifier/#installation","title":"Installation","text":""},{"location":"perception/traffic_light_classifier/#prerequisites","title":"Prerequisites","text":"Step 1. Download and install Miniconda from the official website.
Step 2. Create a conda virtual environment and activate it
conda create --name tl-classifier python=3.8 -y\nconda activate tl-classifier\n
Step 3. Install PyTorch
Please ensure you have PyTorch installed, compatible with CUDA 11.6, as it is a requirement for current Autoware
conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia\n
"},{"location":"perception/traffic_light_classifier/#install-mmlabmmpretrain","title":"Install mmlab/mmpretrain","text":"Step 1. Install mmpretrain from source
cd ~/\ngit clone https://github.com/open-mmlab/mmpretrain.git\ncd mmpretrain\npip install -U openmim && mim install -e .\n
"},{"location":"perception/traffic_light_classifier/#training","title":"Training","text":"MMPretrain offers a training script that is controlled through a configuration file. Leveraging an inheritance design pattern, you can effortlessly tailor the training script using Python files as configuration files.
In the example, we demonstrate the training steps on the MobileNetV2 model, but you have the flexibility to employ alternative classification models such as EfficientNetV2, EfficientNetV3, ResNet, and more.
"},{"location":"perception/traffic_light_classifier/#create-a-config-file","title":"Create a config file","text":"Generate a configuration file for your preferred model within the configs
folder
touch ~/mmpretrain/configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py\n
Open the configuration file in your preferred text editor and make a copy of the provided content. Adjust the data_root variable to match the path of your dataset. You are welcome to customize the configuration parameters for the model, dataset, and scheduler to suit your preferences
# Inherit model, schedule and default_runtime from base model\n_base_ = [\n '../_base_/models/mobilenet_v2_1x.py',\n '../_base_/schedules/imagenet_bs256_epochstep.py',\n '../_base_/default_runtime.py'\n]\n\n# Set the number of classes to the model\n# You can also change other model parameters here\n# For detailed descriptions of model parameters, please refer to link below\n# (Customize model)[https://mmpretrain.readthedocs.io/en/latest/advanced_guides/modules.html]\nmodel = dict(head=dict(num_classes=3, topk=(1, 3)))\n\n# Set max epochs and validation interval\ntrain_cfg = dict(by_epoch=True, max_epochs=50, val_interval=5)\n\n# Set optimizer and lr scheduler\noptim_wrapper = dict(\n optimizer=dict(type='SGD', lr=0.001, momentum=0.9))\nparam_scheduler = dict(type='StepLR', by_epoch=True, step_size=1, gamma=0.98)\n\ndataset_type = 'CustomDataset'\ndata_root = \"/PATH/OF/YOUR/DATASET\"\n\n# Customize data preprocessing and dataloader pipeline for training set\n# These parameters calculated for the sample dataset\ndata_preprocessor = dict(\n mean=[0.2888 * 256, 0.2570 * 256, 0.2329 * 256],\n std=[0.2106 * 256, 0.2037 * 256, 0.1864 * 256],\n num_classes=3,\n to_rgb=True,\n)\n\n# Customize data preprocessing and dataloader pipeline for train set\n# For detailed descriptions of data pipeline, please refer to link below\n# (Customize data pipeline)[https://mmpretrain.readthedocs.io/en/latest/advanced_guides/pipeline.html]\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='Resize', scale=224),\n dict(type='RandomFlip', prob=0.5, direction='horizontal'),\n dict(type='PackInputs'),\n]\ntrain_dataloader = dict(\n dataset=dict(\n type=dataset_type,\n data_root=data_root,\n ann_file='',\n data_prefix='train',\n with_label=True,\n pipeline=train_pipeline,\n ),\n num_workers=8,\n batch_size=32,\n sampler=dict(type='DefaultSampler', shuffle=True)\n)\n\n# Customize data preprocessing and dataloader pipeline for test set\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='Resize', scale=224),\n dict(type='PackInputs'),\n]\n\n# Customize data preprocessing and dataloader pipeline for validation set\nval_cfg = dict()\nval_dataloader = dict(\n dataset=dict(\n type=dataset_type,\n data_root=data_root,\n ann_file='',\n data_prefix='val',\n with_label=True,\n pipeline=test_pipeline,\n ),\n num_workers=8,\n batch_size=32,\n sampler=dict(type='DefaultSampler', shuffle=True)\n)\n\nval_evaluator = dict(topk=(1, 3,), type='Accuracy')\n\ntest_dataloader = val_dataloader\ntest_evaluator = val_evaluator\n
"},{"location":"perception/traffic_light_classifier/#start-training","title":"Start training","text":"cd ~/mmpretrain\npython tools/train.py configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py\n
Training logs and weights will be saved in the work_dirs/mobilenet-v2_8xb32_custom
folder.
The 'mmdeploy' toolset is designed for deploying your trained model onto various target devices. With its capabilities, you can seamlessly convert PyTorch models into the ONNX format.
# Activate your conda environment\nconda activate tl-classifier\n\n# Install mmenigne and mmcv\nmim install mmengine\nmim install \"mmcv>=2.0.0rc2\"\n\n# Install mmdeploy\npip install mmdeploy==1.2.0\n\n# Support onnxruntime\npip install mmdeploy-runtime==1.2.0\npip install mmdeploy-runtime-gpu==1.2.0\npip install onnxruntime-gpu==1.8.1\n\n#Clone mmdeploy repository\ncd ~/\ngit clone -b main https://github.com/open-mmlab/mmdeploy.git\n
"},{"location":"perception/traffic_light_classifier/#convert-pytorch-model-to-onnx-model_1","title":"Convert PyTorch model to ONNX model","text":"cd ~/mmdeploy\n\n# Run deploy.py script\n# deploy.py script takes 5 main arguments with these order; config file path, train config file path,\n# checkpoint file path, demo image path, and work directory path\npython tools/deploy.py \\\n~/mmdeploy/configs/mmpretrain/classification_onnxruntime_static.py\\\n~/mmpretrain/configs/mobilenet_v2/train_mobilenet_v2.py \\\n~/mmpretrain/work_dirs/train_mobilenet_v2/epoch_300.pth \\\n/SAMPLE/IAMGE/DIRECTORY \\\n--work-dir mmdeploy_model/mobilenet_v2\n
Converted ONNX model will be saved in the mmdeploy/mmdeploy_model/mobilenet_v2
folder.
After obtaining your onnx model, update parameters defined in the launch file (e.g. model_file_path
, label_file_path
, input_h
, input_w
...). Note that, we only support labels defined in tier4_perception_msgs::msg::TrafficLightElement.
[1] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, \"MobileNetV2: Inverted Residuals and Linear Bottlenecks,\" 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.
[2] Tan, Mingxing, and Quoc Le. \"EfficientNet: Rethinking model scaling for convolutional neural networks.\" International conference on machine learning. PMLR, 2019.
"},{"location":"perception/traffic_light_classifier/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/traffic_light_fine_detector/","title":"traffic_light_fine_detector","text":""},{"location":"perception/traffic_light_fine_detector/#traffic_light_fine_detector","title":"traffic_light_fine_detector","text":""},{"location":"perception/traffic_light_fine_detector/#purpose","title":"Purpose","text":"It is a package for traffic light detection using YoloX-s.
"},{"location":"perception/traffic_light_fine_detector/#training-information","title":"Training Information","text":""},{"location":"perception/traffic_light_fine_detector/#pretrained-model","title":"Pretrained Model","text":"The model is based on YOLOX and the pretrained model could be downloaded from here.
"},{"location":"perception/traffic_light_fine_detector/#training-data","title":"Training Data","text":"The model was fine-tuned on around 17,000 TIER IV internal images of Japanese traffic lights.
"},{"location":"perception/traffic_light_fine_detector/#trained-onnx-model","title":"Trained Onnx model","text":"You can download the ONNX file using these instructions. Please visit autoware-documentation for more information.
"},{"location":"perception/traffic_light_fine_detector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Based on the camera image and the global ROI array detected by map_based_detection
node, a CNN-based detection method enables highly accurate traffic light detection.
~/input/image
sensor_msgs/Image
The full size camera image ~/input/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
The array of ROIs detected by map_based_detector ~/expect/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
The array of ROIs detected by map_based_detector without any offset"},{"location":"perception/traffic_light_fine_detector/#output","title":"Output","text":"Name Type Description ~/output/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
The detected accurate rois ~/debug/exe_time_ms
tier4_debug_msgs::msg::Float32Stamped
The time taken for inference"},{"location":"perception/traffic_light_fine_detector/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_fine_detector/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description fine_detector_score_thresh
double 0.3 If the objectness score is less than this value, the object is ignored fine_detector_nms_thresh
double 0.65 IoU threshold to perform Non-Maximum Suppression"},{"location":"perception/traffic_light_fine_detector/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description data_path
string \"$(env HOME)/autoware_data\" packages data and artifacts directory path fine_detector_model_path
string \"\" The onnx file name for yolo model fine_detector_label_path
string \"\" The label file with label names for detected objects written on it fine_detector_precision
string \"fp32\" The inference mode: \"fp32\", \"fp16\" approximate_sync
bool false Flag for whether to use the approximate sync policy"},{"location":"perception/traffic_light_fine_detector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/traffic_light_fine_detector/#reference-repositories","title":"Reference repositories","text":"
traffic_light_map_based_detector
Package","text":""},{"location":"perception/traffic_light_map_based_detector/#overview","title":"Overview","text":"traffic_light_map_based_detector
calculates where the traffic lights will appear in the image based on the HD map.
Calibration and vibration errors can be entered as parameters, and the size of the detected RegionOfInterest will change according to the error.
If the node receives route information, it only looks at traffic lights on that route. If the node receives no route information, it looks at a radius of 200 meters and the angle between the traffic light and the camera is less than 40 degrees.
"},{"location":"perception/traffic_light_map_based_detector/#input-topics","title":"Input topics","text":"Name Type Description~input/vector_map
autoware_auto_mapping_msgs::HADMapBin vector map ~input/camera_info
sensor_msgs::CameraInfo target camera parameter ~input/route
autoware_planning_msgs::LaneletRoute optional: route"},{"location":"perception/traffic_light_map_based_detector/#output-topics","title":"Output topics","text":"Name Type Description ~output/rois
tier4_perception_msgs::TrafficLightRoiArray location of traffic lights in image corresponding to the camera info ~expect/rois
tier4_perception_msgs::TrafficLightRoiArray location of traffic lights in image without any offset ~debug/markers
visualization_msgs::MarkerArray visualization to debug"},{"location":"perception/traffic_light_map_based_detector/#node-parameters","title":"Node parameters","text":"Parameter Type Description max_vibration_pitch
double Maximum error in pitch direction. For example, if the pitch error ranges from -5 to +5, specify 10. max_vibration_yaw
double Maximum error in yaw direction. For example, if the yaw error ranges from -5 to +5, specify 10. max_vibration_height
double Maximum error in height direction. For example, if the height error ranges from -5 to +5, specify 10. max_vibration_width
double Maximum error in width direction. For example, if the width error ranges from -5 to +5, specify 10. max_vibration_depth
double Maximum error in depth direction. For example, if the depth error ranges from -5 to +5, specify 10. max_detection_range
double Maximum detection range in meters. Must be positive min_timestamp_offset
double Minimum timestamp offset when searching for corresponding tf max_timestamp_offset
double Maximum timestamp offset when searching for corresponding tf timestamp_sample_len
double sampling length between min_timestamp_offset and max_timestamp_offset"},{"location":"perception/traffic_light_multi_camera_fusion/","title":"The `traffic_light_multi_camera_fusion` Package","text":""},{"location":"perception/traffic_light_multi_camera_fusion/#the-traffic_light_multi_camera_fusion-package","title":"The traffic_light_multi_camera_fusion
Package","text":""},{"location":"perception/traffic_light_multi_camera_fusion/#overview","title":"Overview","text":"traffic_light_multi_camera_fusion
performs traffic light signal fusion, which can be summarized as the following two tasks: (1) multi-camera fusion, performed on traffic light signals detected by multiple cameras, and (2) group fusion, performed on traffic light signals within the same group, i.e. traffic lights sharing the same regulatory element ID defined in the lanelet2 map.
For every camera, the following three topics are subscribed:
Name Type Description~/<camera_namespace>/camera_info
sensor_msgs::CameraInfo camera info from traffic_light_map_based_detector ~/<camera_namespace>/rois
tier4_perception_msgs::TrafficLightRoiArray detection roi from traffic_light_fine_detector ~/<camera_namespace>/traffic_signals
tier4_perception_msgs::TrafficLightSignalArray classification result from traffic_light_classifier You don't need to configure these topics manually. Just provide the camera_namespaces
parameter and the node will automatically extract the <camera_namespace>
and create the subscribers.
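For illustration, the per-camera subscription topics could be derived as in the following sketch; this is a hypothetical helper, not the node's implementation.
def fusion_topics(camera_namespaces):\n    # illustrative: the three per-camera topics listed in the table above\n    topics = []\n    for ns in camera_namespaces:\n        topics += [f'~/{ns}/camera_info', f'~/{ns}/rois', f'~/{ns}/traffic_signals']\n    return topics\n\nprint(fusion_topics(['camera6', 'camera7']))\n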
~/output/traffic_signals
autoware_perception_msgs::TrafficLightSignalArray traffic light signal fusion result"},{"location":"perception/traffic_light_multi_camera_fusion/#node-parameters","title":"Node parameters","text":"Parameter Type Description camera_namespaces
vector<string> Camera namespaces to be fused message_lifespan
double The maximum timestamp span to be fused approximate_sync
bool Whether to work in approximate synchronization mode perform_group_fusion
bool Whether to perform group fusion"},{"location":"perception/traffic_light_occlusion_predictor/","title":"The `traffic_light_occlusion_predictor` Package","text":""},{"location":"perception/traffic_light_occlusion_predictor/#the-traffic_light_occlusion_predictor-package","title":"The traffic_light_occlusion_predictor
Package","text":""},{"location":"perception/traffic_light_occlusion_predictor/#overview","title":"Overview","text":"traffic_light_occlusion_predictor
receives the detected traffic light ROIs and calculates the occlusion ratio of each ROI using the LiDAR point cloud.
For each traffic light ROI, hundreds of pixels are selected and projected into 3D space. Then, from the camera's point of view, the number of projected pixels occluded by the point cloud is counted and used to calculate the occlusion ratio of the ROI. As shown in the following image, the red pixels are occluded, and the occlusion ratio is the number of red pixels divided by the total number of pixels.
If no point cloud is received, or if all point clouds have a very large timestamp difference from the camera image, the occlusion ratio of each ROI is set to 0.
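A minimal sketch of this computation, assuming a per-pixel occlusion predicate; the helper names are hypothetical.
def occlusion_ratio(sampled_pixels, is_occluded):\n    # is_occluded(p): whether the 3D projection of pixel p is blocked by the point cloud\n    if not sampled_pixels:\n        return 0.0\n    blocked = sum(1 for p in sampled_pixels if is_occluded(p))\n    return blocked / len(sampled_pixels)\n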
"},{"location":"perception/traffic_light_occlusion_predictor/#input-topics","title":"Input topics","text":"Name Type Description~input/vector_map
autoware_auto_mapping_msgs::HADMapBin vector map ~/input/rois
autoware_auto_perception_msgs::TrafficLightRoiArray traffic light detections ~input/camera_info
sensor_msgs::CameraInfo target camera parameter ~/input/cloud
sensor_msgs::PointCloud2 LiDAR point cloud"},{"location":"perception/traffic_light_occlusion_predictor/#output-topics","title":"Output topics","text":"Name Type Description ~/output/occlusion
autoware_auto_perception_msgs::TrafficLightOcclusionArray occlusion ratios of each roi"},{"location":"perception/traffic_light_occlusion_predictor/#node-parameters","title":"Node parameters","text":"Parameter Type Description azimuth_occlusion_resolution_deg
double azimuth resolution of LiDAR point cloud (degree) elevation_occlusion_resolution_deg
double elevation resolution of LiDAR point cloud (degree) max_valid_pt_dist
double Only points within this distance are used for the calculation max_image_cloud_delay
double The maximum delay between LiDAR point cloud and camera image max_wait_t
double The maximum time waiting for the LiDAR point cloud"},{"location":"perception/traffic_light_ssd_fine_detector/","title":"traffic_light_ssd_fine_detector","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#traffic_light_ssd_fine_detector","title":"traffic_light_ssd_fine_detector","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#purpose","title":"Purpose","text":"It is a package for traffic light detection using MobileNetV2 and SSDLite.
"},{"location":"perception/traffic_light_ssd_fine_detector/#training-information","title":"Training Information","text":"NOTE:
Specify either pytorch or mmdetection in dnn_header_type, according to the framework used to train the model. The ONNX model is expected to have an input tensor named input, and output tensors named boxes and scores
.The model is based on pytorch-ssd and the pretrained model could be downloaded from here.
"},{"location":"perception/traffic_light_ssd_fine_detector/#training-data","title":"Training Data","text":"The model was fine-tuned on 1750 TIER IV internal images of Japanese traffic lights.
"},{"location":"perception/traffic_light_ssd_fine_detector/#trained-onnx-model","title":"Trained Onnx model","text":"In order to train models and export onnx model, we recommend open-mmlab/mmdetection. Please follow the official document to install and experiment with mmdetection. If you get into troubles, FAQ page would help you.
The following steps are an example of a quick start.
"},{"location":"perception/traffic_light_ssd_fine_detector/#step-0-install-mmcv-and-mim","title":"step 0. Install MMCV and MIM","text":"NOTE :
In order to install a version of mmcv that matches your CUDA version, install it by specifying the corresponding URL.
# Install mim\n$ pip install -U openmim\n\n# Install mmcv on a machine with CUDA11.6 and PyTorch1.13.0\n$ pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu116/torch1.13/index.html\n
"},{"location":"perception/traffic_light_ssd_fine_detector/#step-1-install-mmdetection","title":"step 1. Install MMDetection","text":"You can install mmdetection as a Python package or from source.
# As a Python package\n$ pip install mmdet\n\n# From source\n$ git clone https://github.com/open-mmlab/mmdetection.git\n$ cd mmdetection\n$ pip install -v -e .\n
"},{"location":"perception/traffic_light_ssd_fine_detector/#step-2-train-your-model","title":"step 2. Train your model","text":"Train model with your experiment configuration file. For the details of config file, see here.
# [] is optional, you can start training from pre-trained checkpoint\n$ mim train mmdet YOUR_CONFIG.py [--resume-from YOUR_CHECKPOINT.pth]\n
"},{"location":"perception/traffic_light_ssd_fine_detector/#step-3-export-onnx-model","title":"step 3. Export onnx model","text":"In exporting onnx, use mmdetection/tools/deployment/pytorch2onnx.py
or open-mmlab/mmdeploy. NOTE:
cd ~/mmdetection/tools/deployment\npython3 pytorch2onnx.py YOUR_CONFIG.py ...\n
"},{"location":"perception/traffic_light_ssd_fine_detector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Based on the camera image and the global ROI array detected by map_based_detection
node, a CNN-based detection method enables highly accurate traffic light detection.
~/input/image
sensor_msgs/Image
The full size camera image ~/input/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
The array of ROIs detected by map_based_detector"},{"location":"perception/traffic_light_ssd_fine_detector/#output","title":"Output","text":"Name Type Description ~/output/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
The detected accurate rois ~/debug/exe_time_ms
tier4_debug_msgs::msg::Float32Stamped
The time taken for inference"},{"location":"perception/traffic_light_ssd_fine_detector/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_thresh
double 0.7 If the objectness score is less than this value, the object is ignored mean
std::vector [0.5,0.5,0.5] Average value of the normalized values of the image data used for training std
std::vector [0.5,0.5,0.5] Standard deviation of the normalized values of the image data used for training"},{"location":"perception/traffic_light_ssd_fine_detector/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description data_path
string \"$(env HOME)/autoware_data\" packages data and artifacts directory path onnx_file
string \"$(var data_path)/traffic_light_ssd_fine_detector/mb2-ssd-lite-tlr.onnx\" The onnx file name for yolo model label_file
string \"$(var data_path)/traffic_light_ssd_fine_detector/voc_labels_tl.txt\" The label file with label names for detected objects written on it dnn_header_type
string \"pytorch\" Name of DNN trained toolbox: \"pytorch\" or \"mmdetection\" mode
string \"FP32\" The inference mode: \"FP32\", \"FP16\", \"INT8\" max_batch_size
int 8 The size of the batch processed at one time by inference by TensorRT approximate_sync
bool false Flag for whether to use approximate sync policy build_only
bool false shutdown node after TensorRT engine file is built"},{"location":"perception/traffic_light_ssd_fine_detector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#reference-repositories","title":"Reference repositories","text":"pytorch-ssd github repository
MobileNetV2
The traffic_light_visualization
is a package that includes two visualizing nodes:
~/input/tl_state
tier4_perception_msgs::msg::TrafficSignalArray
status of traffic lights ~/input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map"},{"location":"perception/traffic_light_visualization/#output","title":"Output","text":"Name Type Description ~/output/traffic_light
visualization_msgs::msg::MarkerArray
marker array that indicates status of traffic lights"},{"location":"perception/traffic_light_visualization/#traffic_light_roi_visualizer","title":"traffic_light_roi_visualizer","text":""},{"location":"perception/traffic_light_visualization/#input_1","title":"Input","text":"Name Type Description ~/input/tl_state
tier4_perception_msgs::msg::TrafficSignalArray
status of traffic lights ~/input/image
sensor_msgs::msg::Image
the image captured by perception cameras ~/input/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
the ROIs detected by traffic_light_ssd_fine_detector
~/input/rough/rois
(option) tier4_perception_msgs::msg::TrafficLightRoiArray
the ROIs detected by traffic_light_map_based_detector
"},{"location":"perception/traffic_light_visualization/#output_1","title":"Output","text":"Name Type Description ~/output/image
sensor_msgs::msg::Image
output image with ROIs"},{"location":"perception/traffic_light_visualization/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_visualization/#traffic_light_map_visualizer_1","title":"traffic_light_map_visualizer","text":"None
"},{"location":"perception/traffic_light_visualization/#traffic_light_roi_visualizer_1","title":"traffic_light_roi_visualizer","text":""},{"location":"perception/traffic_light_visualization/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Descriptionenable_fine_detection
bool false whether to visualize result of the traffic light fine detection"},{"location":"perception/traffic_light_visualization/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/traffic_light_visualization/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/traffic_light_visualization/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/traffic_light_visualization/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/traffic_light_visualization/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/","title":"Planning Components","text":""},{"location":"planning/#planning-components","title":"Planning Components","text":""},{"location":"planning/#getting-started","title":"Getting Started","text":"The Autoware.Universe Planning Modules represent a cutting-edge component within the broader open-source autonomous driving software stack. These modules play a pivotal role in autonomous vehicle navigation, skillfully handling route planning, dynamic obstacle avoidance, and real-time adaptation to varied traffic conditions.
The Module in the Planning Component refers to the various components that collectively form the planning system of the software. These modules cover a range of functionalities necessary for autonomous vehicle planning. Autoware's planning modules are modularized, meaning users can customize which functions are enabled by changing the configuration. This modular design allows for flexibility and adaptability to different scenarios and requirements in autonomous vehicle operations.
"},{"location":"planning/#how-to-enable-or-disable-planning-module","title":"How to Enable or Disable Planning Module","text":"Enabling and disabling modules involves managing settings in key configuration and launch files.
"},{"location":"planning/#key-files-for-configuration","title":"Key Files for Configuration","text":"The default_preset.yaml
file acts as the primary configuration file, where planning modules can be disable or enabled. Furthermore, users can also set the type of motion planner across various motion planners. For example:
launch_avoidance_module
: Set to true
to enable the avoidance module, or false
to disable it.motion_stop_planner_type
: Set default
to either obstacle_stop_planner
or obstacle_cruise_planner
.Note
Click here to view the default_preset.yaml
.
The launch files reference the settings defined in default_preset.yaml
to apply the configurations when the behavior path planner's node is running. For instance, the parameter avoidance.enable_module
in
<param name=\"avoidance.enable_module\" value=\"$(var launch_avoidance_module)\"/>\n
corresponds to launch_avoidance_module from default_preset.yaml
.
There are multiple parameters available for configuration, and users have the option to modify them in here. It's important to note that not all parameters are adjustable via rqt_reconfigure
. To ensure the changes are effective, modify the parameters and then restart Autoware. Additionally, detailed information about each parameter is available in the corresponding documents under the planning tab.
This guide outlines the steps for integrating your custom module into Autoware:
default_preset.yaml
file. For example- arg:\nname: launch_intersection_module\ndefault: \"true\"\n
<arg name=\"launch_intersection_module\" default=\"true\"/>\n\n<let\nname=\"behavior_velocity_planner_launch_modules\"\nvalue=\"$(eval "'$(var behavior_velocity_planner_launch_modules)' + 'behavior_velocity_planner::IntersectionModulePlugin, '")\"\nif=\"$(var launch_intersection_module)\"\n/>\n
behavior_velocity_planner_intersection_module_param_path
is used.<arg name=\"behavior_velocity_planner_intersection_module_param_path\" value=\"$(var behavior_velocity_config_path)/intersection.param.yaml\"/>\n
<param from=\"$(var behavior_velocity_planner_intersection_module_param_path)\"/>\n
Note
Depending on the specific module you wish to add, the relevant files and steps may vary. This guide provides a general overview and serves as a starting point. It's important to adapt these instructions to the specifics of your module.
"},{"location":"planning/#join-our-community-driven-effort","title":"Join Our Community-Driven Effort","text":"Autoware thrives on community collaboration. Every contribution, big or small, is invaluable to us. Whether it's reporting bugs, suggesting improvements, offering new ideas, or anything else you can think of \u2013 we welcome it all with open arms.
"},{"location":"planning/#how-to-contribute","title":"How to Contribute?","text":"Ready to contribute? Great! To get started, simply visit our Contributing Guidelines where you'll find all the information you need to jump in. This includes instructions on submitting bug reports, proposing feature enhancements, and even contributing to the codebase.
"},{"location":"planning/#join-our-planning-control-working-group-meetings","title":"Join Our Planning & Control Working Group Meetings","text":"The Planning & Control working group is an integral part of our community. We meet bi-weekly to discuss our current progress, upcoming challenges, and brainstorm new ideas. These meetings are a fantastic opportunity to directly contribute to our discussions and decision-making processes.
Meeting Details:
Interested in joining our meetings? We\u2019d love to have you! For more information on how to participate, visit the following link: How to participate in the working group.
"},{"location":"planning/#citations","title":"Citations","text":"Occasionally, we publish papers specific to the Planning Component in Autoware. We encourage you to explore these publications and find valuable insights for your work. If you find them useful and incorporate any of our methodologies or algorithms in your projects, citing our papers would be immensely helpful. This support allows us to reach a broader audience and continue contributing to the field.
If you use the Jerk Constrained Velocity Planning algorithm in Motion Velocity Smoother module in the Planning Component, we kindly request you to cite the relevant paper.
Y. Shimizu, T. Horibe, F. Watanabe and S. Kato, \"Jerk Constrained Velocity Planning for an Autonomous Vehicle: Linear Programming Approach,\" 2022 International Conference on Robotics and Automation (ICRA)
@inproceedings{shimizu2022,\n author={Shimizu, Yutaka and Horibe, Takamasa and Watanabe, Fumiya and Kato, Shinpei},\n booktitle={2022 International Conference on Robotics and Automation (ICRA)},\n title={Jerk Constrained Velocity Planning for an Autonomous Vehicle: Linear Programming Approach},\n year={2022},\n pages={5814-5820},\n doi={10.1109/ICRA46639.2022.9812155}}\n
"},{"location":"planning/behavior_path_avoidance_by_lane_change_module/","title":"Avoidance by lane change design","text":""},{"location":"planning/behavior_path_avoidance_by_lane_change_module/#avoidance-by-lane-change-design","title":"Avoidance by lane change design","text":"This is a sub-module to avoid obstacles by lane change maneuver.
"},{"location":"planning/behavior_path_avoidance_by_lane_change_module/#purpose-role","title":"Purpose / Role","text":"This module is designed as one of the obstacle avoidance features and generates a lane change path if the following conditions are satisfied.
Basically, this module is implemented by reusing the avoidance target filtering logic of the existing Normal Avoidance Module and the path generation logic of the Normal Lane Change Module. On the other hand, the conditions under which the module is activated differ from those of a normal avoidance module.
Check that the following conditions are satisfied after the filtering process for the avoidance target.
"},{"location":"planning/behavior_path_avoidance_by_lane_change_module/#number-of-the-avoidance-target-objects","title":"Number of the avoidance target objects","text":"This module is launched when the number of avoidance target objects on EGO DRIVING LANE is greater than execute_object_num
. If there are no avoidance targets in the ego driving lane or their number is less than the parameter, the obstacle is avoided by normal avoidance behavior (if the normal avoidance module is registered).
Unlike the normal avoidance module, which specifies the shift line end point, this module does not specify its end point when generating a lane change path. On the other hand, setting execute_only_when_lane_change_finish_before_object
to true
will activate this module only if the lane change can be completed before the avoidance target object.
Although setting the parameter to false
would increase the scene of avoidance by lane change, it is assumed that sufficient lateral margin may not be ensured in some cases because the vehicle passes by the side of obstacles during the lane change.
true
, this module will be launched only when the lane change end point is NOT behind the avoidance target object. true"},{"location":"planning/behavior_path_avoidance_module/","title":"Avoidance design","text":""},{"location":"planning/behavior_path_avoidance_module/#avoidance-design","title":"Avoidance design","text":"This is a rule-based path planning module designed for obstacle avoidance.
"},{"location":"planning/behavior_path_avoidance_module/#purpose-role","title":"Purpose / Role","text":"This module is designed for rule-based avoidance that is easy for developers to design its behavior. It generates avoidance path parameterized by intuitive parameters such as lateral jerk and avoidance distance margin. This makes it possible to pre-define avoidance behavior.
In addition, the approval interface of behavior_path_planner allows external users / modules (e.g. remote operation) to intervene the decision of the vehicle behavior.\u3000 This function is expected to be used, for example, for remote intervention in emergency situations or gathering information on operator decisions during development.
"},{"location":"planning/behavior_path_avoidance_module/#limitations","title":"Limitations","text":"This module allows developers to design vehicle behavior in avoidance planning using specific rules. Due to the property of rule-based planning, the algorithm can not compensate for not colliding with obstacles in complex cases. This is a trade-off between \"be intuitive and easy to design\" and \"be hard to tune but can handle many cases\". This module adopts the former policy and therefore this output should be checked more strictly in the later stage. In the .iv reference implementation, there is another avoidance module in motion planning module that uses optimization to handle the avoidance in complex cases. (Note that, the motion planner needs to be adjusted so that the behavior result will not be changed much in the simple case and this is a typical challenge for the behavior-motion hierarchical architecture.)
"},{"location":"planning/behavior_path_avoidance_module/#why-is-avoidance-in-behavior-module","title":"Why is avoidance in behavior module?","text":"This module executes avoidance over lanes, and the decision requires the lane structure information to take care of traffic rules (e.g. it needs to send an indicator signal when the vehicle crosses a lane). The difference between motion and behavior module in the planning stack is whether the planner takes traffic rules into account, which is why this avoidance module exists in the behavior module.
"},{"location":"planning/behavior_path_avoidance_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The following figure shows a simple explanation of the logic for avoidance path generation. First, target objects are picked up, and shift requests are generated for each object. These shift requests are generated by taking into account the lateral jerk required for avoidance (red lines). Then these requests are merged and the shift points are created on the reference path (blue line). Filtering operations are performed on the shift points such as removing unnecessary shift points (yellow line), and finally a smooth avoidance path is generated by combining Clothoid-like curve primitives (green line).
"},{"location":"planning/behavior_path_avoidance_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_path_avoidance_module/#overview-of-algorithm-for-target-object-filtering","title":"Overview of algorithm for target object filtering","text":""},{"location":"planning/behavior_path_avoidance_module/#how-to-decide-the-target-obstacles","title":"How to decide the target obstacles","text":"The avoidance target should be limited to stationary objects (you should not avoid a vehicle waiting at a traffic light even if it blocks your path). Therefore, target vehicles for avoidance should meet the following specific conditions.
threshold_speed_object_is_stopped
: parameter used to judge whether the object has stopped or not.threshold_time_object_is_moving
: parameter used for chattering prevention.2.0 m
) or too far (default: < 150.0 m
) and object is not behind the path goal.Not only the length from the centerline, but also the length from the road shoulder is calculated and used for the filtering process. It calculates the ratio of the actual length between the the object's center and the center line shift_length
and the maximum length the object can shift shiftable_length
.
The closer the object is to the shoulder, the larger the value of \\(ratio\\) (theoretical max value is 1.0), and it compares the value and object_check_shiftable_ratio
to determine whether the object is a parked-car. If the road has no road shoulders, it uses object_check_min_road_shoulder_width
as a road shoulder width virtually.
In order to prevent chattering of recognition results, once an obstacle is targeted, it is hold for a while even if it disappears. This is effective when recognition is unstable. However, since it will result in over-detection (increase a number of false-positive), it is necessary to adjust parameters according to the recognition accuracy (if object_last_seen_threshold = 0.0
, the recognition result is 100% trusted).
Since object recognition results contain noise related to position ,orientation and boundary size, if the raw object recognition results are used in path generation, the avoidance path will be directly affected by the noise.
Therefore, in order to reduce the influence of the noise, avoidance module generate a envelope polygon for the avoidance target that covers it, and the avoidance path should be generated based on that polygon. The envelope polygons are generated so that they are parallel to the reference path and the polygon size is larger than the avoidance target (define by object_envelope_buffer
). The position and size of the polygon is not updated as long as the avoidance target exists within that polygon.
# default value\nobject_envelope_buffer: 0.3 # [m]\n
"},{"location":"planning/behavior_path_avoidance_module/#computing-shift-length-and-shift-points","title":"Computing Shift Length and Shift Points","text":"The lateral shift length is affected by 4 variables, namely lateral_collision_safety_buffer
, lateral_collision_margin
, vehicle_width
and overhang_distance
. The equation is as follows
avoid_margin = lateral_collision_margin + lateral_collision_safety_buffer + 0.5 * vehicle_width\nmax_allowable_lateral_distance = to_road_shoulder_distance - road_shoulder_safety_margin - 0.5 * vehicle_width\nif(isOnRight(o))\n{\nshift_length = avoid_margin + overhang_distance\n}\nelse\n{\nshift_length = avoid_margin - overhang_distance\n}\n
The following figure illustrates these variables(This figure just shows the max value of lateral shift length).
"},{"location":"planning/behavior_path_avoidance_module/#rationale-of-having-safety-buffer-and-safety-margin","title":"Rationale of having safety buffer and safety margin","text":"To compute the shift length, additional parameters that can be tune are lateral_collision_safety_buffer
and road_shoulder_safety_margin
.
lateral_collision_safety_buffer
parameter is used to set a safety gap that will act as the final line of defense when computing avoidance path.lateral_collision_margin
might be changing according to the situation for various reasons. Therefore, lateral_collision_safety_buffer
will act as the final line of defense in case of the usage of lateral_collision_margin
fails.road_shoulder_safety_margin
will prevent the module from generating a path that might cause the vehicle to go too near the road shoulder or adjacent lane dividing line.The shift length is set as a constant value before the feature is implemented. Setting the shift length like this will cause the module to generate an avoidance path regardless of actual environmental properties. For example, the path might exceed the actual road boundary or go towards a wall. Therefore, to address this limitation, in addition to how to decide the target obstacle, the module also takes into account the following additional element
These elements are used to compute the distance from the object to the road's shoulder (to_road_shoulder_distance
). The parameters use_adjacent_lane
and use_opposite_lane
allows further configuration of the to to_road_shoulder_distance
. The following image illustrates the configuration.
If one of the following conditions is false
, then the shift point will not be generated.
avoid_margin = lateral_collision_margin + lateral_collision_safety_buffer + 0.5 * vehicle_width\navoid_margin <= (to_road_shoulder_distance - 0.5 * vehicle_width - road_shoulder_safety_margin)\n
The obstacle intrudes into the current driving path.
when the object is on right of the path
-overhang_dist<(lateral_collision_margin + lateral_collision_safety_buffer + 0.5 * vehicle_width)\n
when the object is on left of the path
overhang_dist<(lateral_collision_margin + lateral_collision_safety_buffer + 0.5 * vehicle_width)\n
Generate shift points for obstacles with given lateral jerk. These points are integrated to generate an avoidance path. The detailed process flow for each case corresponding to the obstacle placement are described below. The actual implementation is not separated for each case, but the function corresponding to multiple obstacle case (both directions)
is always running.
The lateral shift distance to the obstacle is calculated, and then the shift point is generated from the ego vehicle speed and the given lateral jerk as shown in the figure below. A smooth avoidance path is then calculated based on the shift point.
Additionally, the following processes are executed in special cases.
"},{"location":"planning/behavior_path_avoidance_module/#lateral-jerk-relaxation-conditions","title":"Lateral jerk relaxation conditions","text":"There is a problem that we can not know the actual speed during avoidance in advance. This is especially critical when the ego vehicle speed is 0. To solve that, this module provides a parameter for the minimum avoidance speed, which is used for the lateral jerk calculation when the vehicle speed is low.
Generate shift points for multiple obstacles. All of them are merged to generate new shift points along the reference path. The new points are filtered (e.g. remove small-impact shift points), and the avoidance path is computed for the filtered shift points.
Merge process of raw shift points: check the shift length on each path points. If the shift points are overlapped, the maximum shift value is selected for the same direction.
For the details of the shift point filtering, see filtering for shift points.
"},{"location":"planning/behavior_path_avoidance_module/#multiple-obstacle-case-both-direction","title":"Multiple obstacle case (both direction)","text":"Generate shift points for multiple obstacles. All of them are merged to generate new shift points. If there are areas where the desired shifts conflict in different directions, the sum of the maximum shift amounts of these areas is used as the final shift amount. The rest of the process is the same as in the case of one direction.
"},{"location":"planning/behavior_path_avoidance_module/#filtering-for-shift-points","title":"Filtering for shift points","text":"The shift points are modified by a filtering process in order to get the expected shape of the avoidance path. It contains the following filters.
This module has following parameters that sets which areas the path may extend into when generating an avoidance path.
# drivable area setting\nuse_adjacent_lane: true\nuse_opposite_lane: true\nuse_intersection_areas: false\nuse_hatched_road_markings: false\n
"},{"location":"planning/behavior_path_avoidance_module/#adjacent-lane","title":"adjacent lane","text":""},{"location":"planning/behavior_path_avoidance_module/#opposite-lane","title":"opposite lane","text":""},{"location":"planning/behavior_path_avoidance_module/#intersection-areas","title":"intersection areas","text":"The intersection area is defined on Lanelet map. See here
"},{"location":"planning/behavior_path_avoidance_module/#hatched-road-markings","title":"hatched road markings","text":"The hatched road marking is defined on Lanelet map. See here
"},{"location":"planning/behavior_path_avoidance_module/#safety-check","title":"Safety check","text":"The avoidance module has a safety check logic. The result of safe check is used for yield maneuver. It is enable by setting enable
as true
.
# safety check configuration\nenable: true # [-]\ncheck_current_lane: false # [-]\ncheck_shift_side_lane: true # [-]\ncheck_other_side_lane: false # [-]\ncheck_unavoidable_object: false # [-]\ncheck_other_object: true # [-]\n\n# collision check parameters\ncheck_all_predicted_path: false # [-]\ntime_horizon: 10.0 # [s]\nidling_time: 1.5 # [s]\nsafety_check_backward_distance: 50.0 # [m]\nsafety_check_accel_for_rss: 2.5 # [m/ss]\n
safety_check_backward_distance
is the parameter related to the safety check area. The module checks a collision risk for all vehicle that is within shift side lane and between object object_check_forward_distance
ahead and safety_check_backward_distance
behind.
NOTE: Even if a part of an object polygon overlaps the detection area, if the center of gravity of the object does not exist on the lane, the vehicle is excluded from the safety check target.
Judge the risk of collision based on ego future position and object prediction path. The module calculates Ego's future position in the time horizon (safety_check_time_horizon
), and use object's prediction path as object future position.
After calculating the future position of Ego and object, the module calculates the lateral/longitudinal deviation of Ego and the object. The module also calculates the lateral/longitudinal margin necessary to determine that it is safe to execute avoidance maneuver, and if both the lateral and longitudinal distances are less than the margins, it determines that there is a risk of a collision at that time.
The value of the longitudinal margin is calculated based on Responsibility-Sensitive Safety theory (RSS). The safety_check_idling_time
represents \\(T_{idle}\\), and safety_check_accel_for_rss
represents \\(a_{max}\\).
The lateral margin is changeable based on ego longitudinal velocity. If the vehicle is driving at a high speed, the lateral margin should be larger, and if the vehicle is driving at a low speed, the value of the lateral margin should be set to a smaller value. Thus, the lateral margin for each vehicle speed is set as a parameter, and the module determines the lateral margin from the current vehicle speed as shown in the following figure.
target_velocity_matrix:\ncol_size: 5\nmatrix: [2.78 5.56 ... 16.7 # target velocity [m/s]\n0.50 0.75 ... 1.50] # margin [m]\n
"},{"location":"planning/behavior_path_avoidance_module/#yield-maneuver","title":"Yield maneuver","text":""},{"location":"planning/behavior_path_avoidance_module/#overview","title":"Overview","text":"If an avoidance path can be generated and it is determined that avoidance maneuver should not be executed due to surrounding traffic conditions, the module executes YIELD maneuver. In yield maneuver, the vehicle slows down to the target vehicle velocity (yield_velocity
) and keep that speed until the module judge that avoidance path is safe. If the YIELD condition goes on and the vehicle approaches the avoidance target, it stops at the avoidable position and waits until the safety is confirmed.
# For yield maneuver\nyield_velocity: 2.78 # [m/s]\n
NOTE: In yield maneuver, the vehicle decelerates target velocity under constraints.
nominal_deceleration: -1.0 # [m/ss]\nnominal_jerk: 0.5 # [m/sss]\n
If it satisfies following all of three conditions, the module inserts stop point in front of the avoidance target with an avoidable interval.
The module determines that it is NOT passable without avoidance if the object overhang is less than the threshold.
lateral_passable_collision_margin: 0.5 # [-]\n
\\[ L_{overhang} < \\frac{W}{2} + L_{margin} (not passable) \\] The \\(W\\) represents vehicle width, and \\(L_{margin}\\) represents lateral_passable_collision_margin
.
The current behavior in unsafe condition is just slow down and it is so conservative. It is difficult to achieve aggressive behavior in the current architecture because of modularity. There are many modules in autoware that change the vehicle speed, and the avoidance module cannot know what speed planning they will output, so it is forced to choose a behavior that is as independent of other modules' processing as possible.
"},{"location":"planning/behavior_path_avoidance_module/#limitation2","title":"Limitation2","text":"The YIELD maneuver is executed ONLY when the vehicle has NOT initiated avoidance maneuver. The module has a threshold parameter (avoidance_initiate_threshold
) for the amount of shifting and determines that the vehicle is initiating avoidance if the vehicle current shift exceeds the threshold.
If enable_cancel_maneuver
parameter is true, Avoidance Module takes different actions according to the situations as follows:
If enable_cancel_maneuver
parameter is false, Avoidance Module doesn't revert generated avoidance path even if path objects are gone.
WIP
"},{"location":"planning/behavior_path_avoidance_module/#parameters","title":"Parameters","text":"The avoidance specific parameter configuration file can be located at src/autoware/launcher/planning_launch/config/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/avoidance/avoidance.param.yaml
.
namespace: avoidance.
use_adjacent_lane
must be true
to take effects true use_intersection_areas [-] bool Extend drivable to intersection area. false use_hatched_road_markings [-] bool Extend drivable to hatched road marking area. false Name Unit Type Description Default value output_debug_marker [-] bool Flag to publish debug marker (set false
as default since it takes considerable cost). false output_debug_info [-] bool Flag to print debug info (set false
as default since it takes considerable cost). false"},{"location":"planning/behavior_path_avoidance_module/#avoidance-target-filtering-parameters","title":"Avoidance target filtering parameters","text":"namespace: avoidance.target_object.
This module supports all object classes, and it can set following parameters independently.
car:\nis_target: true # [-]\nmoving_speed_threshold: 1.0 # [m/s]\nmoving_time_threshold: 1.0 # [s]\nmax_expand_ratio: 0.0 # [-]\nenvelope_buffer_margin: 0.3 # [m]\navoid_margin_lateral: 1.0 # [m]\nsafety_buffer_lateral: 0.7 # [m]\nsafety_buffer_longitudinal: 0.0 # [m]\n
Name Unit Type Description Default value is_target [-] bool By setting this flag true
, this module avoid those class objects. false moving_speed_threshold [m/s] double Objects with speed greater than this will be judged as moving ones. 1.0 moving_time_threshold [s] double Objects keep moving longer duration than this will be excluded from avoidance target. 1.0 envelope_buffer_margin [m] double The buffer between raw boundary box of detected objects and enveloped polygon that is used for avoidance path generation. 0.3 avoid_margin_lateral [m] double The lateral distance between ego and avoidance targets. 1.0 safety_buffer_lateral [m] double Creates an additional lateral gap that will prevent the vehicle from getting to near to the obstacle. 0.5 safety_buffer_longitudinal [m] double Creates an additional longitudinal gap that will prevent the vehicle from getting to near to the obstacle. 0.0 Parameters for the logic to compensate perception noise of the far objects.
Name Unit Type Description Default value max_expand_ratio [-] double This value will be appliedenvelope_buffer_margin
according to the distance between the ego and object. 0.0 lower_distance_for_polygon_expansion [-] double If the distance between the ego and object is less than this, the expand ratio will be zero. 30.0 upper_distance_for_polygon_expansion [-] double If the distance between the ego and object is larger than this, the expand ratio will be max_expand_ratio
. 100.0 namespace: avoidance.target_filtering.
namespace: avoidance.safety_check.
namespace: avoidance.avoidance.lateral.
namespace: avoidance.avoidance.longitudinal.
namespace: avoidance.yield.
namespace: avoidance.stop.
namespace: avoidance.constraints.
TRUE: allow to control breaking mildness
false namespace: avoidance.constraints.lateral.
namespace: avoidance.constraints.longitudinal.
(*2) If there are multiple vehicles in a row to be avoided, no new avoidance path will be generated unless their lateral margin difference exceeds this value.
"},{"location":"planning/behavior_path_avoidance_module/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"Safety Check
Consideration of the speed of the avoidance target
Cancel avoidance when target disappears
Improved performance of avoidance target selection
5m
), but small resolution should be applied for complex paths.Developers can see what is going on in each process by visualizing all the avoidance planning process outputs. The example includes target vehicles, shift points for each object, shift points after each filtering process, etc.
To enable the debug marker, execute ros2 param set /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner avoidance.publish_debug_marker true
(no restart is needed) or simply set the publish_debug_marker
to true
in the avoidance.param.yaml
for permanent effect (restart is needed). Then add the marker /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/avoidance
in rviz2
.
If for some reason, no shift point is generated for your object, you can check for the failure reason via ros2 topic echo
.
To print the debug message, just run the following
ros2 topic echo /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/avoidance_debug_message_array\n
"},{"location":"planning/behavior_path_dynamic_avoidance_module/","title":"Dynamic avoidance design","text":""},{"location":"planning/behavior_path_dynamic_avoidance_module/#dynamic-avoidance-design","title":"Dynamic avoidance design","text":""},{"location":"planning/behavior_path_dynamic_avoidance_module/#purpose-role","title":"Purpose / Role","text":"This is a module designed for avoiding obstacles which are running. Static obstacles such as parked vehicles are dealt with by the avoidance module.
This module is under development. In the current implementation, the dynamic obstacles to avoid is extracted from the drivable area. Then the motion planner, in detail obstacle_avoidance_planner, will generate an avoiding trajectory.
"},{"location":"planning/behavior_path_dynamic_avoidance_module/#overview-of-drivable-area-modification","title":"Overview of drivable area modification","text":""},{"location":"planning/behavior_path_dynamic_avoidance_module/#filtering-obstacles-to-avoid","title":"Filtering obstacles to avoid","text":"The dynamics obstacles meeting the following condition will be avoided.
target_object.*
.target_object.min_obstacle_vel
.To realize dynamic obstacles for avoidance, the time dimension should be take into an account considering the dynamics. However, it will make the planning problem much harder to solve. Therefore, we project the time dimension to the 2D pose dimension.
Currently, the predicted paths of predicted objects are not so stable. Therefore, instead of using the predicted paths, we assume that the obstacle will run parallel to the ego's path.
First, a maximum lateral offset to avoid is calculated as follows. The polygon's width to extract from the drivable area is the obstacle width and double drivable_area_generation.lat_offset_from_obstacle
. We can limit the lateral shift offset by drivable_area_generation.max_lat_offset_to_avoid
.
Then, extracting the same directional and opposite directional obstacles from the drivable area will work as follows considering TTC (time to collision). Regarding the same directional obstacles, obstacles whose TTC is negative will be ignored (e.g. The obstacle is in front of the ego, and the obstacle's velocity is larger than the ego's velocity.).
Same directional obstacles
Opposite directional obstacles
"},{"location":"planning/behavior_path_dynamic_avoidance_module/#parameters","title":"Parameters","text":"Name Unit Type Description Default value target_object.car [-] bool The flag whether to avoid cars or not true target_object.truck [-] bool The flag whether to avoid trucks or not true ... [-] bool ... ... target_object.min_obstacle_vel [m/s] double Minimum obstacle velocity to avoid 1.0 drivable_area_generation.lat_offset_from_obstacle [m] double Lateral offset to avoid from obstacles 0.8 drivable_area_generation.max_lat_offset_to_avoid [m] double Maximum lateral offset to avoid 0.5 drivable_area_generation.overtaking_object.max_time_to_collision [s] double Maximum value when calculating time to collision 3.0 drivable_area_generation.overtaking_object.start_duration_to_avoid [s] double Duration to consider avoidance before passing by obstacles 4.0 drivable_area_generation.overtaking_object.end_duration_to_avoid [s] double Duration to consider avoidance after passing by obstacles 5.0 drivable_area_generation.overtaking_object.duration_to_hold_avoidance [s] double Duration to hold avoidance after passing by obstacles 3.0 drivable_area_generation.oncoming_object.max_time_to_collision [s] double Maximum value when calculating time to collision 3.0 drivable_area_generation.oncoming_object.start_duration_to_avoid [s] double Duration to consider avoidance before passing by obstacles 9.0 drivable_area_generation.oncoming_object.end_duration_to_avoid [s] double Duration to consider avoidance after passing by obstacles 0.0"},{"location":"planning/behavior_path_goal_planner_module/","title":"Goal Planner design","text":""},{"location":"planning/behavior_path_goal_planner_module/#goal-planner-design","title":"Goal Planner design","text":""},{"location":"planning/behavior_path_goal_planner_module/#purpose-role","title":"Purpose / Role","text":"Plan path around the goal.
If goal modification is not allowed, park at the designated fixed goal. (fixed_goal_planner
in the figure below) When allowed, park in accordance with the specified policy(e.g pull over on left/right side of the lane). (rough_goal_planner
in the figure below). Currently rough goal planner only support pull_over feature, but it would be desirable to be able to accommodate various parking policies in the future.
Either one is activated when all conditions are met.
"},{"location":"planning/behavior_path_goal_planner_module/#fixed_goal_planner","title":"fixed_goal_planner","text":"allow_goal_modification=false
by default.If the target path contains a goal, modify the points of the path so that the path and the goal are connected smoothly. This process will change the shape of the path by the distance of refine_goal_search_radius_range
from the goal. Note that this logic depends on the interpolation algorithm that will be executed in a later module (at the moment it uses spline interpolation), so it needs to be updated in the future.
pull_over_minimum_request_length
.allow_goal_modification=true
.2D Rough Goal Pose
with the key bind r
in RViz, but in the future there will be a panel of tools to manipulate various Route API from RViz.pull_over_minimum_request_length
.road_shoulder
.1m
).0.01m/s
).Generate footprints from ego-vehicle path points and determine obstacle collision from the value of occupancy_grid of the corresponding cell.
"},{"location":"planning/behavior_path_goal_planner_module/#parameters-for-occupancy-grid-based-collision-check","title":"Parameters for occupancy grid based collision check","text":"Name Unit Type Description Default value use_occupancy_grid_for_goal_search [-] bool flag whether to use occupancy grid for goal search collision check true use_occupancy_grid_for_goal_longitudinal_margin [-] bool flag whether to use occupancy grid for keeping longitudinal margin false use_occupancy_grid_for_path_collision_check [-] bool flag whether to use occupancy grid for collision check false occupancy_grid_collision_check_margin [m] double margin to calculate ego-vehicle cells from footprint. 0.0 theta_size [-] int size of theta angle to be considered. angular resolution for collision check will be 2\\(\\pi\\) / theta_size [rad]. 360 obstacle_threshold [-] int threshold of cell values to be considered as obstacles 60"},{"location":"planning/behavior_path_goal_planner_module/#object-recognition-based-collision-check","title":"object recognition based collision check","text":""},{"location":"planning/behavior_path_goal_planner_module/#parameters-for-object-recognition-based-collision-check","title":"Parameters for object recognition based collision check","text":"Name Unit Type Description Default value use_object_recognition [-] bool flag whether to use object recognition for collision check true object_recognition_collision_check_margin [m] double margin to calculate ego-vehicle cells from footprint. 0.6 object_recognition_collision_check_max_extra_stopping_margin [m] double maximum value when adding longitudinal distance margin for collision check considering stopping distance 1.0 detection_bound_offset [m] double expand pull over lane with this offset to make detection area for collision check of path generation 15.0"},{"location":"planning/behavior_path_goal_planner_module/#goal-search","title":"Goal Search","text":"If it is not possible to park safely at a given goal, /planning/scenario_planning/modified_goal
is searched for in certain range of the shoulder lane.
goal search video
"},{"location":"planning/behavior_path_goal_planner_module/#parameters-for-goal-search","title":"Parameters for goal search","text":"Name Unit Type Description Default value goal_priority [-] string In caseminimum_weighted_distance
, sort by the weighted sum of lateral and longitudinal distances. In case minimum_longitudinal_distance
, sort with smaller longitudinal distances taking precedence over smaller lateral distances. minimum_weighted_distance
prioritize_goals_before_objects [-] bool If there are objects that may need to be avoided, prioritize the goal in front of them true forward_goal_search_length [m] double length of forward range to be explored from the original goal 20.0 backward_goal_search_length [m] double length of backward range to be explored from the original goal 20.0 goal_search_interval [m] double distance interval for goal search 2.0 longitudinal_margin [m] double margin between ego-vehicle at the goal position and obstacles 3.0 max_lateral_offset [m] double maximum offset of goal search in the lateral direction 0.5 lateral_offset_interval [m] double distance interval of goal search in the lateral direction 0.25 ignore_distance_from_lane_start [m] double distance from start of pull over lanes for ignoring goal candidates 0.0 ignore_distance_from_lane_start [m] double distance from start of pull over lanes for ignoring goal candidates 0.0 margin_from_boundary [m] double distance margin from edge of the shoulder lane 0.5"},{"location":"planning/behavior_path_goal_planner_module/#pull-over","title":"Pull Over","text":"There are three path generation methods. The path is generated with a certain margin (default: 0.5 m
) from the boundary of shoulder lane.
efficient_path
use a goal that can generate an efficient path which is set in efficient_path_order
. In case close_goal
use the closest goal to the original one. efficient_path efficient_path_order [-] string efficient order of pull over planner along lanes\u3000excluding freespace pull over [\"SHIFT\", \"ARC_FORWARD\", \"ARC_BACKWARD\"]"},{"location":"planning/behavior_path_goal_planner_module/#shift-parking","title":"shift parking","text":"Pull over distance is calculated by the speed, lateral deviation, and the lateral jerk. The lateral jerk is searched for among the predetermined minimum and maximum values, and the one satisfies ready conditions described above is output.
shift_parking video
"},{"location":"planning/behavior_path_goal_planner_module/#parameters-for-shift-parking","title":"Parameters for shift parking","text":"Name Unit Type Description Default value enable_shift_parking [-] bool flag whether to enable shift parking true shift_sampling_num [-] int Number of samplings in the minimum to maximum range of lateral_jerk 4 maximum_lateral_jerk [m/s3] double maximum lateral jerk 2.0 minimum_lateral_jerk [m/s3] double minimum lateral jerk 0.5 deceleration_interval [m] double distance of deceleration section 15.0 after_shift_straight_distance [m] double straight line distance after pull over end point 1.0"},{"location":"planning/behavior_path_goal_planner_module/#geometric-parallel-parking","title":"geometric parallel parking","text":"Generate two arc paths with discontinuous curvature. It stops twice in the middle of the path to control the steer on the spot. There are two path generation methods: forward and backward. See also [1] for details of the algorithm. There is also a simple python implementation.
"},{"location":"planning/behavior_path_goal_planner_module/#parameters-geometric-parallel-parking","title":"Parameters geometric parallel parking","text":"Name Unit Type Description Default value arc_path_interval [m] double interval between arc path points 1.0 pull_over_max_steer_rad [rad] double maximum steer angle for path generation. it may not be possible to control steer up to max_steer_angle in vehicle_info when stopped 0.35"},{"location":"planning/behavior_path_goal_planner_module/#arc-forward-parking","title":"arc forward parking","text":"Generate two forward arc paths.
arc_forward_parking video
"},{"location":"planning/behavior_path_goal_planner_module/#parameters-arc-forward-parking","title":"Parameters arc forward parking","text":"Name Unit Type Description Default value enable_arc_forward_parking [-] bool flag whether to enable arc forward parking true after_forward_parking_straight_distance [m] double straight line distance after pull over end point 2.0 forward_parking_velocity [m/s] double velocity when forward parking 1.38 forward_parking_lane_departure_margin [m/s] double lane departure margin for front left corner of ego-vehicle when forward parking 0.0"},{"location":"planning/behavior_path_goal_planner_module/#arc-backward-parking","title":"arc backward parking","text":"Generate two backward arc paths.
.
arc_backward_parking video
"},{"location":"planning/behavior_path_goal_planner_module/#parameters-arc-backward-parking","title":"Parameters arc backward parking","text":"Name Unit Type Description Default value enable_arc_backward_parking [-] bool flag whether to enable arc backward parking true after_backward_parking_straight_distance [m] double straight line distance after pull over end point 2.0 backward_parking_velocity [m/s] double velocity when backward parking -1.38 backward_parking_lane_departure_margin [m/s] double lane departure margin for front right corner of ego-vehicle when backward 0.0"},{"location":"planning/behavior_path_goal_planner_module/#freespace-parking","title":"freespace parking","text":"If the vehicle gets stuck with lane_parking
, run freespace_parking
. To run this feature, you need to set parking_lot
to the map, activate_by_scenario
of costmap_generator to false
and enable_freespace_parking
to true
Simultaneous execution with avoidance_module
in the flowchart is under development.
See freespace_planner for other parameters.
"},{"location":"planning/behavior_path_lane_change_module/","title":"Lane Change design","text":""},{"location":"planning/behavior_path_lane_change_module/#lane-change-design","title":"Lane Change design","text":"The Lane Change module is activated when lane change is needed and can be safely executed.
"},{"location":"planning/behavior_path_lane_change_module/#lane-change-requirement","title":"Lane Change Requirement","text":"preferred_lane
.The lane change candidate path is divided into two phases: preparation and lane-changing. The following figure illustrates each phase of the lane change candidate path.
"},{"location":"planning/behavior_path_lane_change_module/#preparation-phase","title":"Preparation phase","text":"The preparation trajectory is the candidate path's first and the straight portion generated along the ego vehicle's current lane. The length of the preparation trajectory is computed as follows.
lane_change_prepare_distance = current_speed * lane_change_prepare_duration + 0.5 * deceleration * lane_change_prepare_duration^2\n
During the preparation phase, the turn signal will be activated when the remaining distance is equal to or less than lane_change_search_distance
.
The lane-changing phase consist of the shifted path that moves ego from current lane to the target lane. Total distance of lane-changing phase is as follows. Note that during the lane changing phase, the ego vehicle travels at a constant speed.
lane_change_prepare_velocity = std::max(current_speed + deceleration * lane_change_prepare_duration, minimum_lane_changing_velocity)\nlane_changing_distance = lane_change_prepare_velocity * lane_changing_duration\n
The backward_length_buffer_for_end_of_lane
is added to allow a margin for possible delays, such as control or mechanical delay during brake lag.
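For illustration, the lengths above can be combined as in the following sketch. The helper name and the placement of the end-of-lane buffer are assumptions for illustration only; parameter names follow the equations above.
#include <algorithm>\n\n// Minimal sketch combining the equations above (assumed helper, not the module's actual code).\ndouble calcTotalLaneChangeDistance(const double current_speed, const double deceleration, const double prepare_duration, const double lane_changing_duration, const double minimum_lane_changing_velocity, const double end_of_lane_buffer)\n{\nconst double prepare_distance = current_speed * prepare_duration + 0.5 * deceleration * prepare_duration * prepare_duration;\nconst double prepare_velocity = std::max(current_speed + deceleration * prepare_duration, minimum_lane_changing_velocity);\nconst double lane_changing_distance = prepare_velocity * lane_changing_duration;\n// the buffer allows a margin for control or mechanical delays\nreturn prepare_distance + lane_changing_distance + end_of_lane_buffer;\n}\n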
Lane change velocity is affected by the ego vehicle's current velocity. High velocity requires a longer preparation and lane-changing distance. However, we also need to plan lane-changing trajectories in case the ego vehicle slows down. Candidate paths that assume the ego vehicle slows down are computed by substituting predetermined deceleration values into the prepare_length
, prepare_velocity
and lane_changing_length
equations.
The predetermined longitudinal acceleration values are a set of values that starts from longitudinal_acceleration = maximum_longitudinal_acceleration and decreases by longitudinal_acceleration_resolution until it reaches longitudinal_acceleration = -maximum_longitudinal_deceleration. Both maximum_longitudinal_acceleration and maximum_longitudinal_deceleration are computed from the limits defined in the common.param file (e.g. normal.min_acc) and in the lane change parameters:
maximum_longitudinal_acceleration = min(common_param.max_acc, lane_change_param.max_acc)\nmaximum_longitudinal_deceleration = max(common_param.min_acc, lane_change_param.min_acc)\n
where common_param holds the vehicle's common parameters, which define the vehicle's common maximum longitudinal acceleration and deceleration, whereas lane_change_param holds the maximum longitudinal acceleration and deceleration for the lane change module. For example, if a user sets common_param.max_acc=1.0 and lane_change_param.max_acc=0.0, maximum_longitudinal_acceleration becomes 0.0, and the ego vehicle does not accelerate in the lane-changing phase.
The longitudinal_acceleration_resolution
is determined by the following:
longitudinal_acceleration_resolution = (maximum_longitudinal_acceleration - minimum_longitudinal_acceleration) / longitudinal_acceleration_sampling_num\n
Note that when the current_velocity
is lower than minimum_lane_changing_velocity
, the vehicle needs to accelerate its velocity to minimum_lane_changing_velocity
. Therefore, the longitudinal acceleration becomes a positive value (the vehicle does not decelerate).
The following figure illustrates when longitudinal_acceleration_sampling_num = 4
. Assuming that maximum_deceleration = 1.0
then a0 == 0.0 == no deceleration
, a1 == 0.25
, a2 == 0.5
, a3 == 0.75
and a4 == 1.0 == maximum_deceleration
. a0
is the expected lane change trajectory when the ego vehicle does not decelerate, and a1's path is the expected lane change trajectory when the ego vehicle decelerates at 0.25 m/s^2.
Which path is ultimately chosen depends on the validity and collision checks.
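As a sketch, the sampling described above can be written as follows (hypothetical helper; names follow the parameters above):
#include <vector>\n\n// Sample longitudinal accelerations from the maximum down to the minimum\n// (negative values correspond to deceleration), as described above.\nstd::vector<double> sampleLongitudinalAcceleration(const double maximum_longitudinal_acceleration, const double minimum_longitudinal_acceleration, const int longitudinal_acceleration_sampling_num)\n{\nconst double resolution = (maximum_longitudinal_acceleration - minimum_longitudinal_acceleration) / longitudinal_acceleration_sampling_num;\nstd::vector<double> sampled_values;\nfor (int i = 0; i <= longitudinal_acceleration_sampling_num; ++i) {\nsampled_values.push_back(maximum_longitudinal_acceleration - i * resolution);\n}\nreturn sampled_values;\n}\n
With maximum_longitudinal_acceleration = 0.0, minimum_longitudinal_acceleration = -1.0 and longitudinal_acceleration_sampling_num = 4, this yields {0.0, -0.25, -0.5, -0.75, -1.0}, i.e. the deceleration magnitudes 0.0 to 1.0 of a0 to a4 in the figure above.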
"},{"location":"planning/behavior_path_lane_change_module/#multiple-candidate-path-samples-lateral-acceleration","title":"Multiple candidate path samples (lateral acceleration)","text":"In addition to sampling longitudinal acceleration, we also sample lane change paths by adjusting the value of lateral acceleration. Since lateral acceleration influences the duration of a lane change, a lower lateral acceleration value results in a longer lane change path, while a higher lateral acceleration value leads to a shorter lane change path. This allows the lane change module to generate a shorter lane change path by increasing the lateral acceleration when there is limited space for the lane change.
The maximum and minimum lateral accelerations are defined in the lane change parameter file as a map. The range of lateral acceleration is determined for each velocity by linearly interpolating the values in the map. Let's assume we have the following map
Ego Velocity Minimum lateral acceleration Maximum lateral acceleration 0.0 0.2 0.3 2.0 0.2 0.4 4.0 0.3 0.4 6.0 0.3 0.5 In this case, when the current velocity of the ego vehicle is 3.0, the minimum and maximum lateral accelerations are 0.25 and 0.4 respectively. These values are obtained by linearly interpolating the second and third rows of the map, which provide the minimum and maximum lateral acceleration values.
Within this range, we sample the lateral acceleration for the ego vehicle. Similar to the method used for sampling longitudinal acceleration, the resolution of lateral acceleration (lateral_acceleration_resolution) is determined by the following:
lateral_acceleration_resolution = (maximum_lateral_acceleration - minimum_lateral_acceleration) / lateral_acceleration_sampling_num\n
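The interpolation and the sampled range can be sketched as follows (hypothetical helper; map rows are {velocity, {minimum lateral acceleration, maximum lateral acceleration}} as in the example above):
#include <utility>\n#include <vector>\n\nusing LatAccMap = std::vector<std::pair<double, std::pair<double, double>>>;  // {velocity, {min_acc, max_acc}}\n\n// Linearly interpolate the minimum/maximum lateral acceleration for a given velocity.\nstd::pair<double, double> interpolateLateralAccelerationRange(const LatAccMap & map, const double velocity)\n{\nif (velocity <= map.front().first) return map.front().second;\nif (velocity >= map.back().first) return map.back().second;\nfor (size_t i = 1; i < map.size(); ++i) {\nif (velocity <= map[i].first) {\nconst double ratio = (velocity - map[i - 1].first) / (map[i].first - map[i - 1].first);\nconst double min_acc = map[i - 1].second.first + ratio * (map[i].second.first - map[i - 1].second.first);\nconst double max_acc = map[i - 1].second.second + ratio * (map[i].second.second - map[i - 1].second.second);\nreturn {min_acc, max_acc};\n}\n}\nreturn map.back().second;\n}\n
For the example map above, a velocity of 3.0 returns {0.25, 0.4}, as described.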
"},{"location":"planning/behavior_path_lane_change_module/#candidate-paths-validity-check","title":"Candidate Path's validity check","text":"A candidate path is valid if the total lane change distance is less than
The goal must also be in the list of the preferred lanes.
The following flow chart illustrates the validity check.
"},{"location":"planning/behavior_path_lane_change_module/#candidate-paths-safety-check","title":"Candidate Path's Safety check","text":"See safety check utils explanation
"},{"location":"planning/behavior_path_lane_change_module/#objects-selection-and-classification","title":"Objects selection and classification","text":"First, we divide the target objects into obstacles in the target lane, obstacles in the current lane, and obstacles in other lanes. Target lane indicates the lane that the ego vehicle is going to reach after the lane change and current lane mean the current lane where the ego vehicle is following before the lane change. Other lanes are lanes that do not belong to the target and current lanes. The following picture describes objects on each lane. Note that users can remove objects either on current and other lanes from safety check by changing the flag, which are check_objects_on_current_lanes
and check_objects_on_other_lanes
.
Furthermore, to change lanes behind a vehicle waiting at a traffic light, we skip the safety check for vehicles stopped near the traffic light. The explanation for parked car detection is written in the documentation for the avoidance module.
"},{"location":"planning/behavior_path_lane_change_module/#collision-check-in-prepare-phase","title":"Collision check in prepare phase","text":"The ego vehicle may need to secure ample inter-vehicle distance ahead of the target vehicle before attempting a lane change. The flag enable_collision_check_at_prepare_phase
can be enabled to gain this behavior. The following image illustrates the differences between the false
and true
cases.
The parameter prepare_phase_ignore_target_speed_thresh
can be configured to ignore the prepare phase collision check for targets whose speeds are less than a specific threshold, such as stationary or very slow-moving objects.
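Together, the two parameters act as a simple filter, as in this sketch (hypothetical helper):
// Whether a target object must be collision-checked during the prepare phase.\nbool needsPreparePhaseCollisionCheck(const bool enable_collision_check_at_prepare_phase, const double object_speed, const double prepare_phase_ignore_target_speed_thresh)\n{\nif (!enable_collision_check_at_prepare_phase) return false;\n// stationary or very slow-moving objects are ignored\nreturn object_speed >= prepare_phase_ignore_target_speed_thresh;\n}\n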
When driving on the public road with other vehicles, there exist scenarios where lane changes cannot be executed. Suppose the candidate path is evaluated as unsafe, for example, due to incoming vehicles in the adjacent lane. In that case, the ego vehicle can't change lanes, and it is impossible to reach the goal. Therefore, the ego vehicle must stop earlier at a certain distance and wait for the adjacent lane to be evaluated as safe. The minimum stopping distance can be computed from shift length and minimum lane changing velocity.
lane_changing_time = f(shift_length, lat_acceleration, lat_jerk)\nminimum_lane_change_distance = minimum_prepare_length + minimum_lane_changing_velocity * lane_changing_time + lane_change_finish_judge_buffer\n
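A sketch of this computation is shown below. As one possible realization of f, a pure constant-jerk lateral profile with no lateral acceleration limit gives lane_changing_time = cbrt(32 * shift_length / lateral_jerk); this closed form is an illustrative assumption, not necessarily the module's exact formula.
#include <cmath>\n\ndouble calcMinimumLaneChangeDistance(const double shift_length, const double lateral_jerk, const double minimum_prepare_length, const double minimum_lane_changing_velocity, const double lane_change_finish_judge_buffer)\n{\n// f(shift_length, lat_acceleration, lat_jerk): constant-jerk S-curve assumption\nconst double lane_changing_time = std::cbrt(32.0 * shift_length / lateral_jerk);\nreturn minimum_prepare_length + minimum_lane_changing_velocity * lane_changing_time + lane_change_finish_judge_buffer;\n}\n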
The following figure illustrates when the lane is blocked in multiple lane changes cases.
"},{"location":"planning/behavior_path_lane_change_module/#stopping-position-when-an-object-exists-ahead","title":"Stopping position when an object exists ahead","text":"When an obstacle is in front of the ego vehicle, stop with keeping a distance for lane change. The position to be stopped depends on the situation, such as when the lane change is blocked by the target lane obstacle, or when the lane change is not needed immediately.The following shows the division in that case.
"},{"location":"planning/behavior_path_lane_change_module/#when-the-ego-vehicle-is-near-the-end-of-the-lane-change","title":"When the ego vehicle is near the end of the lane change","text":"Regardless of the presence or absence of objects in the lane change target lane, stop by keeping the distance necessary for lane change to the object ahead.
"},{"location":"planning/behavior_path_lane_change_module/#when-the-ego-vehicle-is-not-near-the-end-of-the-lane-change","title":"When the ego vehicle is not near the end of the lane change","text":"If there are NO objects in the lane change section of the target lane, stop by keeping the distance necessary for lane change to the object ahead.
If there are objects in the lane change section of the target lane, stop WITHOUT keeping the distance necessary for lane change to the object ahead.
"},{"location":"planning/behavior_path_lane_change_module/#when-the-target-lane-is-far-away","title":"When the target lane is far away","text":"When the target lane for lane change is far away and not next to the current lane, do not keep the distance necessary for lane change to the object ahead.
"},{"location":"planning/behavior_path_lane_change_module/#lane-change-when-stuck","title":"Lane Change When Stuck","text":"The ego vehicle is considered stuck if it is stopped and meets any of the following conditions:
In this case, the safety check for lane change is relaxed compared to normal times. Please refer to the 'stuck' section under 'Collision checks during lane change' for more details. The stopping behavior that keeps a margin against the forward obstacle, described in the previous section, is used to achieve this feature.
"},{"location":"planning/behavior_path_lane_change_module/#lane-change-regulations","title":"Lane change regulations","text":"If you want to regulate lane change on crosswalks or intersections, the lane change module finds a lane change path excluding it includes crosswalks or intersections. To regulate lane change on crosswalks or intersections, change regulation.crosswalk
or regulation.intersection
to true
. However, if the ego vehicle gets stuck, lane changes in crosswalks/intersections are enabled so that it can escape. If the ego vehicle stops for more than stuck_detection.stop_time
seconds, it is regarded as stuck. If the ego vehicle's velocity is smaller than stuck_detection.velocity
, it is regarded as stopping.
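The stuck detection above reduces to two thresholds, as in this sketch (hypothetical helper):
// Regarded as stopping while velocity < stuck_detection.velocity;\n// regarded as stuck once stopping has lasted longer than stuck_detection.stop_time.\nbool isEgoStuck(const double ego_velocity, const double stop_duration_sec, const double velocity_threshold, const double stop_time_threshold)\n{\nconst bool is_stopping = ego_velocity < velocity_threshold;\nreturn is_stopping && stop_duration_sec > stop_time_threshold;\n}\n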
The abort process may result in three different outcomes: Cancel, Abort, and Stop/Cruise.
The following depicts the flow of the abort lane change check.
"},{"location":"planning/behavior_path_lane_change_module/#cancel","title":"Cancel","text":"Suppose the lane change trajectory is evaluated as unsafe. In that case, if the ego vehicle has not departed from the current lane yet, the trajectory will be reset, and the ego vehicle will resume the lane following the maneuver.
The function can be enabled by setting enable_on_prepare_phase
to true
.
The following image illustrates the cancel process.
"},{"location":"planning/behavior_path_lane_change_module/#abort","title":"Abort","text":"Assume the ego vehicle has already departed from the current lane. In that case, it is dangerous to cancel the path, and it will cause the ego vehicle to change the heading direction abruptly. In this case, planning a trajectory that allows the ego vehicle to return to the current path while minimizing the heading changes is necessary. In this case, the lane change module will generate an abort path. The following images show an example of the abort path. Do note that the function DOESN'T GUARANTEE a safe abort process, as it didn't check the presence of the surrounding objects and/or their reactions. The function can be enable manually by setting both enable_on_prepare_phase
and enable_on_lane_changing_phase
to true
. The parameter max_lateral_jerk
needs to be set to a high value in order for it to work.
The last behavior will also occur if the ego vehicle has departed from the current lane. If the abort function is disabled or the abort is no longer possible, the ego vehicle will attempt to stop or transition to the obstacle cruise mode. Do note that the module DOESN'T GUARANTEE safe maneuver due to the unexpected behavior that might've occurred during these critical scenarios. The following images illustrate the situation.
"},{"location":"planning/behavior_path_lane_change_module/#parameters","title":"Parameters","text":""},{"location":"planning/behavior_path_lane_change_module/#essential-lane-change-parameters","title":"Essential lane change parameters","text":"The following parameters are configurable in lane_change.param.yaml
.
backward_lane_length
[m] double The backward length to check incoming objects in lane change target lane. 200.0 prepare_duration
[s] double The preparation time for the ego vehicle to be ready to perform the lane change. 4.0 backward_length_buffer_for_end_of_lane
[m] double The end of lane buffer to ensure ego vehicle has enough distance to start lane change 3.0 backward_length_buffer_for_blocking_object
[m] double The end of lane buffer to ensure ego vehicle has enough distance to start lane change when there is an object in front 3.0 lane_change_finish_judge_buffer
[m] double The additional buffer used to confirm lane change process completion 3.0 finish_judge_lateral_threshold
[m] double Lateral distance threshold to confirm lane change process completion 0.2 lane_changing_lateral_jerk
[m/s3] double Lateral jerk value for lane change path generation 0.5 minimum_lane_changing_velocity
[m/s] double Minimum speed during lane changing process. 2.78 prediction_time_resolution
[s] double Time resolution for object's path interpolation and collision check. 0.5 longitudinal_acceleration_sampling_num
[-] int Number of possible lane-changing trajectories that are being influenced by longitudinal acceleration 5 lateral_acceleration_sampling_num
[-] int Number of possible lane-changing trajectories that are being influenced by lateral acceleration 3 object_check_min_road_shoulder_width
[m] double Width considered as a road shoulder if the lane does not have a road shoulder 0.5 object_shiftable_ratio_threshold
[-] double Vehicles around the center line within this distance ratio will be excluded from parking objects 0.6 min_length_for_turn_signal_activation
[m] double Turn signal will be activated if the ego vehicle approaches to this length from minimum lane change length 10.0 length_ratio_for_turn_signal_deactivation
[-] double Turn signal will be deactivated if the ego vehicle approaches to this length ratio for lane change finish point 0.8 max_longitudinal_acc
[-] double maximum longitudinal acceleration for lane change 1.0 min_longitudinal_acc
[-] double maximum longitudinal deceleration for lane change -1.0 lateral_acceleration.velocity
[m/s] double Reference velocity for lateral acceleration calculation (look up table) [0.0, 4.0, 10.0] lateral_acceleration.min_values
[m/ss] double Min lateral acceleration values corresponding to velocity (look up table) [0.15, 0.15, 0.15] lateral_acceleration.max_values
[m/ss] double Max lateral acceleration values corresponding to velocity (look up table) [0.5, 0.5, 0.5] target_object.car
[-] boolean Include car objects for safety check true target_object.truck
[-] boolean Include truck objects for safety check true target_object.bus
[-] boolean Include bus objects for safety check true target_object.trailer
[-] boolean Include trailer objects for safety check true target_object.unknown
[-] boolean Include unknown objects for safety check true target_object.bicycle
[-] boolean Include bicycle objects for safety check true target_object.motorcycle
[-] boolean Include motorcycle objects for safety check true target_object.pedestrian
[-] boolean Include pedestrian objects for safety check true"},{"location":"planning/behavior_path_lane_change_module/#lane-change-regulations_1","title":"Lane change regulations","text":"Name Unit Type Description Default value regulation.crosswalk
[-] boolean Regulate lane change on crosswalks false regulation.intersection
[-] boolean Regulate lane change on intersections false"},{"location":"planning/behavior_path_lane_change_module/#ego-vehicle-stuck-detection","title":"Ego vehicle stuck detection","text":"Name Unit Type Description Default value stuck_detection.velocity
[m/s] double Velocity threshold for ego vehicle stuck detection 0.1 stuck_detection.stop_time
[s] double Stop time threshold for ego vehicle stuck detection 3.0"},{"location":"planning/behavior_path_lane_change_module/#collision-checks-during-lane-change","title":"Collision checks during lane change","text":"The following parameters are configurable in behavior_path_planner.param.yaml
and lane_change.param.yaml
.
safety_check.execution.lateral_distance_max_threshold
[m] double The lateral distance threshold that is used to determine whether lateral distance between two object is enough and whether lane change is safe. 2.0 safety_check.execution.longitudinal_distance_min_threshold
[m] double The longitudinal distance threshold that is used to determine whether longitudinal distance between two object is enough and whether lane change is safe. 3.0 safety_check.execution.expected_front_deceleration
[m/s^2] double The front object's maximum deceleration when the front vehicle perform sudden braking. (*1) -1.0 safety_check.execution.expected_rear_deceleration
[m/s^2] double The rear object's maximum deceleration when the rear vehicle perform sudden braking. (*1) -1.0 safety_check.execution.rear_vehicle_reaction_time
[s] double The reaction time of the rear vehicle driver which starts from the driver noticing the sudden braking of the front vehicle until the driver step on the brake. 2.0 safety_check.execution.rear_vehicle_safety_time_margin
[s] double The time buffer for the rear vehicle to come into complete stop when its driver perform sudden braking. 2.0 safety_check.execution.enable_collision_check_at_prepare_phase
[-] boolean Perform collision check starting from prepare phase. If false
, collision check only evaluated for lane changing phase. true safety_check.execution.prepare_phase_ignore_target_speed_thresh
[m/s] double Ignore collision check in prepare phase of object speed that is lesser that the configured value. enable_collision_check_at_prepare_phase
must be true
0.1 safety_check.execution.check_objects_on_current_lanes
[-] boolean If true, the lane change module include objects on current lanes. true safety_check.execution.check_objects_on_other_lanes
[-] boolean If true, the lane change module include objects on other lanes. true safety_check.execution.use_all_predicted_path
[-] boolean If false, use only the predicted path that has the maximum confidence. true"},{"location":"planning/behavior_path_lane_change_module/#cancel_1","title":"cancel","text":"Name Unit Type Description Default value safety_check.cancel.lateral_distance_max_threshold
[m] double The lateral distance threshold that is used to determine whether lateral distance between two object is enough and whether lane change is safe. 1.5 safety_check.cancel.longitudinal_distance_min_threshold
[m] double The longitudinal distance threshold that is used to determine whether longitudinal distance between two object is enough and whether lane change is safe. 3.0 safety_check.cancel.expected_front_deceleration
[m/s^2] double The front object's maximum deceleration when the front vehicle perform sudden braking. (*1) -1.5 safety_check.cancel.expected_rear_deceleration
[m/s^2] double The rear object's maximum deceleration when the rear vehicle perform sudden braking. (*1) -2.5 safety_check.cancel.rear_vehicle_reaction_time
[s] double The reaction time of the rear vehicle driver which starts from the driver noticing the sudden braking of the front vehicle until the driver step on the brake. 2.0 safety_check.cancel.rear_vehicle_safety_time_margin
[s] double The time buffer for the rear vehicle to come into complete stop when its driver perform sudden braking. 2.5 safety_check.cancel.enable_collision_check_at_prepare_phase
[-] boolean Perform collision check starting from prepare phase. If false
, collision check only evaluated for lane changing phase. false safety_check.cancel.prepare_phase_ignore_target_speed_thresh
[m/s] double Ignore collision check in prepare phase of object speed that is lesser that the configured value. enable_collision_check_at_prepare_phase
must be true
0.2 safety_check.cancel.check_objects_on_current_lanes
[-] boolean If true, the lane change module include objects on current lanes. false safety_check.cancel.check_objects_on_other_lanes
[-] boolean If true, the lane change module include objects on other lanes. false safety_check.cancel.use_all_predicted_path
[-] boolean If false, use only the predicted path that has the maximum confidence. false"},{"location":"planning/behavior_path_lane_change_module/#stuck","title":"stuck","text":"Name Unit Type Description Default value safety_check.stuck.lateral_distance_max_threshold
[m] double The lateral distance threshold that is used to determine whether lateral distance between two object is enough and whether lane change is safe. 2.0 safety_check.stuck.longitudinal_distance_min_threshold
[m] double The longitudinal distance threshold that is used to determine whether longitudinal distance between two object is enough and whether lane change is safe. 3.0 safety_check.stuck.expected_front_deceleration
[m/s^2] double The front object's maximum deceleration when the front vehicle perform sudden braking. (*1) -1.0 safety_check.stuck.expected_rear_deceleration
[m/s^2] double The rear object's maximum deceleration when the rear vehicle perform sudden braking. (*1) -1.0 safety_check.stuck.rear_vehicle_reaction_time
[s] double The reaction time of the rear vehicle driver which starts from the driver noticing the sudden braking of the front vehicle until the driver step on the brake. 2.0 safety_check.stuck.rear_vehicle_safety_time_margin
[s] double The time buffer for the rear vehicle to come into complete stop when its driver perform sudden braking. 2.0 safety_check.stuck.enable_collision_check_at_prepare_phase
[-] boolean Perform collision check starting from prepare phase. If false
, collision check only evaluated for lane changing phase. true safety_check.stuck.prepare_phase_ignore_target_speed_thresh
[m/s] double Ignore collision check in prepare phase of object speed that is lesser that the configured value. enable_collision_check_at_prepare_phase
must be true
0.1 safety_check.stuck.check_objects_on_current_lanes
[-] boolean If true, the lane change module include objects on current lanes. true safety_check.stuck.check_objects_on_other_lanes
[-] boolean If true, the lane change module include objects on other lanes. true safety_check.stuck.use_all_predicted_path
[-] boolean If false, use only the predicted path that has the maximum confidence. true (*1) the value must be negative.
"},{"location":"planning/behavior_path_lane_change_module/#abort-lane-change","title":"Abort lane change","text":"The following parameters are configurable in lane_change.param.yaml
.
cancel.enable_on_prepare_phase
[-] boolean Enable cancel lane change true cancel.enable_on_lane_changing_phase
[-] boolean Enable abort lane change. false cancel.delta_time
[s] double The time taken to start steering to return to the center line. 3.0 cancel.duration
[s] double The time taken to complete returning to the center line. 3.0 cancel.max_lateral_jerk
[m/sss] double The maximum lateral jerk for abort path 1000.0 cancel.overhang_tolerance
[m] double Lane change cancel is prohibited if the vehicle head exceeds the lane boundary more than this tolerance distance 0.0"},{"location":"planning/behavior_path_lane_change_module/#debug","title":"Debug","text":"The following parameters are configurable in lane_change.param.yaml
.
publish_debug_marker
[-] boolean Flag to publish debug marker false"},{"location":"planning/behavior_path_lane_change_module/#debug-marker-visualization","title":"Debug Marker & Visualization","text":"To enable the debug marker, execute (no restart is needed)
ros2 param set /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner lane_change.publish_debug_marker true\n
or simply set the publish_debug_marker
to true
in the lane_change.param.yaml
for permanent effect (restart is needed).
Then add the marker
/planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/lane_change_left\n
in rviz2
.
Available information
The Behavior Path Planner's main objective is to significantly enhance the safety of autonomous vehicles by minimizing the risk of accidents. It improves driving efficiency through time conservation and underpins reliability with its rule-based approach. Additionally, it allows users to integrate their own custom behavior modules or use it with different types of vehicles, such as cars, buses, and delivery robots, as well as in various environments, from busy urban streets to open highways.
The module begins by thoroughly analyzing the ego vehicle's current situation, including its position, speed, and surrounding environment. This analysis leads to essential driving decisions about lane changes or stopping and subsequently generates a path that is both safe and efficient. It considers road geometry, traffic rules, and dynamic conditions while also incorporating obstacle avoidance to respond to static and dynamic obstacles such as other vehicles, pedestrians, or unexpected roadblocks, ensuring safe navigation.
Moreover, the planner actively interacts with other traffic participants, predicting their actions and accordingly adjusting the vehicle's path. This ensures not only the safety of the autonomous vehicle but also contributes to smooth traffic flow. Its adherence to traffic laws, including speed limits and traffic signals, further guarantees lawful and predictable driving behavior. The planner is also designed to minimize sudden or abrupt maneuvers, aiming for a comfortable and natural driving experience.
Note
The Planning Component Design Document outlines the foundational philosophy guiding the design and future development of the Behavior Path Planner module. We strongly encourage readers to consult this document to understand the rationale behind its current configuration and the direction of its ongoing development.
"},{"location":"planning/behavior_path_planner/#purpose-use-cases","title":"Purpose / Use Cases","text":"Essentially, the module has three primary responsibilities:
Behavior Path Planner has following scene modules
Name Description Details Lane Following this module generates reference path from lanelet centerline. LINK Avoidance this module generates avoidance path when there is objects that should be avoid. LINK Dynamic Avoidance WIP LINK Avoidance By Lane Change this module generates lane change path when there is objects that should be avoid. LINK Lane Change this module is performed when it is necessary and a collision check with other vehicles is cleared. LINK External Lane Change WIP LINK Start Planner this module is performed when ego-vehicle is in the road lane and goal is in the shoulder lane. ego-vehicle will stop at the goal. LINK Goal Planner this module is performed when ego-vehicle is stationary and footprint of ego-vehicle is included in shoulder lane. This module ends when ego-vehicle merges into the road. LINK Side Shift (for remote control) shift the path to left or right according to an external instruction. LINKNote
click on the following images to view the video of their execution
Note
Users can refer to Planning component design for some additional behavior.
"},{"location":"planning/behavior_path_planner/#how-to-add-or-implement-new-module","title":"How to add or implement new module?","text":"All scene modules are implemented by inheriting base class scene_module_interface.hpp
.
Warning
The remainder of this subsection is work in progress (WIP).
"},{"location":"planning/behavior_path_planner/#planner-manager","title":"Planner Manager","text":"The Planner Manager's responsibilities include:
Note
To check the scene module's transition, i.e.: registered, approved and candidate modules, set verbose: true
in the behavior path planner configuration file.
Note
For more in-depth information, refer to Manager design document.
"},{"location":"planning/behavior_path_planner/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"planning/behavior_path_planner/#input","title":"Input","text":"Name Required? Type Description ~/input/odometry \u25cbnav_msgs::msg::Odometry
for ego velocity. ~/input/accel \u25cb geometry_msgs::msg::AccelWithCovarianceStamped
for ego acceleration. ~/input/objects \u25cb autoware_auto_perception_msgs::msg::PredictedObjects
dynamic objects from perception module. ~/input/occupancy_grid_map \u25cb nav_msgs::msg::OccupancyGrid
occupancy grid map from perception module. This is used for only Goal Planner module. ~/input/traffic_signals \u25cb autoware_perception_msgs::msg::TrafficSignalArray
traffic signals information from the perception module ~/input/vector_map \u25cb autoware_auto_mapping_msgs::msg::HADMapBin
vector map information. ~/input/route \u25cb autoware_auto_mapping_msgs::msg::LaneletRoute
current route from start to goal. ~/input/scenario \u25cb tier4_planning_msgs::msg::Scenario
Launches behavior path planner if current scenario == Scenario:LaneDriving
. ~/input/lateral_offset \u25b3 tier4_planning_msgs::msg::LateralOffset
lateral offset to trigger side shift ~/system/operation_mode/state \u25cb autoware_adapi_v1_msgs::msg::OperationModeState
Allows planning module to know if vehicle is in autonomous mode or can be controlledref autoware_auto_planning_msgs::msg::PathWithLaneId
the path generated by modules. volatile
~/output/turn_indicators_cmd autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand
turn indicators command. volatile
~/output/hazard_lights_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand
hazard lights command. volatile
~/output/modified_goal autoware_planning_msgs::msg::PoseWithUuidStamped
output modified goal commands. transient_local
~/output/stop_reasons tier4_planning_msgs::msg::StopReasonArray
describe the reason for ego vehicle stop volatile
~/output/reroute_availability tier4_planning_msgs::msg::RerouteAvailability
the path the module is about to take. to be executed as soon as external approval is obtained. volatile
"},{"location":"planning/behavior_path_planner/#debug","title":"Debug","text":"Name Type Description QoS Durability ~/debug/avoidance_debug_message_array tier4_planning_msgs::msg::AvoidanceDebugMsgArray
debug message for avoidance. notify users reasons for avoidance path cannot be generated. volatile
~/debug/lane_change_debug_message_array tier4_planning_msgs::msg::LaneChangeDebugMsgArray
debug message for lane change. notify users unsafe reason during lane changing process volatile
~/debug/maximum_drivable_area visualization_msgs::msg::MarkerArray
shows maximum static drivable area. volatile
~/debug/turn_signal_info visualization_msgs::msg::MarkerArray
TBA volatile
~/debug/bound visualization_msgs::msg::MarkerArray
debug for static drivable area volatile
~/planning/path_candidate/* autoware_auto_planning_msgs::msg::Path
the path before approval. volatile
~/planning/path_reference/* autoware_auto_planning_msgs::msg::Path
reference path generated by each modules. volatile
Note
For specific information of which topics are being subscribed and published, refer to behavior_path_planner.xml.
"},{"location":"planning/behavior_path_planner/#how-to-enable-or-disable-the-modules","title":"How to enable or disable the modules","text":"Enabling and disabling the modules in the behavior path planner is primarily managed through two key files: default_preset.yaml
and behavior_path_planner.launch.xml
.
The default_preset.yaml
file acts as a configuration file for enabling or disabling specific modules within the planner. It contains a series of arguments which represent the behavior path planner's modules or features. For example:
launch_avoidance_module
: Set to true
to enable the avoidance module, or false
to disable it.Note
Click here to view the default_preset.yaml
.
The behavior_path_planner.launch.xml
file references the settings defined in default_preset.yaml
to apply the configurations when the behavior path planner's node is running. For instance, the parameter avoidance.enable_module
in
<param name=\"avoidance.enable_module\" value=\"$(var launch_avoidance_module)\"/>\n
corresponds to launch_avoidance_module from default_preset.yaml
.
Therefore, to enable or disable a module, simply set the corresponding module in default_preset.yaml
to true
or false
. These changes will be applied upon the next launch of Autoware.
A sophisticated methodology is used for path generation, particularly focusing on maneuvers like lane changes and avoidance. At the core of this design is the smooth lateral shifting of the reference path, achieved through a constant-jerk profile. This approach ensures a consistent rate of change in acceleration, facilitating smooth transitions and minimizing abrupt changes in lateral dynamics, crucial for passenger comfort and safety.
The design involves complex mathematical formulations for calculating the lateral shift of the vehicle's path over time. These calculations include determining lateral displacement, velocity, and acceleration, while considering the vehicle's lateral acceleration and velocity limits. This is essential for ensuring that the vehicle's movements remain safe and manageable.
The ShiftLine
struct (as seen here) is utilized to represent points along the path where the lateral shift starts and ends. It includes details like the start and end points in absolute coordinates, the relative shift lengths at these points compared to the reference path, and the associated indexes on the reference path. This struct is integral to managing the path shifts, as it allows the path planner to dynamically adjust the trajectory based on the vehicle's current position and planned maneuver.
Furthermore, the design and its implementation incorporate various equations and mathematical models to calculate essential parameters for the path shift. These include the total distance of the lateral shift, the maximum allowable lateral acceleration and jerk, and the total time required for the shift. Practical considerations are also noted, such as simplifying assumptions in the absence of a specific time interval for most lane change and avoidance cases.
The shifted path generation logic enables the behavior path planner to dynamically generate safe and efficient paths, precisely controlling the vehicle\u2019s lateral movements to ensure the smooth execution of lane changes and avoidance maneuvers. This careful planning and execution adhere to the vehicle's dynamic capabilities and safety constraints, maximizing efficiency and safety in autonomous vehicle navigation.
Note
If you're a math lover, refer to Path Generation Design for the nitty-gritty.
"},{"location":"planning/behavior_path_planner/#collision-assessment-safety-check","title":"Collision Assessment / Safety check","text":"The purpose of the collision assessment function in the Behavior Path Planner is to evaluate the potential for collisions with target objects across all modules. It is utilized in two scenarios:
The safety check process involves several steps. Initially, it obtains the pose of the target object at a specific time, typically through interpolation of the predicted path. It then checks for any overlap between the ego vehicle and the target object at this time. If an overlap is detected, the path is deemed unsafe. The function also identifies which vehicle is in front by using the arc length along the given path. The function operates under the assumption that accurate data on the position, velocity, and shape of both the ego vehicle (the autonomous vehicle) and any target objects are available. It also relies on the yaw angle of each point in the predicted paths of these objects, which is expected to point towards the next path point.
A critical part of the safety check is the calculation of the RSS (Responsibility-Sensitive Safety) distance-inspired algorithm. This algorithm considers factors such as reaction time, safety time margin, and the velocities and decelerations of both vehicles. Extended object polygons are created for both the ego and target vehicles. Notably, the rear object\u2019s polygon is extended by the RSS distance longitudinally and by a lateral margin. The function finally checks for overlap between this extended rear object polygon and the front object polygon. Any overlap indicates a potential unsafe situation.
However, the module does have a limitation concerning the yaw angle of each point in the predicted paths of target objects, which may not always accurately point to the next point, leading to potential inaccuracies in some edge cases.
Note
For further reading on the collision assessment method, please refer to Safety check utils
"},{"location":"planning/behavior_path_planner/#generating-drivable-area","title":"Generating Drivable Area","text":""},{"location":"planning/behavior_path_planner/#static-drivable-area-logic","title":"Static Drivable Area logic","text":"The drivable area is used to determine the area in which the ego vehicle can travel. The primary goal of static drivable area expansion is to ensure safe travel by generating an area that encompasses only the necessary spaces for the vehicle's current behavior, while excluding non-essential areas. For example, while avoidance
module is running, the drivable area includes additional space needed for maneuvers around obstacles, and it limits the behavior by not extending the avoidance path outside of lanelet areas.
Static drivable area expansion operates under assumptions about the correct arrangement of lanes and the coverage of both the front and rear of the vehicle within the left and right boundaries. Key parameters for drivable area generation include extra footprint offsets for the ego vehicle, the handling of dynamic objects, maximum expansion distance, and specific methods for expansion. Additionally, since each module generates its own drivable area, before passing it as the input to generate the next running module's drivable area, or before generating a unified drivable area, the system sorts drivable lanes based on the vehicle's passage order. This ensures the correct definition of the lanes used in drivable area generation.
Note
Further details can is provided in Drivable Area Design.
"},{"location":"planning/behavior_path_planner/#dynamic-drivable-area-logic","title":"Dynamic Drivable Area Logic","text":"Large vehicles require much more space, which sometimes causes them to veer out of their current lane. A typical example being a bus making a turn at a corner. In such cases, relying on a static drivable area is insufficient, since the static method depends on lane information provided by high-definition maps. To overcome the limitations of the static approach, the dynamic drivable area expansion algorithm adjusts the navigable space for an autonomous vehicle in real-time. It conserves computational power by reusing previously calculated path data, updating only when there is a significant change in the vehicle's position. The system evaluates the minimum lane width necessary to accommodate the vehicle's turning radius and other dynamic factors. It then calculates the optimal expansion of the drivable area's boundaries to ensure there is adequate space for safe maneuvering, taking into account the vehicle's path curvature. The rate at which these boundaries can expand or contract is moderated to maintain stability in the vehicle's navigation. The algorithm aims to maximize the drivable space while avoiding fixed obstacles and adhering to legal driving limits. Finally, it applies these boundary adjustments and smooths out the path curvature calculations to ensure a safe and legally compliant navigable path is maintained throughout the vehicle's operation.
Note
The feature can be enabled in the drivable_area_expansion.param.yaml.
"},{"location":"planning/behavior_path_planner/#generating-turn-signal","title":"Generating Turn Signal","text":"The Behavior Path Planner module uses the autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand
to output turn signal commands (see TurnIndicatorsCommand.idl). The system evaluates the driving context and determines when to activate turn signals based on its maneuver planning\u2014like turning, lane changing, or obstacle avoidance.
Within this framework, the system differentiates between desired and required blinker activations. Desired activations are those recommended by traffic laws for typical driving scenarios, such as signaling before a lane change or turn. Required activations are those that are deemed mandatory for safety reasons, like signaling an abrupt lane change to avoid an obstacle.
The TurnIndicatorsCommand
message structure has a command field that can take one of several constants: NO_COMMAND
indicates no signal is necessary, DISABLE
to deactivate signals, ENABLE_LEFT
to signal a left turn, and ENABLE_RIGHT
to signal a right turn. The Behavior Path Planner sends these commands at the appropriate times, based on its rules-based system that considers both the desired and required scenarios for blinker activation.
Note
For more in-depth information, refer to Turn Signal Design document.
"},{"location":"planning/behavior_path_planner/#rerouting","title":"Rerouting","text":"Warning
Rerouting is a feature that was still under progress. Further information will be included on a later date.
"},{"location":"planning/behavior_path_planner/#parameters-and-configuration","title":"Parameters and Configuration","text":"The configuration files are organized in a hierarchical directory structure for ease of navigation and management. Each subdirectory contains specific configuration files relevant to its module. The root directory holds general configuration files that apply to the overall behavior of the planner. The following is an overview of the directory structure with the respective configuration files.
behavior_path_planner\n\u251c\u2500\u2500 behavior_path_planner.param.yaml\n\u251c\u2500\u2500 drivable_area_expansion.param.yaml\n\u251c\u2500\u2500 scene_module_manager.param.yaml\n\u251c\u2500\u2500 avoidance\n\u2502 \u2514\u2500\u2500 avoidance.param.yaml\n\u251c\u2500\u2500 avoidance_by_lc\n\u2502 \u2514\u2500\u2500 avoidance_by_lc.param.yaml\n\u251c\u2500\u2500 dynamic_avoidance\n\u2502 \u2514\u2500\u2500 dynamic_avoidance.param.yaml\n\u251c\u2500\u2500 goal_planner\n\u2502 \u2514\u2500\u2500 goal_planner.param.yaml\n\u251c\u2500\u2500 lane_change\n\u2502 \u2514\u2500\u2500 lane_change.param.yaml\n\u251c\u2500\u2500 side_shift\n\u2502 \u2514\u2500\u2500 side_shift.param.yaml\n\u2514\u2500\u2500 start_planner\n \u2514\u2500\u2500 start_planner.param.yaml\n
Similarly, the common directory contains configuration files that are used across various modules, providing shared parameters and settings essential for the functioning of the Behavior Path Planner:
common\n\u251c\u2500\u2500 common.param.yaml\n\u251c\u2500\u2500 costmap_generator.param.yaml\n\u2514\u2500\u2500 nearest_search.param.yaml\n
The preset directory contains the configurations for managing the operational state of various modules. It includes the default_preset.yaml file, which specifically caters to enabling and disabling modules within the system.
preset\n\u2514\u2500\u2500 default_preset.yaml\n
"},{"location":"planning/behavior_path_planner/#limitations-future-work","title":"Limitations & Future Work","text":"Warning
Under Construction
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_limitations/","title":"Limitations","text":""},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_limitations/#limitations","title":"Limitations","text":"The document describes the limitations that are currently present in the behavior_path_planner
module.
The following items (but not limited to) fall in the scope of limitation:
To fully utilize the Lanelet2
's API, the design of the vector map (.osm
) needs to follow all the criteria described in Lanelet2
documentation. Specifically, in the case of 2 or more lanes, the Linestrings that divide the current lane with the opposite/adjacent lane need to have a matching Linestring ID
. Assume the following ideal case.
In the image, Linestring ID51
is shared by Lanelet A
and Lanelet B
. Hence we can directly use the available left
, adjacentLeft
, right
, adjacentRight
and findUsages
method within Lanelet2
's API to directly query the direction and opposite lane availability.
const auto right_lane = routing_graph_ptr_->right(lanelet);\nconst auto adjacent_right_lane = routing_graph_ptr_->adjacentRight(lanelet);\nconst auto opposite_right_lane = lanelet_map_ptr_->laneletLayer.findUsages(lanelet.rightBound().invert());\n
The following images show the situation where these API does not work directly. This means that we cannot use them straight away, and several assumptions and logical instruction are needed to make these APIs work.
In this example (multiple linestring issues), Lanelet C
contains Linestring ID61
and ID62
, while Lanelet D
contains Linestring ID63
and ID 64
. Although the Linestring ID62
and ID64
have identical point IDs and seem visually connected, the API will treat these Linestring as though they are separated. When it searches for any Lanelet
that is connected via Linestring ID62
, it will return NULL
, since ID62
only connects to Lanelet C
and to no other Lanelet
.
Although, in this case, it is possible to forcefully search the lanelet availability by checking the lanelet that contains the points, usinggetLaneletFromPoint
method. But, the implementation requires complex rules for it to work. Take the following images as an example.
Assume Object X
is in Lanelet F
. We can forcefully search Lanelet E
via Point 7
, and it will work if Point 7
is utilized by only 2 lanelet. However, the complexity increases when we want to start searching for the direction of the opposite lane. We can infer the direction of the lanelet by using mathematical operations (dot product of vector V_ID72
(Point 6
minus Point 9
), and V_ID74
(Point 7
minus Point 8
). But, notice that we did not use Point 7 in V_ID72. This is because searching it requires an iteration, adding additional non-beneficial computation.
Suppose the points are used by more than 2 lanelets. In that case, we have to find the differences for all lanelet, and the result might be undefined. The reason is that the differences between the coordinates do not reflect the actual shape of the lanelet. The following image demonstrates this point.
There are many other available solutions to try. However, further attempt to solve this might cause issues in the future, especially for maintaining or scaling up the software.
In conclusion, the multiple Linestring issues will not be supported. Covering these scenarios might give the user an \"everything is possible\" impression. This is dangerous since any attempt to create a non-standardized vector map is not compliant with safety regulations.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_limitations/#limitation-avoidance-at-corners-and-intersections","title":"Limitation: Avoidance at Corners and Intersections","text":"Currently, the implementation doesn't cover avoidance at corners and intersections. The reason is similar to here. However, this case can still be supported in the future (assuming the vector map is defined correctly).
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_limitations/#limitation-chattering-shifts","title":"Limitation: Chattering shifts","text":"There are possibilities that the shifted path chatters as a result of various factors. For example, bounded box shape or position from the perception input. Sometimes, it is difficult for the perception to get complete information about the object's size. As the object size is updated, the object length will also be updated. This might cause shifts point to be re-calculated, therefore resulting in chattering shift points.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/","title":"Manager design","text":""},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#manager-design","title":"Manager design","text":""},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#purpose-role","title":"Purpose / Role","text":"The manager launches and executes scene modules in behavior_path_planner
depending on the use case, and has been developed to achieve following features:
Movie
Support status:
Name Simple exclusive execution Advanced simultaneous execution Avoidance Avoidance By Lane Change Lane Change External Lane Change Goal Planner (without goal modification) Goal Planner (with goal modification) Pull Out Side ShiftClick here for supported scene modules.
Warning
It is still under development and some functions may be unstable.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#overview","title":"Overview","text":"The manager is the core part of the behavior_path_planner
implementation. It outputs path based on the latest data.
The manager has sub-managers for each scene module, and its main task is
Additionally, the manager generates root reference path, and if any other modules don't request execution, the path is used as the planning result of behavior_path_planner
.
The sub-manager's main task is
registered_modules_
.registered_modules_
.sub-managers
Sub-manager is registered on the manager with the following function.
/**\n * @brief register managers.\n * @param manager pointer.\n */\nvoid registerSceneModuleManager(const SceneModuleManagerPtr & manager_ptr)\n{\nRCLCPP_INFO(logger_, \"register %s module\", manager_ptr->getModuleName().c_str());\nmanager_ptrs_.push_back(manager_ptr);\nprocessing_time_.emplace(manager_ptr->getModuleName(), 0.0);\n}\n
Code is here
Sub-manager has the following parameters that are needed by the manager to manage the launched modules, and these parameters can be set for each module.
struct ModuleConfigParameters\n{\nbool enable_module{false};\nbool enable_rtc{false};\nbool enable_simultaneous_execution_as_approved_module{false};\nbool enable_simultaneous_execution_as_candidate_module{false};\nuint8_t priority{0};\nuint8_t max_module_size{0};\n};\n
Code is here
Name Type Descriptionenable_module
bool if true, the sub-manager is registered on the manager. enable_rtc
bool if true, the scene modules should be approved by (request to cooperate)rtc function. if false, the module can be run without approval from rtc. enable_simultaneous_execution_as_candidate_module
bool if true, the manager allows its scene modules to run with other scene modules as candidate module. enable_simultaneous_execution_as_approved_module
bool if true, the manager allows its scene modules to run with other scene modules as approved module. priority
uint8_t the manager decides execution priority based on this parameter. The smaller the number is, the higher the priority is. max_module_size
uint8_t the sub-manager can run some modules simultaneously. this parameter set the maximum number of the launched modules."},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#scene-modules","title":"Scene modules","text":"Scene modules receives necessary data and RTC command, and outputs candidate path(s), reference path and RTC cooperate status. When multiple modules run in series, the output of the previous module is received as input and the information is used to generate a new modified path, as shown in the following figure. And, when one module is running alone, it receives a reference path generated from the centerline of the lane in which Ego is currently driving as previous module output.
scene module I/O Type Description IN
behavior_path_planner::BehaviorModuleOutput
previous module output. contains data necessary for path planning. IN behavior_path_planner::PlannerData
contains data necessary for path planning. IN tier4_planning_msgs::srv::CooperateCommands
contains approval data for scene module's path modification. (details) OUT behavior_path_planner::BehaviorModuleOutput
contains modified path, turn signal information, etc... OUT tier4_planning_msgs::msg::CooperateStatus
contains RTC cooperate status. (details) OUT autoware_auto_planning_msgs::msg::Path
candidate path output by a module that has not received approval for path change. when it approved, the ego's following path is switched to this path. (just for visualization) OUT autoware_auto_planning_msgs::msg::Path
reference path generated from the centerline of the lane the ego is going to follow. (just for visualization) OUT visualization_msgs::msg::MarkerArray
virtual wall, debug info, etc... Scene modules running on the manager are stored on the candidate modules stack or approved modules stack depending on the condition whether the path modification has been approved or not.
Stack Approval condition Description candidate modules Not approved The candidate modules whose modified path has not been approved by RTC is stored in vectorcandidate_module_ptrs_
in the manager. The candidate modules stack is updated in the following order. 1. The manager selects only those modules that can be executed based on the configuration of the sub-manager whose scene module requests execution. 2. Determines the execution priority. 3. Executes them as candidate module. All of these modules receive the decided (approved) path from approved modules stack and RUN in PARALLEL. approved modules Already approved When the path modification is approved via RTC commands, the manager moves the candidate module to approved modules stack. These modules are stored in approved_module_ptrs_
. In this stack, all scene modules RUN in SERIES."},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#process-flow","title":"Process flow","text":"There are 6 steps in one process:
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step1","title":"Step1","text":"At first, the manager set latest planner data, and run all approved modules and get output path. At this time, the manager checks module status and removes expired modules from approved modules stack.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step2","title":"Step2","text":"Input approved modules output and necessary data to all registered modules, and the modules judge the necessity of path modification based on it. The manager checks which module makes execution request.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step3","title":"Step3","text":"Check request module existence.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step4","title":"Step4","text":"The manager decides which module to execute as candidate modules from the modules that requested to execute path modification.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step5","title":"Step5","text":"Decides the priority order of execution among candidate modules. And, run all candidate modules. Each modules outputs reference path and RTC cooperate status.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step6","title":"Step6","text":"Move approved module to approved modules stack from candidate modules stack.
and, within a single planning cycle, these steps are repeated until the following conditions are satisfied.
while (rclcpp::ok()) {\n/**\n * STEP1: get approved modules' output\n */\nconst auto approved_modules_output = runApprovedModules(data);\n\n/**\n * STEP2: check modules that need to be launched\n */\nconst auto request_modules = getRequestModules(approved_modules_output);\n\n/**\n * STEP3: if there is no module that need to be launched, return approved modules' output\n */\nif (request_modules.empty()) {\nprocessing_time_.at(\"total_time\") = stop_watch_.toc(\"total_time\", true);\nreturn approved_modules_output;\n}\n\n/**\n * STEP4: if there is module that should be launched, execute the module\n */\nconst auto [highest_priority_module, candidate_modules_output] =\nrunRequestModules(request_modules, data, approved_modules_output);\nif (!highest_priority_module) {\nprocessing_time_.at(\"total_time\") = stop_watch_.toc(\"total_time\", true);\nreturn approved_modules_output;\n}\n\n/**\n * STEP5: if the candidate module's modification is NOT approved yet, return the result.\n * NOTE: the result is output of the candidate module, but the output path don't contains path\n * shape modification that needs approval. On the other hand, it could include velocity profile\n * modification.\n */\nif (highest_priority_module->isWaitingApproval()) {\nprocessing_time_.at(\"total_time\") = stop_watch_.toc(\"total_time\", true);\nreturn candidate_modules_output;\n}\n\n/**\n * STEP6: if the candidate module is approved, push the module into approved_module_ptrs_\n */\naddApprovedModule(highest_priority_module);\nclearCandidateModules();\n}\n
Code is here
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#priority-of-execution-request","title":"Priority of execution request","text":"Compare priorities parameter among sub-managers to determine the order of execution based on config. Therefore, the priority between sub-modules does NOT change at runtime.
/**\n * @brief swap the modules order based on it's priority.\n * @param modules.\n * @details for now, the priority is decided in config file and doesn't change runtime.\n */\nvoid sortByPriority(std::vector<SceneModulePtr> & modules) const\n{\n// TODO(someone) enhance this priority decision method.\nstd::sort(modules.begin(), modules.end(), [this](auto a, auto b) {\nreturn getManager(a)->getPriority() < getManager(b)->getPriority();\n});\n}\n
Code is here
In the future, however, we are considering having the priorities change dynamically depending on the situation in order to achieve more complex use cases.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#how-to-decide-which-request-modules-to-run","title":"How to decide which request modules to run?","text":"On this manager, it is possible that multiple scene modules may request path modification at same time. In that case, the modules to be executed as candidate module is determined in the following order.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step1_1","title":"Step1","text":"Push back the modules that make a request to request_modules
.
Check approved modules stack, and remove non-executable modules fromrequest_modules
based on the following condition.
enable_simultaneous_execution_as_approved_module
is true
).Executable or not:
Condition A Condition B Condition C Executable as candidate modules? YES - YES YES YES - NO YES NO YES YES YES NO YES NO NO NO NO YES NO NO NO NO NOIf a module that doesn't support simultaneous execution exists in approved modules stack (NOT satisfy Condition B), no more modules can be added to the stack, and therefore none of the modules can be executed as candidate.
For example, if approved module's setting of enable_simultaneous_execution_as_approved_module
is ENABLE, then only modules whose the setting is ENABLE proceed to the next step.
Other examples:
Process Description If approved modules stack is empty, then all request modules proceed to the next step, regardless of the setting ofenable_simultaneous_execution_as_approved_module
. If approved module's setting of enable_simultaneous_execution_as_approved_module
is DISABLE, then all request modules are discarded."},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step3_1","title":"Step3","text":"Sort request_modules
by priority.
Check and pick up executable modules as candidate in order of priority based on the following conditions.
enable_simultaneous_execution_as_candidate_module
is true
).Executable or not:
Condition A Condition B Condition C Executable as candidate modules? YES - YES YES YES - NO YES NO YES YES YES NO YES NO NO NO NO YES NO NO NO NO NOFor example, if the highest priority module's setting of enable_simultaneous_execution_as_candidate_module
is DISABLE, then all modules after the second priority are discarded.
Other examples:
Process Description If a module with a higher priority exists, lower priority modules whose setting ofenable_simultaneous_execution_as_candidate_module
is DISABLE are discarded. If all modules' setting of enable_simultaneous_execution_as_candidate_module
is ENABLE, then all modules proceed to the next step."},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step5_1","title":"Step5","text":"Run all candidate modules.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#how-to-decide-which-modules-output-to-use","title":"How to decide which module's output to use?","text":"Sometimes, multiple candidate modules are running simultaneously.
In this case, the manager selects a candidate modules which output path is used as behavior_path_planner
output by approval condition in the following rules.
priority
), approved modules always have a higher priority than unapproved modules.Note
The smaller the number is, the higher the priority is.
module priority
Additionally, the manager moves the highest priority module to approved modules stack if it is already approved.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#scene-module-unregister-process","title":"Scene module unregister process","text":"The manager removes expired module in approved modules stack based on the module's status.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#waiting-approval-modules","title":"Waiting approval modules","text":"If one module requests multiple path changes, the module may be back to waiting approval condition again. In this case, the manager moves the module to candidate modules stack. If there are some modules that was pushed back to approved modules stack later than the waiting approved module, it is also removed from approved modules stack.
This is because module C is planning output path with the output of module B as input, and if module B is removed from approved modules stack and the input of module C changes, the output path of module C may also change greatly, and the output path will be unstable.
As a result, the module A's output is used as approved modules stack.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#failure-modules","title":"Failure modules","text":"The failure modules return the status ModuleStatus::FAILURE
. The manager removes the module from approved modules stack as well as waiting approval modules, but the failure module is not moved to candidate modules stack.
As a result, the module A's output is used as approved modules stack.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#succeeded-modules","title":"Succeeded modules","text":"The succeeded modules return the status ModuleStatus::SUCCESS
. The manager removes those modules based on Last In First Out policy. In other words, if a module added later to approved modules stack is still running (is in ModuleStatus::RUNNING
), the manager doesn't remove the succeeded module. The reason for this is the same as in removal for waiting approval modules, and is to prevent sudden changes of the running module's output.
As an exception, if Lane Change module returns status ModuleStatus::SUCCESS
, the manager doesn't remove any modules until all modules is in status ModuleStatus::SUCCESS
. This is because when the manager removes the Lane Change (normal LC, external LC, avoidance by LC) module as succeeded module, the manager updates the information of the lane Ego is currently driving in, so root reference path (= module A's input path) changes significantly at that moment.
When the manager removes succeeded modules, the last added module's output is used as approved modules stack.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#reference-path-generation","title":"Reference path generation","text":"The root reference path is generated from the centerline of the lanelet sequence that obtained from the root lanelet, and it is not only used as an input to the first added module of approved modules stack, but also used as the output of behavior_path_planner
if none of the modules are running.
root reference path generation
The root lanelet is the closest lanelet within the route, and the update timing is based on Ego's operation mode state.
OperationModeState::AUTONOMOUS
: Update only when the ego moves to right or left lane by lane change module.OperationModeState::AUTONOMOUS
: Update at the beginning of every planning cycle.The manager needs to know the ego behavior and then generate a root reference path from the lanes that Ego should follow.
For example, during autonomous driving, even if Ego moves into the next lane in order to avoid a parked vehicle, the target lanes that Ego should follow will NOT change because Ego will return to the original lane after the avoidance maneuver. Therefore, the manager does NOT update root lanelet even if the avoidance maneuver is finished.
On the other hand, if the lane change is successful, the manager updates root lanelet because the lane that Ego should follow changes.
In addition, while manual driving, the manager always updates root lanelet because the pilot may move to an adjacent lane regardless of the decision of the autonomous driving system.
/**\n * @brief get reference path from root_lanelet_ centerline.\n * @param planner data.\n * @return reference path.\n */\nBehaviorModuleOutput getReferencePath(const std::shared_ptr<PlannerData> & data) const\n{\nconst auto & route_handler = data->route_handler;\nconst auto & pose = data->self_odometry->pose.pose;\nconst auto p = data->parameters;\n\nconstexpr double extra_margin = 10.0;\nconst auto backward_length =\nstd::max(p.backward_path_length, p.backward_path_length + extra_margin);\n\nconst auto lanelet_sequence = route_handler->getLaneletSequence(\nroot_lanelet_.value(), pose, backward_length, std::numeric_limits<double>::max());\n\nlanelet::ConstLanelet closest_lane{};\nif (lanelet::utils::query::getClosestLaneletWithConstrains(\nlanelet_sequence, pose, &closest_lane, p.ego_nearest_dist_threshold,\np.ego_nearest_yaw_threshold)) {\nreturn utils::getReferencePath(closest_lane, data);\n}\n\nif (lanelet::utils::query::getClosestLanelet(lanelet_sequence, pose, &closest_lane)) {\nreturn utils::getReferencePath(closest_lane, data);\n}\n\nreturn {}; // something wrong.\n}\n
Code is here
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#drivable-area-generation","title":"Drivable area generation","text":"Warning
Under Construction
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#turn-signal-management","title":"Turn signal management","text":"Warning
Under Construction
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/","title":"Drivable Area design","text":""},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-area-design","title":"Drivable Area design","text":"Drivable Area represents the area where ego vehicle can pass.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#purpose-role","title":"Purpose / Role","text":"In order to defined the area that ego vehicle can travel safely, we generate drivable area in behavior path planner module. Our drivable area is represented by two line strings, which are left_bound
line and right_bound
line respectively. Both left_bound
and right_bound
are created from left and right boundaries of lanelets. Note that left_bound
and right bound
are generated by generateDrivableArea
function.
Our drivable area has several assumptions.
follow lane
mode, drivable area should not contain adjacent lanes.Currently, when clipping left bound or right bound, it can clip the bound more than necessary and the generated path might be conservative.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#parameters-for-drivable-area-generation","title":"Parameters for drivable area generation","text":""},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#static-expansion","title":"Static expansion","text":"Name Unit Type Description Default value drivable_area_right_bound_offset [m] double right offset length to expand drivable area 5.0 drivable_area_left_bound_offset [m] double left offset length to expand drivable area 5.0 drivable_area_types_to_skip [-] string linestring types (as defined in the lanelet map) that will not be expanded road_border"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#dynamic-expansion","title":"Dynamic expansion","text":"Name Unit Type Description Default value enabled [-] boolean if true, dynamically expand the drivable area based on the path curvature true print_runtime [-] boolean if true, runtime is logged by the node true max_expansion_distance [m] double maximum distance by which the original drivable area can be expanded (no limit if set to 0) 0.0 smoothing.curvature_average_window [-] int window size used for smoothing the curvatures using a moving window average 3 smoothing.max_bound_rate [m/m] double maximum rate of change of the bound lateral distance over its arc length 1.0 smoothing.arc_length_range [m] double arc length range where an expansion distance is initially applied 2.0 ego.extra_wheel_base [m] double extra ego wheelbase 0.0 ego.extra_front_overhang [m] double extra ego overhang 0.5 ego.extra_width [m] double extra ego width 1.0 dynamic_objects.avoid [-] boolean if true, the drivable area is not expanded in the predicted path of dynamic objects true dynamic_objects.extra_footprint_offset.front [m] double extra length to add to the front of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.rear [m] double extra length to add to the rear of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.left [m] double extra length to add to the left of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.right [m] double extra length to add to the rear of the ego footprint 0.5 path_preprocessing.max_arc_length [m] double maximum arc length along the path where the ego footprint is projected (0.0 means no limit) 100.0 path_preprocessing.resample_interval [m] double fixed interval between resampled path points (0.0 means path points are directly used) 2.0 path_preprocessing.reuse_max_deviation [m] double if the path changes by more than this value, the curvatures are recalculated. Otherwise they are reused 0.5 avoid_linestring.types [-] string array linestring types in the lanelet maps that will not be crossed when expanding the drivable area [\"road_border\", \"curbstone\"] avoid_linestring.distance [m] double distance to keep between the drivable area and the linestrings to avoid 0.0"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"This section gives details of the generation of the drivable area (left_bound
and right_bound
).
Before generating drivable areas, drivable lanes need to be sorted. Drivable Lanes are selected in each module (Lane Follow
, Avoidance
, Lane Change
, Goal Planner
, Pull Out
and etc.), so more details about selection of drivable lanes can be found in each module's document. We use the following structure to define the drivable lanes.
struct DrivalbleLanes\n{\nlanelet::ConstLanelet right_lanelet; // right most lane\nlanelet::ConstLanelet left_lanelet; // left most lane\nlanelet::ConstLanelets middle_lanelets; // middle lanes\n};\n
The image of the sorted drivable lanes is depicted in the following picture.
Note that, the order of drivable lanes become
drivable_lanes = {DrivableLane1, DrivableLanes2, DrivableLanes3, DrivableLanes4, DrivableLanes5}\n
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-area-generation","title":"Drivable Area Generation","text":"In this section, a drivable area is created using drivable lanes arranged in the order in which vehicles pass by. We created left_bound
from left boundary of the leftmost lanelet and right_bound
from right boundary of the rightmost lanelet. The image of the created drivable area will be the following blue lines. Note that the drivable area is defined in the Path
and PathWithLaneId
messages as
std::vector<geometry_msgs::msg::Point> left_bound;\nstd::vector<geometry_msgs::msg::Point> right_bound;\n
and each point of right bound and left bound has a position in the absolute coordinate system.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-area-expansion","title":"Drivable Area Expansion","text":""},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#static-expansion_1","title":"Static Expansion","text":"Each module can statically expand the left and right bounds of the target lanes by the parameter defined values. This enables large vehicles to pass narrow curve. The image of this process can be described as
Note that we only expand right bound of the rightmost lane and left bound of the leftmost lane.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#dynamic-expansion_1","title":"Dynamic Expansion","text":"The drivable area can also be expanded dynamically based on a minimum width calculated from the path curvature and the ego vehicle's properties. If static expansion is also enabled, the dynamic expansion will be done after the static expansion such that both expansions are applied.
Without dynamic expansion With dynamic expansionNext we detail the algorithm used to expand the drivable area bounds.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#1-calculate-and-smooth-the-path-curvature","title":"1 Calculate and smooth the path curvature","text":"To avoid sudden changes of the dynamically expanded drivable area, we first try to reuse as much of the previous path and its calculated curvatures as possible. Previous path points and curvatures are reused up to the first previous path point that deviates from the new path by more than the reuse_max_deviation
parameter. At this stage, the path is also resampled according to the resampled_interval
and cropped according to the max_arc_length
. With the resulting preprocessed path points and previous curvatures, curvatures of the new path points are calculated using the 3 points method and smoothed using a moving window average with window size curvature_average_window
.
Each path point is projected on the original left and right drivable area bounds to calculate its corresponding bound index, original distance from the bounds, and the projected point. Additionally, for each path point, the minimum drivable area width is calculated using the following equation: Where \\(W\\) is the minimum drivable area width, \\(a\\), is the front overhang of ego, \\(l\\) is the wheelbase of ego, \\(w\\) is the width of ego, and \\(k\\) is the path curvature. This equation was derived from the work of Lim, H., Kim, C., and Jo, A., \"Model Predictive Control-Based Lateral Control of Autonomous Large-Size Bus on Road with Large Curvature,\" SAE Technical Paper 2021-01-0099, 2021.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#3-calculate-maximum-expansion-distances-of-each-bound-point-based-on-dynamic-objects-and-linestring-of-the-vector-map-optional","title":"3 Calculate maximum expansion distances of each bound point based on dynamic objects and linestring of the vector map (optional)","text":"For each drivable area bound point, we calculate its maximum expansion distance as its distance to the closest \"obstacle\" (either a map linestring with type avoid_linestrings.type
, or a dynamic object footprint if dynamic_objects.avoid
is set to true
). If max_expansion_distance
is not 0.0
, it is use here if smaller than the distance to the closest obstacle.
For each bound point, a shift distance is calculated. such that the resulting width between corresponding left and right bound points is as close as possible to the minimum width calculated in step 2 but the individual shift distance stays bellow the previously calculated maximum expansion distance.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#5-shift-bound-points-by-the-values-calculated-in-step-4-and-remove-all-loops-in-the-resulting-bound","title":"5 Shift bound points by the values calculated in step 4 and remove all loops in the resulting bound","text":"Finally, each bound point is shifted away from the path by the distance calculated in step 4. Once all points have been shifted, loops are removed from the bound and we obtain our final expanded drivable area.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#visualizing-maximum-drivable-area-debug","title":"Visualizing maximum drivable area (Debug)","text":"Sometimes, the developers might get a different result between two maps that may look identical during visual inspection.
For example, in the same area, one can perform avoidance and another cannot. This might be related to the maximum drivable area issues due to the non-compliance vector map design from the user.
To debug the issue, the maximum drivable area boundary can be visualized.
The maximum drivable area can be visualize by adding the marker from /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/maximum_drivable_area
If the hatched road markings area is defined in the lanelet map, the area can be used as a drivable area. Since the area is expressed as a polygon format of Lanelet2, several steps are required for correct expansion.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/","title":"Path Generation design","text":""},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#path-generation-design","title":"Path Generation design","text":"This document explains how the path is generated for lane change and avoidance, etc. The implementation can be found in path_shifter.hpp.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#overview","title":"Overview","text":"The base idea of the path generation in lane change and avoidance is to smoothly shift the reference path, such as the center line, in the lateral direction. This is achieved by using a constant-jerk profile as in the figure below. More details on how it is used can be found in README. It is assumed that the reference path is smooth enough for this algorithm.
The figure below explains how the application of a constant lateral jerk \\(l^{'''}(s)\\) can be used to induce lateral shifting. In order to comply with the limits on lateral acceleration and velocity, zero-jerk time is employed in the figure ( \\(T_a\\) and \\(T_v\\) ). In each interval where constant jerk is applied, the shift position \\(l(s)\\) can be characterized by a third-degree polynomial. Therefore the shift length from the reference path can then be calculated by combining spline curves.
Note that, due to the rarity of the \\(T_v\\) in almost all cases of lane change and avoidance, \\(T_v\\) is not considered in the current implementation.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#mathematical-derivation","title":"Mathematical Derivation","text":"With initial longitudinal velocity \\(v_0^{\\rm lon}\\) and longitudinal acceleration \\(a^{\\rm lon}\\), longitudinal position \\(s(t)\\) and longitudinal velocity at each time \\(v^{\\rm lon}(t)\\) can be derived as:
\\[ \\begin{align} s_1&= v^{\\rm lon}_0 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_1&= v^{\\rm lon}_0 + a^{\\rm lon} T_j \\\\ s_2&= v^{\\rm lon}_1 T_a + \\frac{1}{2} a^{\\rm lon} T_a^2 \\\\ v_2&= v^{\\rm lon}_1 + a^{\\rm lon} T_a \\\\ s_3&= v^{\\rm lon}_2 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_3&= v^{\\rm lon}_2 + a^{\\rm lon} T_j \\\\ s_4&= v^{\\rm lon}_3 T_v + \\frac{1}{2} a^{\\rm lon} T_v^2 \\\\ v_4&= v^{\\rm lon}_3 + a^{\\rm lon} T_v \\\\ s_5&= v^{\\rm lon}_4 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_5&= v^{\\rm lon}_4 + a^{\\rm lon} T_j \\\\ s_6&= v^{\\rm lon}_5 T_a + \\frac{1}{2} a^{\\rm lon} T_a^2 \\\\ v_6&= v^{\\rm lon}_5 + a^{\\rm lon} T_a \\\\ s_7&= v^{\\rm lon}_6 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_7&= v^{\\rm lon}_6 + a^{\\rm lon} T_j \\end{align} \\]By applying simple integral operations, the following analytical equations can be derived to describe the shift distance \\(l(t)\\) at each time under lateral jerk, lateral acceleration, and velocity constraints.
\\[ \\begin{align} l_1&= \\frac{1}{6}jT_j^3\\\\[10pt] l_2&= \\frac{1}{6}j T_j^3 + \\frac{1}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j\\\\[10pt] l_3&= j T_j^3 + \\frac{3}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j\\\\[10pt] l_4&= j T_j^3 + \\frac{3}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j + j(T_a + T_j)T_j T_v\\\\[10pt] l_5&= \\frac{11}{6} j T_j^3 + \\frac{5}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j + j(T_a + T_j)T_j T_v \\\\[10pt] l_6&= \\frac{11}{6} j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j + j(T_a + T_j)T_j T_v\\\\[10pt] l_7&= 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j + j(T_a + T_j)T_j T_v \\end{align} \\]These equations are used to determine the shape of a path. Additionally, by applying further mathematical operations to these basic equations, the expressions of the following subsections can be derived.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#calculation-of-maximum-acceleration-from-transition-time-and-final-shift-length","title":"Calculation of Maximum Acceleration from transition time and final shift length","text":"In the case where there are no limitations on lateral velocity and lateral acceleration, the maximum lateral acceleration during the shifting can be calculated as follows. The constant-jerk time is given by \\(T_j = T_{\\rm total}/4\\) because of its symmetric property. Since \\(T_a=T_v=0\\), the final shift length \\(L=l_7=2jT_j^3\\) can be determined using the above equation. The maximum lateral acceleration is then given by \\(a_{\\rm max} =jT_j\\). This results in the following expression for the maximum lateral acceleration:
\\[ \\begin{align} a_{\\rm max}^{\\rm lat} = \\frac{8L}{T_{\\rm total}^2} \\end{align} \\]"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#calculation-of-ta-tj-and-jerk-from-acceleration-limit","title":"Calculation of Ta, Tj and jerk from acceleration limit","text":"In the case where there are no limitations on lateral velocity, the constant-jerk and acceleration times, as well as the required jerk can be calculated from the acceleration limit, total time, and final shift length as follows. Since \\(T_v=0\\), the final shift length is given by:
\\[ \\begin{align} L = l_7 = 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j \\end{align} \\]Additionally, the velocity profile reveals the following relations:
\\[ \\begin{align} a_{\\rm lim}^{\\rm lat} &= j T_j\\\\ T_{\\rm total} &= 4T_j + 2T_a \\end{align} \\]By solving these three equations, the following can be obtained:
\\[ \\begin{align} T_j&=\\frac{T_{\\rm total}}{2} - \\frac{2L}{a_{\\rm lim}^{\\rm lat} T_{\\rm total}}\\\\[10pt] T_a&=\\frac{4L}{a_{\\rm lim}^{\\rm lat} T_{\\rm total}} - \\frac{T_{\\rm total}}{2}\\\\[10pt] jerk&=\\frac{2a_{\\rm lim} ^2T_{\\rm total}}{a_{\\rm lim}^{\\rm lat} T_{\\rm total}^2-4L} \\end{align} \\]where \\(T_j\\) is the constant-jerk time, \\(T_a\\) is the constant acceleration time, \\(j\\) is the required jerk, \\(a_{\\rm lim}^{\\rm lat}\\) is the lateral acceleration limit, and \\(L\\) is the final shift length.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#calculation-of-required-time-from-jerk-and-acceleration-constraint","title":"Calculation of Required Time from Jerk and Acceleration Constraint","text":"In the case where there are no limitations on lateral velocity, the total time required for shifting can be calculated from the lateral jerk and lateral acceleration limits and the final shift length as follows. By solving the two equations given above:
\\[ L = l_7 = 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j,\\quad a_{\\rm lim}^{\\rm lat} = j T_j \\]we obtain the following expressions:
\\[ \\begin{align} T_j &= \\frac{a_{\\rm lim}^{\\rm lat}}{j}\\\\[10pt] T_a &= \\frac{1}{2}\\sqrt{\\frac{a_{\\rm lim}^{\\rm lat}}{j}^2 + \\frac{4L}{a_{\\rm lim}^{\\rm lat}}} - \\frac{3a_{\\rm lim}^{\\rm lat}}{2j} \\end{align} \\]The total time required for shifting can then be calculated as \\(T_{\\rm total}=4T_j+2T_a\\).
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#limitation","title":"Limitation","text":"Safety check function checks if the given path will collide with a given target object.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#purpose-role","title":"Purpose / Role","text":"In the behavior path planner, certain modules (e.g., lane change) need to perform collision checks to ensure the safe navigation of the ego vehicle. These utility functions assist the user in conducting safety checks with other road participants.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#assumptions","title":"Assumptions","text":"The safety check module is based on the following assumptions:
Currently the yaw angle of each point of predicted paths of a target object does not point to the next point. Therefore, the safety check function might returns incorrect result for some edge case.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#inner-working-algorithm","title":"Inner working / Algorithm","text":"The flow of the safety check algorithm is described in the following explanations.
Here we explain each step of the algorithm flow.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#1-get-pose-of-the-target-object-at-a-given-time","title":"1. Get pose of the target object at a given time","text":"For the first step, we obtain the pose of the target object at a given time. This can be done by interpolating the predicted path of the object.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#2-check-overlap","title":"2. Check overlap","text":"With the interpolated pose obtained in the step.1, we check if the object and ego vehicle overlaps at a given time. If they are overlapped each other, the given path is unsafe.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#3-get-front-object","title":"3. Get front object","text":"After the overlap check, it starts to perform the safety check for the broader range. In this step, it judges if ego or target object is in front of the other vehicle. We use arc length of the front point of each object along the given path to judge which one is in front of the other. In the following example, target object (red rectangle) is running in front of the ego vehicle (black rectangle).
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#4-calculate-rss-distance","title":"4. Calculate RSS distance","text":"After we find which vehicle is running ahead of the other vehicle, we start to compute the RSS distance. With the reaction time \\(t_{reaction}\\) and safety time margin \\(t_{margin}\\), RSS distance can be described as:
\\[ rss_{dist} = v_{rear} (t_{reaction} + t_{margin}) + \\frac{v_{rear}^2}{2|a_{rear, decel}|} - \\frac{v_{front}^2}{2|a_{front, decel|}} \\]where \\(V_{front}\\), \\(v_{rear}\\) are front and rear vehicle velocity respectively and \\(a_{rear, front}\\), \\(a_{rear, decel}\\) are front and rear vehicle deceleration.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#5-create-extended-ego-and-target-object-polygons","title":"5. Create extended ego and target object polygons","text":"In this step, we compute extended ego and target object polygons. The extended polygons can be described as:
As the picture shows, we expand the rear object polygon. For the longitudinal side, we extend it with the RSS distance, and for the lateral side, we extend it by the lateral margin
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#6-check-overlap","title":"6. Check overlap","text":"Similar to the previous step, we check the overlap of the extended rear object polygon and front object polygon. If they are overlapped each other, we regard it as the unsafe situation.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/","title":"Turn Signal design","text":""},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#turn-signal-design","title":"Turn Signal design","text":"Turn Signal decider determines necessary blinkers.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#purpose-role","title":"Purpose / Role","text":"This module is responsible for activating a necessary blinker during driving. It uses rule-based algorithm to determine blinkers, and the details of this algorithm are described in the following sections. Note that this algorithm is strictly based on the Japanese Road Traffic Raw.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#assumptions","title":"Assumptions","text":"Autoware has following order of priorities for turn signals.
Currently, this algorithm can sometimes give unnatural (not wrong) blinkers in a complicated situations. This is because it tries to follow the road traffic raw and cannot solve blinker conflicts
clearly in that environment.
Note that the default values for turn_signal_intersection_search_distance
and turn_signal_search_time
is strictly followed by Japanese Road Traffic Laws. So if your country does not allow to use these default values, you should change these values in configuration files.
In this algorithm, it assumes that each blinker has two sections, which are desired section
and required section
. The image of these two sections are depicted in the following diagram.
These two sections have the following meanings.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-desired-section","title":"- Desired Section","text":"- This section is defined by road traffic laws. It cannot be longer or shorter than the designated length defined by the law.\n- In this section, you do not have to activate the designated blinkers if it is dangerous to do so.\n
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-required-section","title":"- Required Section","text":"- In this section, ego vehicle must activate designated blinkers. However, if there are blinker conflicts, it must solve them based on the algorithm we mention later in this document.\n- Required section cannot be longer than desired section.\n
When turning on the blinker, it decides whether or not to turn on the specified blinker based on the distance from the front of the ego vehicle to the start point of each section. Conversely, when turning off the blinker, it calculates the distance from the base link of the ego vehicle to the end point of each section and decide whether or not to turn it off based on that.
For left turn, right turn, avoidance, lane change, goal planner and pull out, we define these two sections, which are elaborated in the following part.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#1-left-and-right-turn","title":"1. Left and Right turn","text":"Turn signal decider checks each lanelet on the map if it has turn_direction
information. If a lanelet has this information, it activates necessary blinker based on this information.
search_distance
for blinkers at intersections is v * turn_signal_search_time + turn_signal_intersection_search_distance
. Then the start point becomes search_distance
meters before the start point of the intersection lanelet(depicted in gree in the following picture), where v
is the velocity of the ego vehicle. However, if we set turn_signal_distance
in the lanelet, we use that length as search distance.Avoidance can be separated into two sections, first section and second section. The first section is from the start point of the path shift to the end of the path shift. The second section is from the end of shift point to the end of avoidance. Note that avoidance module will not activate turn signal when its shift length is below turn_signal_shift_length_threshold
.
First section
v * turn_signal_search_time
meters before the start point of the avoidance shift path.Second section
v * turn_signal_search_time
meters before the start point of the lane change path.v * turn_signal_search_time
meters before the start point of the pull over path.When it comes to handle several blinkers, it gives priority to the first blinker that comes first. However, this rule sometimes activate unnatural blinkers, so turn signal decider uses the following five rules to decide the necessary turn signal.
Based on these five rules, turn signal decider can solve blinker conflicts
. The following pictures show some examples of this kind of conflicts.
In this scenario, ego vehicle has to pass several turns that are close each other. Since this pattern can be solved by the pattern1 rule, the overall result is depicted in the following picture.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-avoidance-with-left-turn-1","title":"- Avoidance with left turn (1)","text":"In this scene, ego vehicle has to deal with the obstacle that is on its original path as well as make a left turn. The overall result can be varied by the position of the obstacle, but the image of the result is described in the following picture.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-avoidance-with-left-turn-2","title":"- Avoidance with left turn (2)","text":"Same as the previous scenario, ego vehicle has to avoid the obstacle as well as make a turn left. However, in this scene, the obstacle is parked after the intersection. Similar to the previous one, the overall result can be varied by the position of the obstacle, but the image of the result is described in the following picture.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-lane-change-and-left-turn","title":"- Lane change and left turn","text":"In this scenario, ego vehicle has to do lane change before making a left turn. In the following example, ego vehicle does not activate left turn signal until it reaches the end point of the lane change path.
"},{"location":"planning/behavior_path_side_shift_module/","title":"Side Shift design","text":""},{"location":"planning/behavior_path_side_shift_module/#side-shift-design","title":"Side Shift design","text":"(For remote control) Shift the path to left or right according to an external instruction.
"},{"location":"planning/behavior_path_side_shift_module/#overview-of-the-side-shift-module-process","title":"Overview of the Side Shift Module Process","text":"requested_lateral_offset_
under the following conditions: a. Verify if the last update time has elapsed. b. Ensure the required lateral offset value is different from the previous one.Please be aware that requested_lateral_offset_
is continuously updated with the latest values and is not queued.
The side shift has three distinct statuses. Note that during the SHIFTING status, the path cannot be updated:
side shift status"},{"location":"planning/behavior_path_side_shift_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_path_start_planner_module/","title":"Start Planner design","text":""},{"location":"planning/behavior_path_start_planner_module/#start-planner-design","title":"Start Planner design","text":""},{"location":"planning/behavior_path_start_planner_module/#purpose-role","title":"Purpose / Role","text":"
The Start Planner module is designed to generate a path from the current ego position to the driving lane, avoiding static obstacles and stopping in response to dynamic obstacles when a collision is detected.
Use cases include:
pull out from side of the road lane
pull out from the shoulder lane"},{"location":"planning/behavior_path_start_planner_module/#design","title":"Design","text":""},{"location":"planning/behavior_path_start_planner_module/#general-parameters-for-start_planner","title":"General parameters for start_planner","text":"Name Unit Type Description Default value th_arrived_distance_m [m] double distance threshold for arrival of path termination 1.0 th_distance_to_middle_of_the_road [m] double distance threshold to determine if the vehicle is on the middle of the road 0.1 th_stopped_velocity_mps [m/s] double velocity threshold for arrival of path termination 0.01 th_stopped_time_sec [s] double time threshold for arrival of path termination 1.0 th_turn_signal_on_lateral_offset [m] double lateral distance threshold for turning on blinker 1.0 intersection_search_length [m] double check if intersections exist within this length 30.0 length_ratio_for_turn_signal_deactivation_near_intersection [m] double deactivate turn signal of this module near intersection 0.5 collision_check_margins [m] [double] Obstacle collision check margins list [2.0, 1.5, 1.0] collision_check_distance_from_end [m] double collision check distance from end shift end pose 1.0 collision_check_margin_from_front_object [m] double collision check margin from front object 5.0 center_line_path_interval [m] double reference center line path point interval 1.0"},{"location":"planning/behavior_path_start_planner_module/#safety-check-with-static-obstacles","title":"Safety check with static obstacles","text":"
1.0 m
), that is judged as a unsafe pathThis is based on the concept of RSS. For the logic used, refer to the link below. See safety check feature explanation
"},{"location":"planning/behavior_path_start_planner_module/#collision-check-performed-range","title":"Collision check performed range","text":"A collision check with dynamic objects is primarily performed between the shift start point and end point. The range for safety check varies depending on the type of path generated, so it will be explained for each pattern.
"},{"location":"planning/behavior_path_start_planner_module/#shift-pull-out","title":"Shift pull out","text":"For the \"shift pull out\", safety verification starts at the beginning of the shift and ends at the shift's conclusion.
"},{"location":"planning/behavior_path_start_planner_module/#geometric-pull-out","title":"Geometric pull out","text":"Since there's a stop at the midpoint during the shift, this becomes the endpoint for safety verification. After stopping, safety verification resumes.
"},{"location":"planning/behavior_path_start_planner_module/#backward-pull-out-start-point-search","title":"Backward pull out start point search","text":"During backward movement, no safety check is performed. Safety check begins at the point where the backward movement ends.
"},{"location":"planning/behavior_path_start_planner_module/#ego-vehicles-velocity-planning","title":"Ego vehicle's velocity planning","text":"WIP
"},{"location":"planning/behavior_path_start_planner_module/#safety-check-in-free-space-area","title":"Safety check in free space area","text":"WIP
"},{"location":"planning/behavior_path_start_planner_module/#parameters-for-safety-check","title":"Parameters for safety check","text":""},{"location":"planning/behavior_path_start_planner_module/#stop-condition-parameters","title":"Stop Condition Parameters","text":"Parameters under stop_condition
define the criteria for stopping conditions.
Parameters under path_safety_check.ego_predicted_path
specify the ego vehicle's predicted path characteristics.
Parameters under target_filtering
are related to filtering target objects for safety check.
Parameters under safety_check_params
define the configuration for safety check.
There are two path generation methods.
"},{"location":"planning/behavior_path_start_planner_module/#shift-pull-out_1","title":"shift pull out","text":"This is the most basic method of starting path planning and is used on road lanes and shoulder lanes when there is no particular obstruction.
Pull out distance is calculated by the speed, lateral deviation, and the lateral jerk. The lateral jerk is searched for among the predetermined minimum and maximum values, and the one that generates a safe path is selected.
shift pull out video
"},{"location":"planning/behavior_path_start_planner_module/#parameters-for-shift-pull-out","title":"parameters for shift pull out","text":"Name Unit Type Description Default value enable_shift_pull_out [-] bool flag whether to enable shift pull out true check_shift_path_lane_departure [-] bool flag whether to check if shift path footprints are out of lane false shift_pull_out_velocity [m/s] double velocity of shift pull out 2.0 pull_out_sampling_num [-] int Number of samplings in the minimum to maximum range of lateral_jerk 4 maximum_lateral_jerk [m/s3] double maximum lateral jerk 2.0 minimum_lateral_jerk [m/s3] double minimum lateral jerk 0.1 minimum_shift_pull_out_distance [m] double minimum shift pull out distance. if calculated pull out distance is shorter than this, use this for path generation. 0.0 maximum_curvature [m] double maximum curvature. The pull out distance is calculated so that the curvature is smaller than this value. 0.07"},{"location":"planning/behavior_path_start_planner_module/#geometric-pull-out_1","title":"geometric pull out","text":"Generate two arc paths with discontinuous curvature. Ego-vehicle stops once in the middle of the path to control the steer on the spot. See also [1] for details of the algorithm.
geometric pull out video
"},{"location":"planning/behavior_path_start_planner_module/#parameters-for-geometric-pull-out","title":"parameters for geometric pull out","text":"Name Unit Type Description Default value enable_geometric_pull_out [-] bool flag whether to enable geometric pull out true divide_pull_out_path [-] bool flag whether to divide arc paths. The path is assumed to be divided because the curvature is not continuous. But it requires a stop during the departure. false geometric_pull_out_velocity [m/s] double velocity of geometric pull out 1.0 arc_path_interval [m] double path points interval of arc paths of geometric pull out 1.0 lane_departure_margin [m] double margin of deviation to lane right 0.2 pull_out_max_steer_angle [rad] double maximum steer angle for path generation 0.26"},{"location":"planning/behavior_path_start_planner_module/#backward-pull-out-start-point-search_1","title":"backward pull out start point search","text":"If a safe path cannot be generated from the current position, search backwards for a pull out start point at regular intervals(default: 2.0
).
pull out after backward driving video
"},{"location":"planning/behavior_path_start_planner_module/#search-priority","title":"search priority","text":"If a safe path with sufficient clearance for static obstacles cannot be generated forward, a backward search from the vehicle's current position is conducted to locate a suitable start point for a pull out path generation.
During this backward search, different policies can be applied based on search_priority
parameters:
Selecting efficient_path
focuses on creating a shift pull out path, regardless of how far back the vehicle needs to move. Opting for short_back_distance
aims to find a location with the least possible backward movement.
PriorityOrder
is defined as a vector of pairs, where each element consists of a size_t
index representing a start pose candidate index, and the planner type. The PriorityOrder vector is processed sequentially from the beginning, meaning that the pairs listed at the top of the vector are given priority in the selection process for pull out path generation.
efficient_path
","text":"When search_priority
is set to efficient_path
and the preference is for prioritizing shift_pull_out
, the PriorityOrder
array is populated in such a way that shift_pull_out
is grouped together for all start pose candidates before moving on to the next planner type. This prioritization is reflected in the order of the array, with shift_pull_out
being listed before geometric_pull_out.
This approach prioritizes trying all candidates with shift_pull_out
before proceeding to geometric_pull_out
, which may be efficient in situations where shift_pull_out
is likely to be appropriate.
short_back_distance
","text":"For search_priority
set to short_back_distance
, the array alternates between planner types for each start pose candidate, which can minimize the distance the vehicle needs to move backward if the earlier candidates are successful.
This ordering is beneficial when the priority is to minimize the backward distance traveled, giving an equal chance for each planner to succeed at the closest possible starting position.
"},{"location":"planning/behavior_path_start_planner_module/#parameters-for-backward-pull-out-start-point-search","title":"parameters for backward pull out start point search","text":"Name Unit Type Description Default value enable_back [-] bool flag whether to search backward for start_point true search_priority [-] string In the case ofefficient_path
, use efficient paths even if the back distance is longer. In case of short_back_distance
, use a path with as short a back distance efficient_path max_back_distance [m] double maximum back distance 30.0 backward_search_resolution [m] double distance interval for searching backward pull out start point 2.0 backward_path_update_duration [s] double time interval for searching backward pull out start point. this prevents chattering between back driving and pull_out 3.0 ignore_distance_from_lane_end [m] double If distance from shift start pose to end of shoulder lane is less than this value, this start pose candidate is ignored 15.0"},{"location":"planning/behavior_path_start_planner_module/#freespace-pull-out","title":"freespace pull out","text":"If the vehicle gets stuck with pull out along lanes, execute freespace pull out. To run this feature, you need to set parking_lot
to the map, activate_by_scenario
of costmap_generator to false
and enable_freespace_planner
to true
See freespace_planner for other parameters.
"},{"location":"planning/behavior_velocity_blind_spot_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_blind_spot_module/#blind-spot","title":"Blind Spot","text":""},{"location":"planning/behavior_velocity_blind_spot_module/#role","title":"Role","text":"Blind spot module checks possible collisions with bicycles and pedestrians running on its left/right side while turing left/right before junctions.
"},{"location":"planning/behavior_velocity_blind_spot_module/#activation-timing","title":"Activation Timing","text":"This function is activated when the lane id of the target path has an intersection label (i.e. the turn_direction
attribute is left
or right
).
Sets a stop line, a pass judge line, a detection area and conflict area based on a map information and a self position.
Stop/Go state: When both conditions are met for any of each object, this module state is transited to the \"stop\" state and insert zero velocity to stop the vehicle.
In order to avoid a rapid stop, the \u201cstop\u201d judgement is not executed after the judgment line is passed.
Once a \"stop\" is judged, it will not transit to the \"go\" state until the \"go\" judgment continues for a certain period in order to prevent chattering of the state (e.g. 2 seconds).
"},{"location":"planning/behavior_velocity_blind_spot_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Descriptionstop_line_margin
double [m] a margin that the vehicle tries to stop before stop_line backward_length
double [m] distance from closest path point to the edge of beginning point. ignore_width_from_center_line
double [m] ignore threshold that vehicle behind is collide with ego vehicle or not max_future_movement_time
double [s] maximum time for considering future movement of object adjacent_extend_width
double [m] if adjacent lane e.g. bicycle only lane exists, blind_spot area is expanded by this length"},{"location":"planning/behavior_velocity_blind_spot_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_crosswalk_module/","title":"Crosswalk","text":""},{"location":"planning/behavior_velocity_crosswalk_module/#crosswalk","title":"Crosswalk","text":""},{"location":"planning/behavior_velocity_crosswalk_module/#role","title":"Role","text":"This module judges whether the ego should stop in front of the crosswalk in order to provide safe passage for crosswalk users, such as pedestrians and bicycles, based on the objects' behavior and surround traffic.
"},{"location":"planning/behavior_velocity_crosswalk_module/#features","title":"Features","text":""},{"location":"planning/behavior_velocity_crosswalk_module/#yield","title":"Yield","text":""},{"location":"planning/behavior_velocity_crosswalk_module/#target-object","title":"Target Object","text":"The crosswalk module handles objects of the types defined by the following parameters in the object_filtering.target_object
namespace.
unknown
[-] bool whether to look and stop by UNKNOWN objects pedestrian
[-] bool whether to look and stop by PEDESTRIAN objects bicycle
[-] bool whether to look and stop by BICYCLE objects motorcycle
[-] bool whether to look and stop by MOTORCYCLE objects In order to handle the crosswalk users crossing the neighborhood but outside the crosswalk, the crosswalk module creates an attention area around the crosswalk, shown as the yellow polygon in the figure. If the object's predicted path collides with the attention area, the object will be targeted for yield.
The neighborhood is defined by the following parameter in the object_filtering.target_object
namespace.
crosswalk_attention_range
[m] double the detection area is defined as -X meters before the crosswalk to +X meters behind the crosswalk"},{"location":"planning/behavior_velocity_crosswalk_module/#stop-position","title":"Stop Position","text":"First of all, stop_distance_from_object [m]
is always kept at least between the ego and the target object for safety.
When the stop line exists in the lanelet map, the stop position is calculated based on the line. When the stop line does NOT exist in the lanelet map, the stop position is calculated by keeping stop_distance_from_crosswalk [m]
between the ego and the crosswalk.
As an exceptional case, if a pedestrian (or bicycle) is crossing wide crosswalks seen in scramble intersections, and the pedestrian position is more than far_object_threshold
meters away from the stop line, the actual stop position is determined by stop_distance_from_object
and pedestrian position, not at the stop line.
In the stop_position
namespace, the following parameters are defined.
stop_position_threshold
[m] double If the ego vehicle has stopped near the stop line than this value, this module assumes itself to have achieved yielding. stop_distance_from_crosswalk
[m] double make stop line away from crosswalk for the Lanelet2 map with no explicit stop lines far_object_threshold
[m] double If objects cross X meters behind the stop line, the stop position is determined according to the object position (stop_distance_from_object meters before the object) for the case where the crosswalk width is very wide stop_distance_from_object
[m] double the vehicle decelerates to be able to stop in front of object with margin"},{"location":"planning/behavior_velocity_crosswalk_module/#yield-decision","title":"Yield decision","text":"The module makes a decision to yield only when the pedestrian traffic light is GREEN or UNKNOWN. The decision is based on the following variables, along with the calculation of the collision point.
We classify ego behavior at crosswalks into three categories according to the relative relationship between TTC and TTV [1].
The boundary of A and B is interpolated from ego_pass_later_margin_x
and ego_pass_later_margin_y
. In the case of the upper figure, ego_pass_later_margin_x
is {0, 1, 2}
and ego_pass_later_margin_y
is {1, 4, 6}
. In the same way, the boundary of B and C is calculated from ego_pass_first_margin_x
and ego_pass_first_margin_y
. In the case of the upper figure, ego_pass_first_margin_x
is {3, 5}
and ego_pass_first_margin_y
is {0, 1}
.
In the pass_judge
namespace, the following parameters are defined.
ego_pass_first_margin_x
[[s]] double time to collision margin vector for ego pass first situation (the module judges that ego don't have to stop at TTC + MARGIN < TTV condition) ego_pass_first_margin_y
[[s]] double time to vehicle margin vector for ego pass first situation (the module judges that ego don't have to stop at TTC + MARGIN < TTV condition) ego_pass_first_additional_margin
[s] double additional time margin for ego pass first situation to suppress chattering ego_pass_later_margin_x
[[s]] double time to vehicle margin vector for object pass first situation (the module judges that ego don't have to stop at TTV + MARGIN < TTC condition) ego_pass_later_margin_y
[[s]] double time to collision margin vector for object pass first situation (the module judges that ego don't have to stop at TTV + MARGIN < TTC condition) ego_pass_later_additional_margin
[s] double additional time margin for object pass first situation to suppress chattering"},{"location":"planning/behavior_velocity_crosswalk_module/#smooth-yield-decision","title":"Smooth Yield Decision","text":"If the object is stopped near the crosswalk but has no intention of walking, a situation can arise in which the ego continues to yield the right-of-way to the object. To prevent such a deadlock situation, the ego will cancel yielding depending on the situation.
"},{"location":"planning/behavior_velocity_crosswalk_module/#cases-without-traffic-lights","title":"Cases without traffic lights","text":"For the object stopped around the crosswalk but has no intention to walk (*1), after the ego has keep stopping to yield for a specific time (*2), the ego cancels the yield and starts driving.
*1: The time is calculated by the interpolation of distance between the object and crosswalk with distance_map_for_no_intention_to_walk
and timeout_map_for_no_intention_to_walk
.
In the pass_judge
namespace, the following parameters are defined.
distance_map_for_no_intention_to_walk
[[m]] double distance map to calculate the timeout for no intention to walk with interpolation timeout_map_for_no_intention_to_walk
[[s]] double timeout map to calculate the timeout for no intention to walk with interpolation *2: In the pass_judge
namespace, the following parameters are defined.
timeout_ego_stop_for_yield
[s] double If the ego maintains the stop for this amount of time, then the ego proceeds, assuming it has stopped long time enough."},{"location":"planning/behavior_velocity_crosswalk_module/#cases-with-traffic-lights","title":"Cases with traffic lights","text":"The ego will cancel the yield without stopping when the object stops around the crosswalk but has no intention to walk (*1). This comes from the assumption that the object has no intention to walk since it is stopped even though the pedestrian traffic light is green.
*1: The crosswalk user's intention to walk is calculated in the same way as Cases without traffic lights
.
Due to the perception's limited performance where the tree or poll is recognized as a pedestrian or the tracking failure in the crowd or occlusion, even if the surrounding environment does not change, the new pedestrian (= the new ID's pedestrian) may suddenly appear unexpectedly. If this happens while the ego is going to pass the crosswalk, the ego will stop suddenly.
To deal with this issue, the option disable_yield_for_new_stopped_object
is prepared. If true is set, the yield decisions around the crosswalk with a traffic light will ignore the new stopped object.
In the pass_judge
namespace, the following parameters are defined.
disable_yield_for_new_stopped_object
[-] bool If set to true, the new stopped object will be ignored around the crosswalk with a traffic light"},{"location":"planning/behavior_velocity_crosswalk_module/#safety-slow-down-behavior","title":"Safety Slow Down Behavior","text":"In the current autoware implementation, if no target object is detected around a crosswalk, the ego vehicle will not slow down for the crosswalk. However, it may be desirable to slow down in situations, for example, where there are blind spots. Such a situation can be handled by setting some tags to the related crosswalk as instructed in the lanelet2_format_extension.md document.
Parameter Type Descriptionslow_velocity
[m/s] double target vehicle velocity when module receive slow down command from FOA max_slow_down_jerk
[m/sss] double minimum jerk deceleration for safe brake max_slow_down_accel
[m/ss] double minimum accel deceleration for safe brake no_relax_velocity
[m/s] double if the current velocity is less than X m/s, ego always stops at the stop position(not relax deceleration constraints)"},{"location":"planning/behavior_velocity_crosswalk_module/#stuck-vehicle-detection","title":"Stuck Vehicle Detection","text":"The feature will make the ego not to stop on the crosswalk. When there is a low-speed or stopped vehicle ahead of the crosswalk, and there is not enough space between the crosswalk and the vehicle, the crosswalk module plans to stop before the crosswalk even if there are no pedestrians or bicycles.
min_acc
, min_jerk
, and max_jerk
are met. If the ego cannot stop before the crosswalk with these parameters, the stop position will move forward.
In the stuck_vehicle
namespace, the following parameters are defined.
stuck_vehicle_velocity
[m/s] double maximum velocity threshold whether the target vehicle is stopped or not max_stuck_vehicle_lateral_offset
[m] double maximum lateral offset of the target vehicle position required_clearance
[m] double clearance to be secured between the ego and the ahead vehicle min_acc
[m/ss] double minimum acceleration to stop min_jerk
[m/sss] double minimum jerk to stop max_jerk
[m/sss] double maximum jerk to stop"},{"location":"planning/behavior_velocity_crosswalk_module/#others","title":"Others","text":"In the common
namespace, the following parameters are defined.
show_processing_time
[-] bool whether to show processing time traffic_light_state_timeout
[s] double timeout threshold for traffic light signal enable_rtc
[-] bool if true, the scene modules should be approved by (request to cooperate)rtc function. if false, the module can be run without approval from rtc."},{"location":"planning/behavior_velocity_crosswalk_module/#known-issues","title":"Known Issues","text":"/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/crosswalk
shows the following markers.
ros2 run behavior_velocity_crosswalk_module time_to_collision_plotter.py\n
enables you to visualize the following figure of the ego and pedestrian's time to collision. The label of each plot is <crosswalk module id>-<pedestrian uuid>
.
ego_pass_later_margin
described in Yield Decisionego_pass_later_margin
described in Yield Decision[1] \u4f50\u85e4 \u307f\u306a\u307f, \u65e9\u5742 \u7965\u4e00, \u6e05\u6c34 \u653f\u884c, \u6751\u91ce \u9686\u5f66, \u6a2a\u65ad\u6b69\u884c\u8005\u306b\u5bfe\u3059\u308b\u30c9\u30e9\u30a4\u30d0\u306e\u30ea\u30b9\u30af\u56de\u907f\u884c\u52d5\u306e\u30e2\u30c7\u30eb\u5316, \u81ea\u52d5\u8eca\u6280\u8853\u4f1a\u8ad6\u6587\u96c6, 2013, 44 \u5dfb, 3 \u53f7, p. 931-936.
"},{"location":"planning/behavior_velocity_detection_area_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_detection_area_module/#detection-area","title":"Detection Area","text":""},{"location":"planning/behavior_velocity_detection_area_module/#role","title":"Role","text":"If pointcloud is detected in a detection area defined on a map, the stop planning will be executed at the predetermined point.
"},{"location":"planning/behavior_velocity_detection_area_module/#activation-timing","title":"Activation Timing","text":"This module is activated when there is a detection area on the target lane.
"},{"location":"planning/behavior_velocity_detection_area_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Descriptionuse_dead_line
bool [-] weather to use dead line or not use_pass_judge_line
bool [-] weather to use pass judge line or not state_clear_time
double [s] when the vehicle is stopping for certain time without incoming obstacle, move to STOPPED state stop_margin
double [m] a margin that the vehicle tries to stop before stop_line dead_line_margin
double [m] ignore threshold that vehicle behind is collide with ego vehicle or not hold_stop_margin_distance
double [m] parameter for restart prevention (See Algorithm section) distance_to_judge_over_stop_line
double [m] parameter for judging that the stop line has been crossed"},{"location":"planning/behavior_velocity_detection_area_module/#inner-workings-algorithm","title":"Inner-workings / Algorithm","text":"If it needs X meters (e.g. 0.5 meters) to stop once the vehicle starts moving due to the poor vehicle control performance, the vehicle goes over the stopping position that should be strictly observed when the vehicle starts to moving in order to approach the near stop point (e.g. 0.3 meters away).
This module has parameter hold_stop_margin_distance
in order to prevent from these redundant restart. If the vehicle is stopped within hold_stop_margin_distance
meters from stop point of the module (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped for the module's stop point and plans to keep stopping current position even if the vehicle is stopped due to other factors.
parameters
outside the hold_stop_margin_distance
inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_dynamic_obstacle_stop_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_dynamic_obstacle_stop_module/#dynamic-obstacle-stop","title":"Dynamic Obstacle Stop","text":""},{"location":"planning/behavior_velocity_dynamic_obstacle_stop_module/#role","title":"Role","text":"
dynamic_obstacle_stop
is a module that stops the ego vehicle from entering the immediate path of a dynamic object.
The immediate path of an object is the area that the object would traverse during a given time horizon, assuming constant velocity and heading.
"},{"location":"planning/behavior_velocity_dynamic_obstacle_stop_module/#activation-timing","title":"Activation Timing","text":"This module is activated if the launch parameter launch_dynamic_obstacle_stop_module
is set to true in the behavior planning launch file.
The module insert a stop point where the ego path collides with the immediate path of an object. The overall module flow can be summarized with the following 4 steps.
In addition to these 4 steps, 2 mechanisms are in place to make the stop point of this module more stable: an hysteresis and a decision duration buffer.
The hysteresis
parameter is used when a stop point was already being inserted in the previous iteration and it increases the range where dynamic objects are considered close enough to the ego path to be used by the module.
The decision_duration_buffer
parameter defines the duration when the module will keep inserted the previous stop point, even after no collisions were found.
An object is considered by the module only if it meets all of the following conditions:
minimum_object_velocity
parameter;For the last condition, the object is considered close enough if its lateral distance from the ego path is less than the threshold parameter minimum_object_distance_from_ego_path
plus half the width of ego and of the object (including the extra_object_width
parameter). In addition, the value of the hysteresis
parameter is added to the minimum distance if a stop point was inserted in the previous iteration.
For each considered object, a rectangle is created representing its immediate path. The rectangle has the width of the object plus the extra_object_width
parameter and its length is the current speed of the object multiplied by the time_horizon
.
We build the ego path footprints as the set of ego footprint polygons projected on each path point. We then calculate the intersections between these ego path footprints and the previously calculated immediate path rectangles. An intersection is ignored if the object is not driving toward ego, i.e., the absolute angle between the object and the path point is larger than \\(\\frac{3 \\pi}{4}\\).
The collision point with the lowest arc length when projected on the ego path will be used to calculate the final stop point.
"},{"location":"planning/behavior_velocity_dynamic_obstacle_stop_module/#insert-stop-point","title":"Insert stop point","text":"Before inserting a stop point, we calculate the range of path arc lengths where it can be inserted. The minimum is calculated to satisfy the acceleration and jerk constraints of the vehicle. If a stop point was inserted in the previous iteration of the module, its arc length is used as the maximum. Finally, the stop point arc length is calculated to be the arc length of the previously found collision point minus the stop_distance_buffer
and the ego vehicle longitudinal offset, clamped between the minimum and maximum values.
extra_object_width
double [m] extra width around detected objects minimum_object_velocity
double [m/s] objects with a velocity bellow this value are ignored stop_distance_buffer
double [m] extra distance to add between the stop point and the collision point time_horizon
double [s] time horizon used for collision checks hysteresis
double [m] once a collision has been detected, this hysteresis is used on the collision detection decision_duration_buffer
double [s] duration between no collision being detected and the stop decision being cancelled minimum_object_distance_from_ego_path
double [m] minimum distance between the footprints of ego and an object to consider for collision"},{"location":"planning/behavior_velocity_intersection_module/","title":"Intersection","text":""},{"location":"planning/behavior_velocity_intersection_module/#intersection","title":"Intersection","text":""},{"location":"planning/behavior_velocity_intersection_module/#role","title":"Role","text":"The intersection module is responsible for safely passing urban intersections by:
This module is designed to be agnostic to left-hand/right-hand traffic rules and work for crossroads, T-shape junctions, etc. Roundabout is not formally supported in this module.
"},{"location":"planning/behavior_velocity_intersection_module/#activation-condition","title":"Activation condition","text":"This module is activated when the path contains the lanes with turn_direction tag. More precisely, if the lane_ids of the path contain the ids of those lanes, corresponding instances of intersection module are activated on each lane respectively.
"},{"location":"planning/behavior_velocity_intersection_module/#requirementslimitations","title":"Requirements/Limitations","text":"The attention area in the intersection is defined as the set of lanes that are conflicting with ego path and their preceding lanes up to common.attention_area_length
meters. By default RightOfWay tag is not set, so the attention area covers all the conflicting lanes and its preceding lanes as shown in the first row. RightOfWay tag is used to rule out the lanes that each lane has priority given the traffic light relation and turn_direction priority. In the second row, purple lanes are set as the yield_lane of the ego_lane in the RightOfWay tag.
intersection_area, which is supposed to be defined on the HDMap, is an area converting the entire intersection.
"},{"location":"planning/behavior_velocity_intersection_module/#in-phaseanti-phase-signal-group","title":"In-phase/Anti-phase signal group","text":"The terms \"in-phase signal group\" and \"anti-phase signal group\" are introduced to distinguish the lanes by the timing of traffic light regulation as shown in below figure.
The set of intersection lanes whose color is in sync with lane L1 is called the in-phase signal group of L1, and the set of remaining lanes is called the anti-phase signal group.
"},{"location":"planning/behavior_velocity_intersection_module/#how-towhy-set-rightofway-tag","title":"How-to/Why set RightOfWay tag","text":"Ideally RightOfWay tag is unnecessary if ego has perfect knowledge of all traffic signal information because:
That allows ego to generate the attention area dynamically using the real time traffic signal information. However this ideal condition rarely holds unless the traffic signal information is provided through the infrastructure. Also there maybe be very complicated/bad intersection maps where multiple lanes overlap in a complex manner.
common.use_map_right_of_way
to false and there is no need to set RightOfWay tag on the map. The intersection module will generate the attention area by checking traffic signal and corresponding conflicting lanes. This feature is not implemented yet.common.use_map_right_of_way
to true. If you do not want to detect vehicles on the anti-phase signal group lanes, set them as yield_lane for ego lane.To help the intersection module care only a set of limited lanes, RightOfWay tag needs to be properly set.
Following table shows an example of how to set yield_lanes to each lane in a intersection w/o traffic lights. Since it is not apparent how to uniquely determine signal phase group for a set of intersection lanes in geometric/topological manner, yield_lane needs to be set manually. Straight lanes with traffic lights are exceptionally handled to detect no lanes because commonly it has priority over all the other lanes, so no RightOfWay setting is required.
turn direction of right_of_way yield_lane(with traffic light) yield_lane(without traffic light) straight not need to set yield_lane(this case is special) left/right conflicting lanes of in-phase group left(Left hand traffic) all conflicting lanes of the anti-phase group and right conflicting lanes of in-phase group right conflicting lanes of in-phase group right(Left hand traffic) all conflicting lanes of the anti-phase group no yield_lane left(Right hand traffic) all conflicting lanes of the anti-phase group no yield_lane right(Right hand traffic) all conflicting lanes of the anti-phase group and right conflicting lanes of in-phase group left conflicting lanes of in-phase groupThis setting gives the following attention_area
configurations.
For complex/bad intersection map like the one illustrated below, additional RightOfWay setting maybe necessary.
The bad points are:
Following figure illustrates important positions used in the intersection module. Note that each solid line represents ego front line position and the corresponding dot represents the actual inserted stop point position for the vehicle frame, namely the center of the rear wheel.
To precisely calculate stop positions, the path is interpolated at the certain interval of common.path_interpolation_ds
.
common.default_stopline_margin
meters behind first_attention_stopline is defined as default_stopline instead.For stuck vehicle detection and collision detection, this module checks car, bus, truck, trailer, motor cycle, and bicycle type objects.
Objects that satisfy all of the following conditions are considered as target objects (possible collision objects):
common.attention_area_margin
) .common.attention_area_angle_threshold
).There are several behaviors depending on the scene.
behavior scene action Safe Ego detected no occlusion and collision Ego passes the intersection StuckStop The exit of the intersection is blocked by traffic jam Ego stops before the intersection or the boundary of attention area YieldStuck Another vehicle stops to yield ego Ego stops before the intersection or the boundary of attention area NonOccludedCollisionStop Ego detects no occlusion but detects collision Ego stops at default_stopline FirstWaitBeforeOcclusion Ego detected occlusion when entering the intersection Ego stops at default_stopline at first PeekingTowardOcclusion Ego detected occlusion and but no collision within the FOV (after FirstWaitBeforeOcclusion) Ego approaches the boundary of the attention area slowly OccludedCollisionStop Ego detected both occlusion and collision (after FirstWaitBeforeOcclusion) Ego stops immediately FullyPrioritized Ego is fully prioritized by the RED/Arrow signal Ego only cares vehicles still running inside the intersection. Occlusion is ignored OverPassJudgeLine Ego is already inside the attention area and/or cannot stop before the boundary of attention area Ego does not detect collision/occlusion anymore and passes the intersection "},{"location":"planning/behavior_velocity_intersection_module/#stuck-vehicle-detection","title":"Stuck Vehicle Detection","text":"If there is any object on the path inside the intersection and at the exit of the intersection (up to stuck_vehicle.stuck_vehicle_detect_dist
) lane and its velocity is less than the threshold (stuck_vehicle.stuck_vehicle_velocity_threshold
), the object is regarded as a stuck vehicle. If stuck vehicles exist, this module inserts a stopline a certain distance (=default_stopline_margin
) before the overlapped region with other lanes. The stuck vehicle detection area is generated based on the planned path, so the stuck vehicle stopline is not inserted if the upstream module generated an avoidance path.
If there is any stopped object on the attention lanelet between the intersection point with ego path and the position which is yield_stuck.distance_threshold
before that position, the object is regarded as yielding to ego vehicle. In this case ego is given the right-of-way by the yielding object but this module inserts stopline to prevent entry into the intersection. This scene happens when the object is yielding against ego or the object is waiting before the crosswalk around the exit of the intersection.
The following process is performed for the targets objects to determine whether ego can pass the intersection safely. If it is judged that ego cannot pass the intersection with enough margin, this module inserts a stopline on the path.
collision_detection.min_predicted_path_confidence
is used.collision_detection.collision_start_margin_time
, \\(t\\) + collision_detection.collision_end_margin_time
]The parameters collision_detection.collision_start_margin_time
and collision_detection.collision_end_margin_time
can be interpreted as follows:
collision_detection.collision_start_margin_time
.collision_detection.collision_end_margin_time
.If collision is detected, the state transits to \"STOP\" immediately. On the other hand, the state does not transit to \"GO\" unless safe judgement continues for a certain period collision_detection.collision_detection_hold_time
to prevent the chattering of decisions.
Currently, the intersection module uses motion_velocity_smoother
feature to precisely calculate ego velocity profile along the intersection lane under longitudinal/lateral constraints. If the flag collision_detection.velocity_profile.use_upstream
is true, the target velocity profile of the original path is used. Otherwise the target velocity is set to collision.velocity_profile.default_velocity
. In the trajectory smoothing process the target velocity at/before ego trajectory points are set to ego current velocity. The smoothed trajectory is then converted to an array of (time, distance) which indicates the arrival time to each trajectory point on the path from current ego position. You can visualize this array by adding the lane id to debug.ttc
and running
ros2 run behavior_velocity_intersection_module ttc.py --lane_id <lane_id>\n
"},{"location":"planning/behavior_velocity_intersection_module/#occlusion-detection","title":"Occlusion detection","text":"If the flag occlusion.enable
is true this module checks if there is sufficient field of view (FOV) on the attention area up to occlusion.occlusion_attention_area_length
. If FOV is not clear enough ego first makes a brief stop at default_stopline for occlusion.temporal_stop_time_before_peeking
, and then slowly creeps toward occlusion_peeking_stopline. If occlusion.creep_during_peeking.enable
is true occlusion.creep_during_peeking.creep_velocity
is inserted up to occlusion_peeking_stopline. Otherwise only stop line is inserted.
During the creeping if collision is detected this module inserts a stop line in front of ego immediately, and if the FOV gets sufficiently clear the intersection_occlusion wall will disappear. If occlusion is cleared and no collision is detected ego will pass the intersection.
The occlusion is detected as the common area of occlusion attention area(which is partially the same as the normal attention area) and the unknown cells of the occupancy grid map. The occupancy grid map is denoised using morphology with the window size of occlusion.denoise_kernel
. The occlusion attention area lanes are discretized to line strings and they are used to generate a grid whose each cell represents the distance from ego path along the lane as shown below.
If the nearest occlusion cell value is below the threshold occlusion.occlusion_required_clearance_distance
, it means that the FOV of ego is not clear. It is expected that the occlusion gets cleared as the vehicle approaches the occlusion peeking stop line.
At intersection with traffic light, the whereabout of occlusion is estimated by checking if there are any objects between ego and the nearest occlusion cell. While the occlusion is estimated to be caused by some object (DYNAMICALLY occluded), intersection_wall appears at all times. If no objects are found between ego and the nearest occlusion cell (STATICALLY occluded), after ego stopped for the duration of occlusion.static_occlusion_with_traffic_light_timeout
plus occlusion.occlusion_detection_hold_time
, occlusion is intentionally ignored to avoid stuck.
The remaining time is visualized on the intersection_occlusion virtual wall.
"},{"location":"planning/behavior_velocity_intersection_module/#occlusion-handling-at-intersection-without-traffic-light","title":"Occlusion handling at intersection without traffic light","text":"At intersection without traffic light, if occlusion is detected, ego makes a brief stop at default_stopline and first_attention_stopline respectively. After stopping at the first_attention_area_stopline this module inserts occlusion.absence_traffic_light.creep_velocity
velocity between ego and occlusion_wo_tl_pass_judge_line while occlusion is not cleared. If collision is detected, ego immediately stops. Once the occlusion is cleared or ego has passed occlusion_wo_tl_pass_judge_line this module does not detect collision and occlusion because ego footprint is already inside the intersection.
While ego is creeping, yellow intersection_wall appears in front ego.
"},{"location":"planning/behavior_velocity_intersection_module/#traffic-signal-specific-behavior","title":"Traffic signal specific behavior","text":""},{"location":"planning/behavior_velocity_intersection_module/#collision-detection_1","title":"Collision detection","text":"TTC parameter varies depending on the traffic light color/shape as follows.
traffic light color ttc(start) ttc(end) GREENcollision_detection.not_prioritized.collision_start_margin
collision_detection.not_prioritized.collision_end_margin
AMBER collision_detection.partially_prioritized.collision_start_end_margin
collision_detection.partially_prioritized.collision_start_end_margin
RED / Arrow collision_detection.fully_prioritized.collision_start_end_margin
collision_detection.fully_prioritized.collision_start_end_margin
"},{"location":"planning/behavior_velocity_intersection_module/#yield-on-green","title":"yield on GREEN","text":"If the traffic light color changed to GREEN and ego approached the entry of the intersection lane within the distance collision_detection.yield_on_green_traffic_light.distance_to_assigned_lanelet_start
and there is any object whose distance to its stopline is less than collision_detection.yield_on_green_traffic_light.object_dist_to_stopline
, this module commands to stop for the duration of collision_detection.yield_on_green_traffic_light.duration
at default_stopline.
If the traffic light color is AMBER but the object is expected to stop before its stopline under the deceleration of collision_detection.ignore_on_amber_traffic_light.object_expected_deceleration
, collision checking is skipped.
If the traffic light color is RED or Arrow signal is turned on, the attention lanes which are not conflicting with ego lane are not used for detection. And even if the object stops with a certain overshoot from its stopline, but its expected stop position under the deceleration of collision_detection.ignore_on_amber_traffic_light.object_expected_deceleration
is more than the distance collision_detection.ignore_on_red_traffic_light.object_margin_to_path
from collision point, the object is ignored.
When the traffic light color/shape is RED/Arrow, occlusion detection is skipped.
"},{"location":"planning/behavior_velocity_intersection_module/#pass-judge-line","title":"Pass Judge Line","text":"Generally it is not tolerable for vehicles that have lower traffic priority to stop in the middle of the unprotected area in intersections, and they need to stop at the stop line beforehand if there will be any risk of collision, which introduces two requirements:
The position which is before the boundary of unprotected area by the braking distance which is obtained by
\\[ \\dfrac{v_{\\mathrm{ego}}^{2}}{2a_{\\mathrm{max}}} + v_{\\mathrm{ego}} * t_{\\mathrm{delay}} \\]is called pass_judge_line, and safety decision must be made before ego passes this position because ego does not stop anymore.
1st_pass_judge_line is before the first upcoming lane, and at intersections with multiple upcoming lanes, 2nd_pass_judge_line is defined as the position which is before the centerline of the first attention lane by the braking distance. 1st/2nd_pass_judge_line are illustrated in the following figure.
Intersection module will command to GO if
common.enable_pass_judge_before_default_stopline
is true) ANDbecause it is expected to stop or continue stop decision if
common.enable_pass_judge_before_default_stopline
is false ORFor the 3rd condition, it is possible that ego stops with some overshoot to the unprotected area while it is trying to stop for collision detection, because ego should keep stop decision while UNSAFE decision is made even if it passed 1st_pass_judge_line during deceleration.
For the 4th condition, at intersections with 2nd attention lane, even if ego is over the 1st pass_judge_line, still intersection module commands to stop if the most probable collision is expected to happen in the 2nd attention lane.
Also if occlusion.enable
is true, the position of 1st_pass_judge line changes to occlusion_peeking_stopline if ego passed the original 1st_pass_judge_line position while ego is peeking. Otherwise ego could inadvertently judge that it passed 1st_pass_judge during peeking and then abort peeking.
Each data structure is defined in util_type.hpp
.
IntersectionLanelets
","text":""},{"location":"planning/behavior_velocity_intersection_module/#intersectionstoplines","title":"IntersectionStopLines
","text":"Each stop lines are generated from interpolated path points to obtain precise positions.
"},{"location":"planning/behavior_velocity_intersection_module/#targetobject","title":"TargetObject
","text":"TargetObject
holds the object, its belonging lane and corresponding stopline information.
.attention_area_length
double [m] range for object detection .attention_area_margin
double [m] margin for expanding attention area width .attention_area_angle_threshold
double [rad] threshold of angle difference between the detected object and lane .use_intersection_area
bool [-] flag to use intersection_area for collision detection .default_stopline_margin
double [m] margin before_stop_line .stopline_overshoot_margin
double [m] margin for the overshoot from stopline .max_accel
double [m/ss] max acceleration for stop .max_jerk
double [m/sss] max jerk for stop .delay_response_time
double [s] action delay before stop .enable_pass_judge_before_default_stopline
bool [-] flag not to stop before default_stopline even if ego is over pass_judge_line"},{"location":"planning/behavior_velocity_intersection_module/#stuck_vehicleyield_stuck","title":"stuck_vehicle/yield_stuck","text":"Parameter Type Description stuck_vehicle.turn_direction
- [-] turn_direction specifier for stuck vehicle detection stuck_vehicle.stuck_vehicle_detect_dist
double [m] length toward from the exit of intersection for stuck vehicle detection stuck_vehicle.stuck_vehicle_velocity_threshold
double [m/s] velocity threshold for stuck vehicle detection yield_stuck.distance_threshold
double [m/s] distance threshold of yield stuck vehicle from ego path along the lane"},{"location":"planning/behavior_velocity_intersection_module/#collision_detection","title":"collision_detection","text":"Parameter Type Description .consider_wrong_direction_vehicle
bool [-] flag to detect objects in the wrong direction .collision_detection_hold_time
double [s] hold time of collision detection .min_predicted_path_confidence
double [-] minimum confidence value of predicted path to use for collision detection .keep_detection_velocity_threshold
double [s] ego velocity threshold for continuing collision detection before pass judge line .velocity_profile.use_upstream
bool [-] flag to use velocity profile planned by upstream modules .velocity_profile.minimum_upstream_velocity
double [m/s] minimum velocity of upstream velocity profile to avoid zero division .velocity_profile.default_velocity
double [m/s] constant velocity profile when use_upstream is false .velocity_profile.minimum_default_velocity
double [m/s] minimum velocity of default velocity profile to avoid zero division .yield_on_green_traffic_light
- [-] description .ignore_amber_traffic_light
- [-] description .ignore_on_red_traffic_light
- [-] description"},{"location":"planning/behavior_velocity_intersection_module/#occlusion","title":"occlusion","text":"Parameter Type Description .enable
bool [-] flag to calculate occlusion detection .occlusion_attention_area_length
double [m] the length of attention are for occlusion detection .free_space_max
int [-] maximum value of occupancy grid cell to treat at occluded .occupied_min
int [-] minimum value of occupancy grid cell to treat at occluded .denoise_kernel
double [m] morphology window size for preprocessing raw occupancy grid .attention_lane_crop_curvature_threshold
double [m] curvature threshold for trimming curved part of the lane .attention_lane_crop_curvature_ds
double [m] discretization interval of centerline for lane curvature calculation .creep_during_peeking.enable
bool [-] flag to insert creep_velocity
while peeking to intersection occlusion stopline .creep_during_peeking.creep_velocity
double [m/s] the command velocity while peeking to intersection occlusion stopline .peeking_offset
double [m] the offset of the front of the vehicle into the attention area for peeking to occlusion .occlusion_required_clearance_distance
double [m] threshold for the distance to nearest occlusion cell from ego path .possible_object_bbox
[double] [m] minimum bounding box size for checking if occlusion polygon is small enough .ignore_parked_vehicle_speed_threshold
double [m/s] velocity threshold for checking parked vehicle .occlusion_detection_hold_time
double [s] hold time of occlusion detection .temporal_stop_time_before_peeking
double [s] temporal stop duration at default_stopline before starting peeking .temporal_stop_before_attention_area
bool [-] flag to temporarily stop at first_attention_stopline before peeking into attention_area .creep_velocity_without_traffic_light
double [m/s] creep velocity to occlusion_wo_tl_pass_judge_line .static_occlusion_with_traffic_light_timeout
double [s] the timeout duration for ignoring static occlusion at intersection with traffic light"},{"location":"planning/behavior_velocity_intersection_module/#trouble-shooting","title":"Trouble shooting","text":""},{"location":"planning/behavior_velocity_intersection_module/#intersection-module-stops-against-unrelated-vehicles","title":"Intersection module stops against unrelated vehicles","text":"In this case, first visualize /planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/intersection
topic and check the attention_area
polygon. Intersection module performs collision checking for vehicles running on this polygon, so if it extends to unintended lanes, it needs to have RightOfWay tag.
By lowering common.attention_area_length
you can check which lanes are conflicting with the intersection lane. Then set part of the conflicting lanes as the yield_lane.
The parameter collision_detection.collision_detection_hold_time
suppresses the chattering by keeping UNSAFE decision for this duration until SAFE decision is finally made. The role of this parameter is to account for unstable detection/tracking of objects. By increasing this value you can suppress the chattering. However it could elongate the stopping duration excessively.
If the chattering arises from the acceleration/deceleration of target vehicles, increase collision_detection.collision_detection.collision_end_margin_time
and/or collision_detection.collision_detection.collision_end_margin_time
.
If the intersection wall appears too fast, or ego tends to stop too conservatively for upcoming vehicles, lower the parameter collision_detection.collision_detection.collision_start_margin_time
. If it lasts too long after the target vehicle passed, then lower the parameter collision_detection.collision_detection.collision_end_margin_time
.
If the traffic light color changed from AMBER/RED to UNKNOWN, the intersection module works in the GREEN color mode. So collision and occlusion are likely to be detected again.
"},{"location":"planning/behavior_velocity_intersection_module/#occlusion-is-detected-overly","title":"Occlusion is detected overly","text":"You can check which areas are detected as occlusion by visualizing /planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/intersection/occlusion_polygons
.
If you do not want to detect / do want to ignore occlusion far from ego or lower the computational cost of occlusion detection, occlusion.occlusion_attention_area_length
should be set to lower value.
If you want to care the occlusion nearby ego more cautiously, set occlusion.occlusion_required_clearance_distance
to a larger value. Then ego will approach the occlusion_peeking_stopline more closely to assure more clear FOV.
occlusion.possible_object_bbox
is used for checking if detected occlusion area is small enough that no vehicles larger than this size can exist inside. By decreasing this size ego will ignore small occluded area.
Refer to the document of probabilistic_occupancy_grid_map for details. If occlusion tends to be detected at apparently free space, increase occlusion.free_space_max
to ignore them.
intersection_occlusion feature is not recommended for use in planning_simulator because the laserscan_based_occupancy_grid_map generates unnatural UNKNOWN cells in 2D manner:
Also many users do not set traffic light information frequently although it is very critical for intersection_occlusion (and in real traffic environment too).
For these reasons, occlusion.enable
is false by default.
On real vehicle or in end-to-end simulator like AWSIM the following pointcloud_based_occupancy_grid_map configuration is highly recommended:
scan_origin_frame: \"velodyne_top\"\n\ngrid_map_type: \"OccupancyGridMapProjectiveBlindSpot\"\nOccupancyGridMapProjectiveBlindSpot:\nprojection_dz_threshold: 0.01 # [m] for avoiding null division\nobstacle_separation_threshold: 1.0 # [m] fill the interval between obstacles with unknown for this length\n
You should set the top lidar link as the scan_origin_frame
. In the example it is velodyne_top
. The method OccupancyGridMapProjectiveBlindSpot
estimates the FOV by running projective ray-tracing from scan_origin
to obstacle or up to the ground and filling the cells on the \"shadow\" of the object as UNKNOWN.
WIP
"},{"location":"planning/behavior_velocity_intersection_module/#merge-from-private","title":"Merge From Private","text":""},{"location":"planning/behavior_velocity_intersection_module/#role_1","title":"Role","text":"When an ego enters a public road from a private road (e.g. a parking lot), it needs to face and stop before entering the public road to make sure it is safe.
This module is activated when there is an intersection at the private area from which the vehicle enters the public road. The stop line is generated both when the goal is in the intersection lane and when the path goes beyond the intersection lane. The basic behavior is the same as the intersection module, but ego must stop once at the stop line.
"},{"location":"planning/behavior_velocity_intersection_module/#activation-timing","title":"Activation Timing","text":"This module is activated when the following conditions are met:
private
tagmerge_from_private_road/stop_duration_sec
double [m] time margin to change state"},{"location":"planning/behavior_velocity_intersection_module/#known-issue","title":"Known Issue","text":"If ego go over the stop line for a certain distance, then it will not transit from STOP.
"},{"location":"planning/behavior_velocity_no_drivable_lane_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_no_drivable_lane_module/#no-drivable-lane","title":"No Drivable Lane","text":""},{"location":"planning/behavior_velocity_no_drivable_lane_module/#role","title":"Role","text":"This module plans the velocity of the related part of the path in case there is a no drivable lane referring to it.
A no drivable lane is a lanelet or more that are out of operation design domain (ODD), i.e., the vehicle must not drive autonomously in this lanelet. A lanelet can be no drivable (out of ODD) due to many reasons, either technical limitations of the SW and/or HW, business requirements, safety considerations, .... etc, or even a combination of those.
Some examples of No Drivable Lanes
A lanelet becomes invalid by adding a new tag under the relevant lanelet in the map file <tag k=\"no_drivable_lane\" v=\"yes\"/>
.
The target of this module is to stop the vehicle before entering the no drivable lane (with configurable stop margin) or keep the vehicle stationary if autonomous mode started inside a no drivable lane. Then ask the human driver to take the responsibility of the driving task (Takeover Request / Request to Intervene)
"},{"location":"planning/behavior_velocity_no_drivable_lane_module/#activation-timing","title":"Activation Timing","text":"This function is activated when the lane id of the target path has an no drivable lane label (i.e. the no_drivable_lane
attribute is yes
).
stop_margin
double [m] margin for ego vehicle to stop before speed_bump print_debug_info
bool whether debug info will be printed or not"},{"location":"planning/behavior_velocity_no_drivable_lane_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"INIT
stateAPPROACHING
toward a no drivable lane if:stop_margin
INSIDE_NO_DRIVABLE_LANE
if:stop_margin
STOPPED
when the vehicle is completely stoppedno_drivable_lane
This module plans to avoid stop in 'no stopping area`.
no_stopping_area
, then vehicle stops inside no_stopping_area
so this module makes stop velocity in front of no_stopping_area
This module allows developers to design vehicle velocity in no_stopping_area
module using specific rules. Once ego vehicle go through pass through point, ego vehicle does't insert stop velocity and does't change decision from GO. Also this module only considers dynamic object in order to avoid unnecessarily stop.
state_clear_time
double [s] time to clear stop state stuck_vehicle_vel_thr
double [m/s] vehicles below this velocity are considered as stuck vehicle. stop_margin
double [m] margin to stop line at no stopping area dead_line_margin
double [m] if ego pass this position GO stop_line_margin
double [m] margin to auto-gen stop line at no stopping area detection_area_length
double [m] length of searching polygon stuck_vehicle_front_margin
double [m] obstacle stop max distance"},{"location":"planning/behavior_velocity_no_stopping_area_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#occlusion-spot","title":"Occlusion Spot","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#role","title":"Role","text":"This module plans safe velocity to slow down before reaching collision point that hidden object is darting out from occlusion spot
where driver can't see clearly because of obstacles.
This module is activated if launch_occlusion_spot
becomes true. To make pedestrian first zone map tag is one of the TODOs.
This module is prototype implementation to care occlusion spot. To solve the excessive deceleration due to false positive of the perception, the logic of detection method can be selectable. This point has not been discussed in detail and needs to be improved.
TODOs are written in each Inner-workings / Algorithms (see the description below).
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#logics-working","title":"Logics Working","text":"There are several types of occlusions, such as \"occlusions generated by parked vehicles\" and \"occlusions caused by obstructions\". In situations such as driving on road with obstacles, where people jump out of the way frequently, all possible occlusion spots must be taken into account. This module considers all occlusion spots calculated from the occupancy grid, but it is not reasonable to take into account all occlusion spots for example, people jumping out from behind a guardrail, or behind cruising vehicle. Therefore currently detection area will be limited to to use predicted object information.
Note that this decision logic is still under development and needs to be improved.
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#detectionarea-polygon","title":"DetectionArea Polygon","text":"This module considers TTV from pedestrian velocity and lateral distance to occlusion spot. TTC is calculated from ego velocity and acceleration and longitudinal distance until collision point using motion velocity smoother. To compute fast this module only consider occlusion spot whose TTV is less than TTC and only consider area within \"max lateral distance\".
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#occlusion-spot-occupancy-grid-base","title":"Occlusion Spot Occupancy Grid Base","text":"This module considers any occlusion spot around ego path computed from the occupancy grid. Due to the computational cost occupancy grid is not high resolution and this will make occupancy grid noisy so this module add information of occupancy to occupancy grid map.
TODO: consider hight of obstacle point cloud to generate occupancy grid.
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#collision-free-judgement","title":"Collision Free Judgement","text":"obstacle that can run out from occlusion should have free space until intersection from ego vehicle
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#partition-lanelet","title":"Partition Lanelet","text":"By using lanelet information of \"guard_rail\", \"fence\", \"wall\" tag, it's possible to remove unwanted occlusion spot.
By using static object information, it is possible to make occupancy grid more accurate.
To make occupancy grid for planning is one of the TODOs.
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#possible-collision","title":"Possible Collision","text":"obstacle that can run out from occlusion is interrupted by moving vehicle.
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#about-safe-motion","title":"About safe motion","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#the-concept-of-safe-velocity-and-margin","title":"The Concept of Safe Velocity and Margin","text":"The safe slowdown velocity is calculated from the below parameters of ego emergency braking system and time to collision. Below calculation is included but change velocity dynamically is not recommended for planner.
time to collision of pedestrian[s] with these parameters we can briefly define safe motion before occlusion spot for ideal environment.
This module defines safe margin to consider ego distance to stop and collision path point geometrically. While ego is cruising from safe margin to collision path point, ego vehicle keeps the same velocity as occlusion spot safe velocity.
Note: This logic assumes high-precision vehicle speed tracking and margin for decel point might not be the best solution, and override with manual driver is considered if pedestrian really run out from occlusion spot.
TODO: consider one of the best choices
The maximum slowdown velocity is calculated from the below parameters of ego current velocity and acceleration with maximum slowdown jerk and maximum slowdown acceleration in order not to slowdown too much.
pedestrian_vel
double [m/s] maximum velocity assumed pedestrian coming out from occlusion point. pedestrian_radius
double [m] assumed pedestrian radius which fits in occlusion spot. Parameter Type Description use_object_info
bool [-] whether to reflect object info to occupancy grid map or not. use_partition_lanelet
bool [-] whether to use partition lanelet map data. Parameter /debug Type Description is_show_occlusion
bool [-] whether to show occlusion point markers.\u3000 is_show_cv_window
bool [-] whether to show open_cv debug window. is_show_processing_time
bool [-] whether to show processing time. Parameter /threshold Type Description detection_area_length
double [m] the length of path to consider occlusion spot stuck_vehicle_vel
double [m/s] velocity below this value is assumed to stop lateral_distance
double [m] maximum lateral distance to consider hidden collision Parameter /motion Type Description safety_ratio
double [-] safety ratio for jerk and acceleration max_slow_down_jerk
double [m/s^3] jerk for safe brake max_slow_down_accel
double [m/s^2] deceleration for safe brake non_effective_jerk
double [m/s^3] weak jerk for velocity planning. non_effective_acceleration
double [m/s^2] weak deceleration for velocity planning. min_allowed_velocity
double [m/s] minimum velocity allowed safe_margin
double [m] maximum error to stop with emergency braking system. Parameter /detection_area Type Description min_occlusion_spot_size
double [m] the length of path to consider occlusion spot slice_length
double [m] the distance of divided detection area max_lateral_distance
double [m] buffer around the ego path used to build the detection_area area. Parameter /grid Type Description free_space_max
double [-] maximum value of a free space cell in the occupancy grid occupied_min
double [-] buffer around the ego path used to build the detection_area area."},{"location":"planning/behavior_velocity_occlusion_spot_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#rough-overview-of-the-whole-process","title":"Rough overview of the whole process","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#detail-process-for-predicted-objectnot-updated","title":"Detail process for predicted object(not updated)","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#detail-process-for-occupancy-grid-base","title":"Detail process for Occupancy grid base","text":""},{"location":"planning/behavior_velocity_out_of_lane_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_out_of_lane_module/#out-of-lane","title":"Out of Lane","text":""},{"location":"planning/behavior_velocity_out_of_lane_module/#role","title":"Role","text":"out_of_lane
is the module that decelerates and stops to prevent the ego vehicle from entering another lane with incoming dynamic objects.
This module is activated if launch_out_of_lane
is set to true.
The algorithm is made of the following steps.
In this first step, the ego footprint is projected at each path point and are eventually inflated based on the extra_..._offset
parameters.
In the second step, the set of lanes to consider for overlaps is generated. This set is built by selecting all lanelets within some distance from the ego vehicle, and then removing non-relevant lanelets. The selection distance is chosen as the maximum between the slowdown.distance_threshold
and the stop.distance_threshold
.
A lanelet is deemed non-relevant if it meets one of the following conditions.
In the third step, overlaps between the ego path footprints and the other lanes are calculated. For each pair of other lane \\(l\\) and ego path footprint \\(f\\), we calculate the overlapping polygons using boost::geometry::intersection
. For each overlapping polygon found, if the distance inside the other lane \\(l\\) is above the overlap.minimum_distance
threshold, then the overlap is ignored. Otherwise, the arc length range (relative to the ego path) and corresponding points of the overlapping polygons are stored. Ultimately, for each other lane \\(l\\), overlapping ranges of successive overlaps are built with the following information:
In the fourth step, a decision to either slow down or stop before each overlapping range is taken based on the dynamic objects. The conditions for the decision depend on the value of the mode
parameter.
Whether it is decided to slow down or stop is determined by the distance between the ego vehicle and the start of the overlapping range (in arc length along the ego path). If this distance is bellow the actions.slowdown.threshold
, a velocity of actions.slowdown.velocity
will be used. If the distance is bellow the actions.stop.threshold
, a velocity of 0
m/s will be used.
With the mode
set to \"threshold\"
, a decision to stop or slow down before a range is made if an incoming dynamic object is estimated to reach the overlap within threshold.time_threshold
.
With the mode
set to \"ttc\"
, estimates for the times when ego and the dynamic objects reach the start and end of the overlapping range are calculated. This is then used to calculate the time to collision over the period where ego crosses the overlap. If the time to collision is predicted to go bellow the ttc.threshold
, the decision to stop or slow down is made.
With the mode
set to \"intervals\"
, the estimated times when ego and the dynamic objects reach the start and end points of the overlapping range are used to create time intervals. These intervals can be made shorter or longer using the intervals.ego_time_buffer
and intervals.objects_time_buffer
parameters. If the time interval of ego overlaps with the time interval of an object, the decision to stop or slow down is made.
To estimate the times when ego will reach an overlap, it is assumed that ego travels along its path at its current velocity or at half the velocity of the path points, whichever is higher.
"},{"location":"planning/behavior_velocity_out_of_lane_module/#dynamic-objects","title":"Dynamic objects","text":"Two methods are used to estimate the time when a dynamic objects with reach some point. If objects.use_predicted_paths
is set to true
, the predicted paths of the dynamic object are used if their confidence value is higher than the value set by the objects.predicted_path_min_confidence
parameter. Otherwise, the lanelet map is used to estimate the distance between the object and the point and the time is calculated assuming the object keeps its current velocity.
Finally, for each decision to stop or slow down before an overlapping range, a point is inserted in the path. For a decision taken for an overlapping range with a lane \(l\) starting at ego path point index \(i\), a point is inserted in the path between index \(i\) and \(i-1\) such that the ego footprint projected at the inserted point does not overlap \(l\). Such a point with no overlap must exist since, by definition of the overlapping range, we know that there is no overlap at \(i-1\).
If the point would cause a higher deceleration than allowed by the max_accel
parameter (node parameter), it is skipped.
Moreover, parameter action.distance_buffer
adds an extra distance between the ego footprint and the overlap when possible.
mode
string [-] mode used to consider a dynamic object. Candidates: threshold, intervals, ttc skip_if_already_overlapping
bool [-] if true, do not run this module when ego already overlaps another lane Parameter /threshold Type Description time_threshold
double [s] consider objects that will reach an overlap within this time Parameter /intervals Type Description ego_time_buffer
double [s] extend the ego time interval by this buffer objects_time_buffer
double [s] extend the time intervals of objects by this buffer Parameter /ttc Type Description threshold
double [s] consider objects with an estimated time to collision below this value while ego is on the overlap Parameter /objects Type Description minimum_velocity
double [m/s] ignore objects with a velocity lower than this value predicted_path_min_confidence
double [-] minimum confidence required for a predicted path to be considered use_predicted_paths
bool [-] if true, use the predicted paths to estimate future positions; if false, assume the object moves at constant velocity along all lanelets it currently is located in Parameter /overlap Type Description minimum_distance
double [m] minimum distance inside a lanelet for an overlap to be considered extra_length
double [m] extra arc length to add to the front and back of an overlap (used to calculate enter/exit times) Parameter /action Type Description skip_if_over_max_decel
bool [-] if true, do not take an action that would cause more deceleration than the maximum allowed distance_buffer
double [m] buffer distance to try to keep between the ego footprint and lane slowdown.distance_threshold
double [m] insert a slow down when closer than this distance from an overlap slowdown.velocity
double [m/s] slow down velocity stop.distance_threshold
double [m] insert a stop when closer than this distance from an overlap Parameter /ego Type Description extra_front_offset
double [m] extra front distance to add to the ego footprint extra_rear_offset
double [m] extra rear distance to add to the ego footprint extra_left_offset
double [m] extra left distance to add to the ego footprint extra_right_offset
double [m] extra right distance to add to the ego footprint"},{"location":"planning/behavior_velocity_planner/","title":"Behavior Velocity Planner","text":""},{"location":"planning/behavior_velocity_planner/#behavior-velocity-planner","title":"Behavior Velocity Planner","text":""},{"location":"planning/behavior_velocity_planner/#overview","title":"Overview","text":"behavior_velocity_planner
is a planner that adjusts velocity based on traffic rules. It loads modules as plugins. Please refer to the links listed below for details on each module.
Each module plans velocity based on the base_link (center of the rear-wheel axle) pose. For example, in order to stop at a stop line with the vehicle's front on the line, a module computes the stop position for base_link by shifting back by the base_link-to-front distance, and modifies the path velocity from that base_link position.
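As a minimal illustration of this convention (a simplified sketch, not the planner's actual code):
// Arc length along the path at which base_link must stop so that the vehicle
// front is exactly on the stop line (base_link_to_front is the distance from
// the rear-axle center to the front of the vehicle).
double calc_stop_arc_length(double stop_line_arc_length, double base_link_to_front)
{
  return stop_line_arc_length - base_link_to_front;
}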
~input/path_with_lane_id
autoware_auto_planning_msgs::msg::PathWithLaneId path with lane_id ~input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin vector map ~input/vehicle_odometry
nav_msgs::msg::Odometry vehicle velocity ~input/dynamic_objects
autoware_auto_perception_msgs::msg::PredictedObjects dynamic objects ~input/no_ground_pointcloud
sensor_msgs::msg::PointCloud2 obstacle pointcloud ~/input/compare_map_filtered_pointcloud
sensor_msgs::msg::PointCloud2 obstacle pointcloud filtered by compare map. Note that this is used only when the detection method of run out module is Points. ~input/traffic_signals
autoware_perception_msgs::msg::TrafficSignalArray traffic light states"},{"location":"planning/behavior_velocity_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/path
autoware_auto_planning_msgs::msg::Path path to be followed ~output/stop_reasons
tier4_planning_msgs::msg::StopReasonArray reasons that cause the vehicle to stop"},{"location":"planning/behavior_velocity_planner/#node-parameters","title":"Node parameters","text":"Parameter Type Description launch_modules
vector<string> module names to launch forward_path_length
double forward path length backward_path_length
double backward path length max_accel
double (to be a global parameter) max acceleration of the vehicle system_delay
double (to be a global parameter) delay time until output control command delay_response_time
double (to be a global parameter) delay time of the vehicle's response to control commands"},{"location":"planning/behavior_velocity_planner_common/","title":"Behavior Velocity Planner Common","text":""},{"location":"planning/behavior_velocity_planner_common/#behavior-velocity-planner-common","title":"Behavior Velocity Planner Common","text":"This package provides common functions as a library, which are used in the behavior_velocity_planner
node and modules.
run_out
is the module that decelerates and stops for dynamic obstacles such as pedestrians and bicycles.
This module is activated if launch_run_out
becomes true.
Calculate the expected target velocity for the ego vehicle path to calculate the time to collision with obstacles more precisely. The expected target velocity is calculated with the motion velocity smoother module, using the current velocity, the current acceleration, and the velocity limits directed by the map and the external API.
"},{"location":"planning/behavior_velocity_run_out_module/#extend-the-path","title":"Extend the path","text":"The path is extended by the length of base link to front to consider obstacles after the goal.
"},{"location":"planning/behavior_velocity_run_out_module/#trim-path-from-ego-position","title":"Trim path from ego position","text":"The path is trimmed from ego position to a certain distance to reduce calculation time. Trimmed distance is specified by parameter of detection_distance
.
This module can handle multiple types of obstacles by creating an abstracted dynamic obstacle data layer. Currently there are 3 types of detection methods (Object, ObjectWithoutPath, Points) to create the abstracted obstacle data.
"},{"location":"planning/behavior_velocity_run_out_module/#abstracted-dynamic-obstacle","title":"Abstracted dynamic obstacle","text":"Abstracted obstacle data has following information.
Name Type Description pose geometry_msgs::msg::Pose
pose of the obstacle classifications std::vector<autoware_auto_perception_msgs::msg::ObjectClassification>
classifications with probability shape autoware_auto_perception_msgs::msg::Shape
shape of the obstacle predicted_paths std::vector<DynamicObstacle::PredictedPath>
predicted paths with confidence. this data doesn't have time step because we use minimum and maximum velocity instead. min_velocity_mps float
minimum velocity of the obstacle. specified by parameter of dynamic_obstacle.min_vel_kmph
max_velocity_mps float
maximum velocity of the obstacle. specified by parameter of dynamic_obstacle.max_vel_kmph
Enter the maximum/minimum velocity of the object as a parameter, adding enough margin to the expected velocity. This parameter is used to create polygons for collision detection.
Future work: Determine the maximum/minimum velocity from the estimated velocity with covariance of the object
"},{"location":"planning/behavior_velocity_run_out_module/#3-types-of-detection-method","title":"3 types of detection method","text":"We have 3 types of detection method to meet different safety and availability requirements. The characteristics of them are shown in the table below. Method of Object
has high availability (less false positive) because it detects only objects whose predicted path is on the lane. However, sometimes it is not safe because perception may fail to detect obstacles or generate incorrect predicted paths. On the other hand, method of Points
has high safety (less false negative) because it uses pointcloud as input. Since points don't have a predicted path, the path that moves in the direction normal to the path of ego vehicle is considered to be the predicted path of abstracted dynamic obstacle data. However, without proper adjustment of filter of points, it may detect a lot of points and it will result in very low availability. Method of ObjectWithoutPath
has the characteristics of an intermediate of Object
and Points
.
This module can exclude the obstacles outside of partition such as guardrail, fence, and wall. We need lanelet map that has the information of partition to use this feature. By this feature, we can reduce unnecessary deceleration by obstacles that are unlikely to jump out to the lane. You can choose whether to use this feature by parameter of use_partition_lanelet
.
Along the ego vehicle path, determine the points where collision detection is to be performed for each detection_span
.
The travel times to each point are calculated from the expected target velocity.
For each point, collision detection is performed using the footprint polygon of the ego vehicle and the polygon of the predicted location of the obstacles. The predicted location of the obstacles is described as a rectangle or polygon whose range is calculated from the min velocity, the max velocity, and the ego vehicle's travel time to the point. If the input type of the dynamic obstacle is Points
, the obstacle shape is defined as a small cylinder.
Multiple points are detected as collision points because collision detection is calculated between two polygons. So we select the point that is on the same side as the obstacle and closest to the ego vehicle as the collision point.
"},{"location":"planning/behavior_velocity_run_out_module/#insert-velocity","title":"Insert velocity","text":""},{"location":"planning/behavior_velocity_run_out_module/#insert-velocity-to-decelerate-for-obstacles","title":"Insert velocity to decelerate for obstacles","text":"If the collision is detected, stop point is inserted on distance of base link to front + stop margin from the selected collision point. The base link to front means the distance between base_link (center of rear-wheel axis) and front of the car. Stop margin is determined by the parameter of stop_margin
.
If you select the method of Points
or ObjectWithoutPath
, ego sometimes keeps stopping in front of the obstacle. To avoid this problem, this feature has an option to approach the obstacle with a slow velocity after stopping. If the parameter approaching.enable
is set to true, ego will approach the obstacle after stopping for state.stop_time_thresh
seconds. The maximum approach velocity can be specified by the parameter approaching.limit_vel_kmph
. The decision to approach the obstacle is determined by a simple state transition, as shown in the following image.
The maximum slowdown velocity is calculated in order not to slow down too much. See the Occlusion Spot document for more details. You can choose whether to use this feature with the parameter slow_down_limit.enable
.
detection_method
string [-] candidate: Object, ObjectWithoutPath, Points use_partition_lanelet
bool [-] whether to use partition lanelet map data specify_decel_jerk
bool [-] whether to specify jerk when ego decelerates stop_margin
double [m] the vehicle decelerates to be able to stop with this margin passing_margin
double [m] the vehicle begins to accelerate if the vehicle's front in predicted position is ahead of the obstacle + this margin deceleration_jerk
double [m/s^3] ego decelerates with this jerk when stopping for obstacles detection_distance
double [m] ahead distance from ego to detect the obstacles detection_span
double [m] calculate collision with this span to reduce calculation time min_vel_ego_kmph
double [km/h] min velocity to calculate time to collision Parameter /detection_area Type Description margin_ahead
double [m] ahead margin for detection area polygon margin_behind
double [m] behind margin for detection area polygon Parameter /dynamic_obstacle Type Description use_mandatory_area
double [-] whether to use mandatory detection area assume_fixed_velocity.enable
double [-] If enabled, the obstacle's velocity is assumed to be within the minimum and maximum velocity values specified below assume_fixed_velocity.min_vel_kmph
double [km/h] minimum velocity for dynamic obstacles assume_fixed_velocity.max_vel_kmph
double [km/h] maximum velocity for dynamic obstacles diameter
double [m] diameter of obstacles. used for creating dynamic obstacles from points height
double [m] height of obstacles. used for creating dynamic obstacles from points max_prediction_time
double [sec] create predicted path until this time time_step
double [sec] time step for each path step. used for creating dynamic obstacles from points or objects without path points_interval
double [m] divide obstacle points into groups with this interval, and detect only lateral nearest point. used only for Points method Parameter /approaching Type Description enable
bool [-] whether to enable approaching after stopping margin
double [m] distance on how close ego approaches the obstacle limit_vel_kmph
double [km/h] limit velocity for approaching after stopping Parameter /state Type Description stop_thresh
double [m/s] threshold to decide if ego is stopping stop_time_thresh
double [sec] threshold for stopping time to transit to approaching state disable_approach_dist
double [m] end the approaching state if distance to the obstacle is longer than this value keep_approach_duration
double [sec] keep approach state for this duration to avoid chattering of state transition Parameter /slow_down_limit Type Description enable
bool [-] whether to enable to limit velocity with max jerk and acc max_jerk
double [m/s^3] minimum jerk deceleration for safe brake. max_acc
double [m/s^2] minimum accel deceleration for safe brake. Parameter /ignore_momentary_detection Type Description enable
bool [-] whether to ignore momentary detection time_threshold
double [sec] ignores detections that persist for less than this duration"},{"location":"planning/behavior_velocity_run_out_module/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"This module plans the velocity of the related part of the path when there is a speed bump regulatory element referring to it.
"},{"location":"planning/behavior_velocity_speed_bump_module/#activation-timing","title":"Activation Timing","text":"The manager launch speed bump scene module when there is speed bump regulatory element referring to the reference path.
"},{"location":"planning/behavior_velocity_speed_bump_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Descriptionslow_start_margin
double [m] margin for ego vehicle to slow down before speed_bump slow_end_margin
double [m] margin for ego vehicle to accelerate after speed_bump print_debug_info
bool whether debug info will be printed or not"},{"location":"planning/behavior_velocity_speed_bump_module/#speed-calculation","title":"Speed Calculation","text":"min_height
double [m] minimum height assumption of the speed bump max_height
double [m] maximum height assumption of the speed bump min_speed
double [m/s] minimum speed assumption of slow down speed max_speed
double [m/s] maximum speed assumption of slow down speed"},{"location":"planning/behavior_velocity_speed_bump_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"slow_down_speed
wrt to speed_bump_height
specified in regulatory element or read slow_down_speed
tag from speed bump annotation if availableNote: If in speed bump annotation slow_down_speed
tag is used then calculating the speed wrt the speed bump height will be ignored. In such case, specified slow_down_speed
value in [kph] is being used.
slow_start_point
& slow_end_point
wrt the intersection points and insert them to pathslow_start_point
or slow_end_point
can not be inserted with given/calculated offset values check if any path point can be virtually assigned as slow_start_point
or slow_end_point
slow_down_speed
to the path points between slow_start_point
or slow_end_point
This module plans velocity so that the vehicle can stop right before stop lines and restart driving after stopped.
"},{"location":"planning/behavior_velocity_stop_line_module/#activation-timing","title":"Activation Timing","text":"This module is activated when there is a stop line in a target lane.
"},{"location":"planning/behavior_velocity_stop_line_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Descriptionstop_margin
double a margin that the vehicle tries to stop before stop_line stop_duration_sec
double [s] time parameter for the ego vehicle to stop in front of a stop line hold_stop_margin_distance
double [m] parameter for restart prevention (See Algorithm section). Also, when the ego vehicle is within this distance from a stop line, the ego state becomes STOPPED from APPROACHING use_initialization_stop_state
bool A flag to determine whether to return to the approaching state when the vehicle moves away from a stop line. show_stop_line_collision_check
bool A flag to determine whether to show the debug information of collision check with a stop line"},{"location":"planning/behavior_velocity_stop_line_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"stop_duration_sec
seconds.This algorithm is based on segment
. segment
consists of two node points. It's useful for removing boundary conditions because if segment(i)
exists we can assume node(i)
and node(i+1)
exist.
First, this algorithm finds a collision between reference path and stop line. Then, we can get collision segment
and collision point
.
Next, based on collision point
, it finds offset segment
by iterating backward points up to a specific offset length. The offset length is stop_margin
(parameter) + base_link to front
(to adjust head pose to stop line). Then, we can get offset segment
and offset from segment start
.
After that, we can calculate a offset point from offset segment
and offset
. This will be stop_pose
.
If it needs X meters (e.g. 0.5 meters) to stop once the vehicle starts moving due to the poor vehicle control performance, the vehicle goes over the stopping position that should be strictly observed when the vehicle starts to moving in order to approach the near stop point (e.g. 0.3 meters away).
This module has parameter hold_stop_margin_distance
in order to prevent from these redundant restart. If the vehicle is stopped within hold_stop_margin_distance
meters from stop point of the module (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped for the module's stop point and plans to keep stopping current position even if the vehicle is stopped due to other factors.
parameters
outside the hold_stop_margin_distance
inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_template_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_template_module/#template","title":"Template","text":"
A template for behavior velocity modules based on the behavior_velocity_speed_bump_module.
"},{"location":"planning/behavior_velocity_template_module/#autoware-behavior-velocity-module-template","title":"Autoware Behavior Velocity Module Template","text":""},{"location":"planning/behavior_velocity_template_module/#scene","title":"Scene
","text":""},{"location":"planning/behavior_velocity_template_module/#templatemodule-class","title":"TemplateModule
Class","text":"The TemplateModule
class serves as a foundation for creating a scene module within the Autoware behavior velocity planner. It defines the core methods and functionality needed for the module's behavior. You should replace the placeholder code with actual implementations tailored to your specific behavior velocity module.
TemplateModule
takes the essential parameters to create a module: const int64_t module_id
, const rclcpp::Logger & logger
, and const rclcpp::Clock::SharedPtr clock
. These parameters are supplied by the TemplateModuleManager
when registering a new module. Other parameters can be added to the constructor, if required by your specific module implementation.modifyPathVelocity
Method","text":"TemplateModule
class, is expected to modify the velocity of the input path based on certain conditions. In the provided code, it logs an informational message once when the template module is executing.createDebugMarkerArray
Method","text":"TemplateModule
class, is responsible for creating a visualization of debug markers and returning them as a visualization_msgs::msg::MarkerArray
. In the provided code, it returns an empty MarkerArray
.createVirtualWalls
Method","text":"createVirtualWalls
method creates virtual walls for the scene and returns them as motion_utils::VirtualWalls
. In the provided code, it returns an empty VirtualWalls
object.Manager
","text":"The managing of your modules is defined in manager.hpp and manager.cpp. The managing is handled by two classes:
TemplateModuleManager
class defines the core logic for managing and launching the behavior_velocity_template scenes (defined in behavior_velocity_template_module/src/scene.cpp/hpp). It inherits essential manager attributes from its parent class SceneModuleManagerInterface
.TemplateModulePlugin
class provides a way to integrate the TemplateModuleManager
into the logic of the Behavior Velocity Planner.TemplateModuleManager
Class","text":""},{"location":"planning/behavior_velocity_template_module/#constructor-templatemodulemanager","title":"Constructor TemplateModuleManager
","text":"TemplateModuleManager
class, and it takes an rclcpp::Node
reference as a parameter.dummy_parameter
to 0.0.getModuleName()
Method","text":"SceneModuleManagerInterface
class.launchNewModules()
Method","text":"autoware_auto_planning_msgs::msg::PathWithLaneId
.TemplateModule
class.module_id
to 0 and checks if a module with the same ID is already registered. If not, it registers a new TemplateModule
with the module ID. Note that each module managed by the TemplateModuleManager
should have a unique ID. The template code registers a single module, so the module_id
is set as 0 for simplicity.getModuleExpiredFunction()
Method","text":"autoware_auto_planning_msgs::msg::PathWithLaneId
.std::function<bool(const std::shared_ptr<SceneModuleInterface>&)>
. This function is used by the behavior velocity planner to determine whether a particular module has expired or not based on the given path.Please note that the specific functionality of the methods launchNewModules()
and getModuleExpiredFunction()
would depend on the details of your behavior velocity modules and how they are intended to be managed within the Autoware system. You would need to implement these methods according to your module's requirements.
TemplateModulePlugin
Class","text":""},{"location":"planning/behavior_velocity_template_module/#templatemoduleplugin-class_1","title":"TemplateModulePlugin
Class","text":"PluginWrapper<TemplateModuleManager>
. It essentially wraps your TemplateModuleManager
class within a plugin, which can be loaded and managed dynamically.Example Usage
","text":"In the following example, we take each point of the path, and multiply it by 2. Essentially duplicating the speed. Note that the velocity smoother will further modify the path speed after all the behavior velocity modules are executed.
bool TemplateModule::modifyPathVelocity(\n[[maybe_unused]] PathWithLaneId * path, [[maybe_unused]] StopReason * stop_reason)\n{\nfor (auto & p : path->points) {\np.point.longitudinal_velocity_mps *= 2.0;\n}\n\nreturn false;\n}\n
"},{"location":"planning/behavior_velocity_traffic_light_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_traffic_light_module/#traffic-light","title":"Traffic Light","text":""},{"location":"planning/behavior_velocity_traffic_light_module/#role","title":"Role","text":"Judgement whether a vehicle can go into an intersection or not by traffic light status, and planning a velocity of the stop if necessary. This module is designed for rule-based velocity decision that is easy for developers to design its behavior. It generates proper velocity for traffic light scene.
"},{"location":"planning/behavior_velocity_traffic_light_module/#limitations","title":"Limitations","text":"This module allows developers to design STOP/GO in traffic light module using specific rules. Due to the property of rule-based planning, the algorithm is greatly depends on object detection and perception accuracy considering traffic light. Also, this module only handles STOP/Go at traffic light scene, so rushing or quick decision according to traffic condition is future work.
"},{"location":"planning/behavior_velocity_traffic_light_module/#activation-timing","title":"Activation Timing","text":"This module is activated when there is traffic light in ego lane.
"},{"location":"planning/behavior_velocity_traffic_light_module/#algorithm","title":"Algorithm","text":"Obtains a traffic light mapped to the route and a stop line correspond to the traffic light from a map information.
Uses the highest reliability one of the traffic light recognition result and if the color of that was not green or corresponding arrow signal, generates a stop point.
stop_time_hysteresis
, it treats as a signal to pass. This feature is to prevent chattering.When vehicle current velocity is
When it to be judged that vehicle can\u2019t stop before stop line, autoware chooses one of the following behaviors
yellow lamp line
It\u2019s called \u201cyellow lamp line\u201d which shows the distance traveled by the vehicle during yellow lamp.
dilemma zone
It\u2019s called \u201cdilemma zone\u201d which satisfies following conditions:
vehicle can\u2019t stop under deceleration and jerk limit.(left side of the pass judge curve)
\u21d2emergency stop(relax deceleration and jerk limitation in order to observe the traffic regulation)
optional zone
It\u2019s called \u201coptional zone\u201d which satisfies following conditions:
vehicle can stop under deceleration and jerk limit.(right side of the pass judge curve)
\u21d2 stop(autoware selects the safety choice)
stop_margin
double [m] margin before stop point tl_state_timeout
double [s] time out for detected traffic light result. stop_time_hysteresis
double [s] time threshold to decide stop planning for chattering prevention yellow_lamp_period
double [s] time for yellow lamp enable_pass_judge
bool [-] whether to use pass judge"},{"location":"planning/behavior_velocity_traffic_light_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_traffic_light_module/#known-limits","title":"Known Limits","text":"Autonomous vehicles have to cooperate with the infrastructures such as:
The following items are example cases:
Traffic control by traffic lights with V2X support
Intersection coordination of multiple vehicles by FMS.
It's possible to make each function individually, however, the use cases can be generalized with these three elements.
start
: Start a cooperation procedure after the vehicle enters a certain zone.stop
: Stop at a defined stop line according to the status received from infrastructures.end
: Finalize the cooperation procedure after the vehicle reaches the exit zone. This should be done within the range of stable communication.This module sends/receives status from infrastructures and plans the velocity of the cooperation result.
"},{"location":"planning/behavior_velocity_virtual_traffic_light_module/#system-configuration-diagram","title":"System Configuration Diagram","text":"Planner and each infrastructure communicate with each other using common abstracted messages.
FMS: Intersection coordination when multiple vehicles are in operation and the relevant lane is occupied
Support different communication methods for different infrastructures
Have different meta-information for each geographic location
FMS: Fleet Management System
"},{"location":"planning/behavior_velocity_virtual_traffic_light_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Descriptionmax_delay_sec
double [s] maximum allowed delay for command near_line_distance
double [m] threshold distance to stop line to check ego stop. dead_line_margin
double [m] threshold distance that this module continue to insert stop line. hold_stop_margin_distance
double [m] parameter for restart prevention (See following section) check_timeout_after_stop_line
bool [-] check timeout to stop when linkage is disconnected"},{"location":"planning/behavior_velocity_virtual_traffic_light_module/#restart-prevention","title":"Restart prevention","text":"If it needs X meters (e.g. 0.5 meters) to stop once the vehicle starts moving due to the poor vehicle control performance, the vehicle goes over the stopping position that should be strictly observed when the vehicle starts to moving in order to approach the near stop point (e.g. 0.3 meters away).
This module has parameter hold_stop_margin_distance
in order to prevent from these redundant restart. If the vehicle is stopped within hold_stop_margin_distance
meters from stop point of the module (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped for the module's stop point and plans to keep stopping current position even if the vehicle is stopped due to other factors.
parameters
outside the hold_stop_margin_distance
inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_virtual_traffic_light_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_virtual_traffic_light_module/#map-format","title":"Map Format","text":"
This module decide to stop before the ego will cross the walkway including crosswalk to enter or exit the private area.
"},{"location":"planning/costmap_generator/","title":"costmap_generator","text":""},{"location":"planning/costmap_generator/#costmap_generator","title":"costmap_generator","text":""},{"location":"planning/costmap_generator/#costmap_generator_node","title":"costmap_generator_node","text":"This node reads PointCloud
and/or DynamicObjectArray
and creates an OccupancyGrid
and GridMap
. VectorMap(Lanelet2)
is optional.
~input/objects
autoware_auto_perception_msgs::PredictedObjects predicted objects, for obstacles areas ~input/points_no_ground
sensor_msgs::PointCloud2 ground-removed points, for obstacle areas which can't be detected as objects ~input/vector_map
autoware_auto_mapping_msgs::HADMapBin vector map, for drivable areas ~input/scenario
tier4_planning_msgs::Scenario scenarios to be activated, for node activation"},{"location":"planning/costmap_generator/#output-topics","title":"Output topics","text":"Name Type Description ~output/grid_map
grid_map_msgs::GridMap costmap as GridMap, values are from 0.0 to 1.0 ~output/occupancy_grid
nav_msgs::OccupancyGrid costmap as OccupancyGrid, values are from 0 to 100"},{"location":"planning/costmap_generator/#output-tfs","title":"Output TFs","text":"None
"},{"location":"planning/costmap_generator/#how-to-launch","title":"How to launch","text":"Execute the command source install/setup.bash
to setup the environment
Run ros2 launch costmap_generator costmap_generator.launch.xml
to launch the node
update_rate
double timer's update rate activate_by_scenario
bool if true, activate by scenario = parking. Otherwise, activate if vehicle is inside parking lot. use_objects
bool whether using ~input/objects
or not use_points
bool whether using ~input/points_no_ground
or not use_wayarea
bool whether using wayarea
from ~input/vector_map
or not use_parkinglot
bool whether using parkinglot
from ~input/vector_map
or not costmap_frame
string created costmap's coordinate vehicle_frame
string vehicle's coordinate map_frame
string map's coordinate grid_min_value
double minimum cost for gridmap grid_max_value
double maximum cost for gridmap grid_resolution
double resolution for gridmap grid_length_x
int size of gridmap for x direction grid_length_y
int size of gridmap for y direction grid_position_x
int offset from coordinate in x direction grid_position_y
int offset from coordinate in y direction maximum_lidar_height_thres
double maximum height threshold for pointcloud data minimum_lidar_height_thres
double minimum height threshold for pointcloud data expand_rectangle_size
double expand object's rectangle with this value size_of_expansion_kernel
int kernel size for blurring effect on object's costmap"},{"location":"planning/costmap_generator/#flowchart","title":"Flowchart","text":""},{"location":"planning/external_velocity_limit_selector/","title":"External Velocity Limit Selector","text":""},{"location":"planning/external_velocity_limit_selector/#external-velocity-limit-selector","title":"External Velocity Limit Selector","text":""},{"location":"planning/external_velocity_limit_selector/#purpose","title":"Purpose","text":"The external_velocity_limit_selector_node
is a node that keeps consistency of external velocity limits. This module subscribes
VelocityLimit.msg contains not only max velocity but also information about the acceleration/jerk constraints on deceleration. The external_velocity_limit_selector_node
integrates the lowest velocity limit and the highest jerk constraint to calculate the hardest velocity limit that protects all the deceleration points and max velocities sent by API and Autoware internal modules.
WIP
"},{"location":"planning/external_velocity_limit_selector/#inputs","title":"Inputs","text":"Name Type Description~input/velocity_limit_from_api
tier4_planning_msgs::VelocityLimit velocity limit from api ~input/velocity_limit_from_internal
tier4_planning_msgs::VelocityLimit velocity limit from autoware internal modules ~input/velocity_limit_clear_command_from_internal
tier4_planning_msgs::VelocityLimitClearCommand velocity limit clear command"},{"location":"planning/external_velocity_limit_selector/#outputs","title":"Outputs","text":"Name Type Description ~output/max_velocity
tier4_planning_msgs::VelocityLimit current information of the hardest velocity limit"},{"location":"planning/external_velocity_limit_selector/#parameters","title":"Parameters","text":"Parameter Type Description max_velocity
double default max velocity [m/s] normal.min_acc
double minimum acceleration [m/ss] normal.max_acc
double maximum acceleration [m/ss] normal.min_jerk
double minimum jerk [m/sss] normal.max_jerk
double maximum jerk [m/sss] limit.min_acc
double minimum acceleration to be observed [m/ss] limit.max_acc
double maximum acceleration to be observed [m/ss] limit.min_jerk
double minimum jerk to be observed [m/sss] limit.max_jerk
double maximum jerk to be observed [m/sss]"},{"location":"planning/external_velocity_limit_selector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/external_velocity_limit_selector/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/external_velocity_limit_selector/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/external_velocity_limit_selector/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"planning/external_velocity_limit_selector/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/freespace_planner/","title":"The `freespace_planner`","text":""},{"location":"planning/freespace_planner/#the-freespace_planner","title":"The freespace_planner
","text":""},{"location":"planning/freespace_planner/#freespace_planner_node","title":"freespace_planner_node","text":"freespace_planner_node
is a global path planner node that plans trajectory in the space having static/dynamic obstacles. This node is currently based on Hybrid A* search algorithm in freespace_planning_algorithms
package. Other algorithms such as rrt* will be also added and selectable in the future.
Note Due to the constraint of trajectory following, the output trajectory will be split to include only the single direction path. In other words, the output trajectory doesn't include both forward and backward trajectories at once.
"},{"location":"planning/freespace_planner/#input-topics","title":"Input topics","text":"Name Type Description~input/route
autoware_auto_planning_msgs::Route route and goal pose ~input/occupancy_grid
nav_msgs::OccupancyGrid costmap, for drivable areas ~input/odometry
nav_msgs::Odometry vehicle velocity, for checking whether vehicle is stopped ~input/scenario
tier4_planning_msgs::Scenario scenarios to be activated, for node activation"},{"location":"planning/freespace_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/trajectory
autoware_auto_planning_msgs::Trajectory trajectory to be followed is_completed
bool (implemented as rosparam) whether all split trajectory are published"},{"location":"planning/freespace_planner/#output-tfs","title":"Output TFs","text":"None
"},{"location":"planning/freespace_planner/#how-to-launch","title":"How to launch","text":"freespace_planner.launch
or add args when executing roslaunch
roslaunch freespace_planner freespace_planner.launch
planning_algorithms
string algorithms used in the node vehicle_shape_margin_m
float collision margin in planning algorithm update_rate
double timer's update rate waypoints_velocity
double velocity in output trajectory (currently, only constant velocity is supported) th_arrived_distance_m
double threshold distance to check if vehicle has arrived at the trajectory's endpoint th_stopped_time_sec
double threshold time to check if vehicle is stopped th_stopped_velocity_mps
double threshold velocity to check if vehicle is stopped th_course_out_distance_m
double threshold distance to check if vehicle is out of course vehicle_shape_margin_m
double vehicle margin replan_when_obstacle_found
bool whether replanning when obstacle has found on the trajectory replan_when_course_out
bool whether replanning when vehicle is out of course"},{"location":"planning/freespace_planner/#planner-common-parameters","title":"Planner common parameters","text":"Parameter Type Description time_limit
double time limit of planning minimum_turning_radius
double minimum turning radius of robot maximum_turning_radius
double maximum turning radius of robot theta_size
double the number of angle's discretization lateral_goal_range
double goal range of lateral position longitudinal_goal_range
double goal range of longitudinal position angle_goal_range
double goal range of angle curve_weight
double additional cost factor for curve actions reverse_weight
double additional cost factor for reverse actions obstacle_threshold
double threshold for regarding a certain grid as obstacle"},{"location":"planning/freespace_planner/#a-search-parameters","title":"A* search parameters","text":"Parameter Type Description only_behind_solutions
bool whether restricting the solutions to be behind the goal use_back
bool whether using backward trajectory distance_heuristic_weight
double heuristic weight for estimating node's cost"},{"location":"planning/freespace_planner/#rrt-search-parameters","title":"RRT* search parameters","text":"Parameter Type Description max planning time
double maximum planning time [msec] (used only when enable_update
is set true
) enable_update
bool whether update after feasible solution found until max_planning time
elapse use_informed_sampling
bool Use informed RRT* (of Gammell et al.) neighbor_radius
double neighbor radius of RRT* algorithm margin
double safety margin ensured in path's collision checking in RRT* algorithm"},{"location":"planning/freespace_planner/#flowchart","title":"Flowchart","text":""},{"location":"planning/freespace_planning_algorithms/","title":"freespace planning algorithms","text":""},{"location":"planning/freespace_planning_algorithms/#freespace-planning-algorithms","title":"freespace planning algorithms","text":""},{"location":"planning/freespace_planning_algorithms/#role","title":"Role","text":"This package is for development of path planning algorithms in free space.
"},{"location":"planning/freespace_planning_algorithms/#implemented-algorithms","title":"Implemented algorithms","text":"Please see rrtstar.md for a note on the implementation for informed-RRT*.
NOTE: As for RRT*, one can choose whether update after feasible solution found in RRT*. If not doing so, the algorithm is the almost (but exactly because of rewiring procedure) same as vanilla RRT. If you choose update, then you have option if the sampling after feasible solution found is \"informed\". If set true, then the algorithm is equivalent to informed RRT\\* of Gammell et al. 2014
.
There is a trade-off between algorithm speed and resulting solution quality. When we sort the algorithms by the spectrum of (high quality solution/ slow) -> (low quality solution / fast) it would be A* -> informed RRT* -> RRT. Note that in almost all case informed RRT* is better than RRT* for solution quality given the same computational time budget. So, RRT* is omitted in the comparison.
Some selection criteria would be:
AbstractPlanningAlgorithm
class. If necessary, please overwrite the virtual functions.nav_msgs::OccupancyGrid
-typed costmap. Thus, AbstractPlanningAlgorithm
class mainly implements the collision checking using the costmap, grid-based indexing, and coordinate transformation related to costmap.PlannerCommonParam
-typed and algorithm-specific- type structs as inputs of the constructor. For example, AstarSearch
class's constructor takes both PlannerCommonParam
and AstarParam
.Building the package with ros-test and run tests:
colcon build --packages-select freespace_planning_algorithms\ncolcon test --packages-select freespace_planning_algorithms\n
Inside the test, simulation results are stored in /tmp/fpalgos-{algorithm_type}-case{scenario_number}
as a rosbag. Loading these resulting files, by using test/debug_plot.py, one can create plots visualizing the path and obstacles as shown in the figures below. The created figures are then again saved in /tmp
.
The black cells, green box, and red box, respectively, indicate obstacles, start configuration, and goal configuration. The sequence of the blue boxes indicate the solution path.
"},{"location":"planning/freespace_planning_algorithms/#license-notice","title":"License notice","text":"Files src/reeds_shepp.cpp
and include/astar_search/reeds_shepp.h
are fetched from pyReedsShepp. Note that the implementation in pyReedsShepp
is also heavily based on the code in ompl. Both pyReedsShepp
and ompl
are distributed under 3-clause BSD license.
Let us define \\(f(x)\\) as minimum cost of the path when path is constrained to pass through \\(x\\) (so path will be \\(x_{\\mathrm{start}} \\to \\mathrm{x} \\to \\mathrm{x_{\\mathrm{goal}}}\\)). Also, let us define \\(c_{\\mathrm{best}}\\) as the current minimum cost of the feasible paths. Let us define a set $ X(f) = \\left{ x \\in X | f(x) < c*{\\mathrm{best}} \\right} $. If we could sample a new point from \\(X_f\\) instead of \\(X\\) as in vanilla RRT*, chance that \\(c*{\\mathrm{best}}\\) is updated is increased, thus the convergence rate is improved.
In most case, \\(f(x)\\) is unknown, thus it is straightforward to approximate the function \\(f\\) by a heuristic function \\(\\hat{f}\\). A heuristic function is admissible if \\(\\forall x \\in X, \\hat{f}(x) < f(x)\\), which is sufficient condition of conversion to optimal path. The good heuristic function \\(\\hat{f}\\) has two properties: 1) it is an admissible tight lower bound of \\(f\\) and 2) sampling from \\(X(\\hat{f})\\) is easy.
According to Gammell et al [1], a good heuristic function when path is always straight is \\(\\hat{f}(x) = ||x_{\\mathrm{start}} - x|| + ||x - x_{\\mathrm{goal}}||\\). If we don't assume any obstacle information the heuristic is tightest. Also, \\(X(\\hat{f})\\) is hyper-ellipsoid, and hence sampling from it can be done analytically.
"},{"location":"planning/freespace_planning_algorithms/rrtstar/#modification-to-fit-reeds-sheep-path-case","title":"Modification to fit reeds-sheep path case","text":"In the vehicle case, state is \\(x = (x_{1}, x_{2}, \\theta)\\). Unlike normal informed-RRT* where we can connect path by a straight line, here we connect the vehicle path by a reeds-sheep path. So, we need some modification of the original algorithm a bit. To this end, one might first consider a heuristic function \\(\\hat{f}_{\\mathrm{RS}}(x) = \\mathrm{RS}(x_{\\mathrm{start}}, x) + \\mathrm{RS}(x, x_{\\mathrm{goal}}) < f(x)\\) where \\(\\mathrm{RS}\\) computes reeds-sheep distance. Though it is good in the sense of tightness, however, sampling from \\(X(\\hat{f}_{RS})\\) is really difficult. Therefore, we use \\(\\hat{f}_{euc} = ||\\mathrm{pos}(x_{\\mathrm{start}}) - \\mathrm{pos}(x)|| + ||\\mathrm{pos}(x)- \\mathrm{pos}(x_{\\mathrm{goal}})||\\), which is admissible because \\(\\forall x \\in X, \\hat{f}_{euc}(x) < \\hat{f}_{\\mathrm{RS}}(x) < f(x)\\). Here, \\(\\mathrm{pos}\\) function returns position \\((x_{1}, x_{2})\\) of the vehicle.
Sampling from \\(X(\\hat{f}_{\\mathrm{euc}})\\) is easy because \\(X(\\hat{f}_{\\mathrm{euc}}) = \\mathrm{Ellipse} \\times (-\\pi, \\pi]\\). Here \\(\\mathrm{Ellipse}\\)'s focal points are \\(x_{\\mathrm{start}}\\) and \\(x_{\\mathrm{goal}}\\) and conjugate diameters is $\\sqrt{c^{2}{\\mathrm{best}} - ||\\mathrm{pos}(x}) - \\mathrm{pos}(x_{\\mathrm{goal}}))|| } $ (similar to normal informed-rrtstar's ellipsoid). Please notice that \\(\\theta\\) can be arbitrary because \\(\\hat{f}_{\\mathrm{euc}}\\) is independent of \\(\\theta\\).
[1] Gammell et al., \"Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic.\" IROS (2014)
"},{"location":"planning/mission_planner/","title":"Mission Planner","text":""},{"location":"planning/mission_planner/#mission-planner","title":"Mission Planner","text":""},{"location":"planning/mission_planner/#purpose","title":"Purpose","text":"Mission Planner
calculates a route that navigates from the current ego pose to the goal pose following the given check points. The route is made of a sequence of lanes on a static map. Dynamic objects (e.g. pedestrians and other vehicles) and dynamic map information (e.g. road construction which blocks some lanes) are not considered during route planning. Therefore, the output topic is only published when the goal pose or check points are given and will be latched until the new goal pose or check points are given.
The core implementation does not depend on a map format. In current Autoware.universe, only Lanelet2 map format is supported.
"},{"location":"planning/mission_planner/#interfaces","title":"Interfaces","text":""},{"location":"planning/mission_planner/#parameters","title":"Parameters","text":"Name Type Descriptionmap_frame
string The frame name for map arrival_check_angle_deg
double Angle threshold for goal check arrival_check_distance
double Distance threshold for goal check arrival_check_duration
double Duration threshold for goal check goal_angle_threshold
double Max goal pose angle for goal approve enable_correct_goal_pose
bool Enabling correction of goal pose according to the closest lanelet orientation reroute_time_threshold
double If the time to the rerouting point at the current velocity is greater than this threshold, rerouting is possible minimum_reroute_length
double Minimum Length for publishing a new route consider_no_drivable_lanes
bool This flag is for considering no_drivable_lanes in planning or not."},{"location":"planning/mission_planner/#services","title":"Services","text":"Name Type Description /planning/mission_planning/clear_route
autoware_adapi_v1_msgs/srv/ClearRoute route clear request /planning/mission_planning/set_route_points
autoware_adapi_v1_msgs/srv/SetRoutePoints route request with pose waypoints. Assumed the vehicle is stopped. /planning/mission_planning/set_route
autoware_adapi_v1_msgs/srv/SetRoute route request with lanelet waypoints. Assumed the vehicle is stopped. /planning/mission_planning/change_route_points
autoware_adapi_v1_msgs/srv/SetRoutePoints route change request with pose waypoints. This can be called when the vehicle is moving. /planning/mission_planning/change_route
autoware_adapi_v1_msgs/srv/SetRoute route change request with lanelet waypoints. This can be called when the vehicle is moving. ~/srv/set_mrm_route
autoware_adapi_v1_msgs/srv/SetRoutePoints set emergency route. This can be called when the vehicle is moving. ~/srv/clear_mrm_route
std_srvs/srv/Trigger clear emergency route."},{"location":"planning/mission_planner/#subscriptions","title":"Subscriptions","text":"Name Type Description input/vector_map
autoware_auto_mapping_msgs/HADMapBin vector map of Lanelet2 input/modified_goal
geometry_msgs/PoseWithUuidStamped modified goal pose"},{"location":"planning/mission_planner/#publications","title":"Publications","text":"Name Type Description /planning/mission_planning/route_state
autoware_adapi_v1_msgs/msg/RouteState route state /planning/mission_planning/route
autoware_planning_msgs/LaneletRoute route debug/route_marker
visualization_msgs/msg/MarkerArray route marker for debug debug/goal_footprint
visualization_msgs/msg/MarkerArray goal footprint for debug"},{"location":"planning/mission_planner/#route-section","title":"Route section","text":"Route section, whose type is autoware_planning_msgs/LaneletSegment
, is a \"slice\" of a road that bundles lane changeable lanes. Note that the most atomic unit of route is autoware_auto_mapping_msgs/LaneletPrimitive
, which has the unique id of a lane in a vector map and its type. Therefore, route message does not contain geometric information about the lane since we did not want to have planning module\u2019s message to have dependency on map data structure.
The ROS message of route section contains following three elements for each route section.
preferred_primitive
: Preferred lane to follow towards the goal.primitives
: All neighbor lanes in the same direction including the preferred lane.The mission planner has control mechanism to validate the given goal pose and create a route. If goal pose angle between goal pose lanelet and goal pose' yaw is greater than goal_angle_threshold
parameter, the goal is rejected. Another control mechanism is the creation of a footprint of the goal pose according to the dimensions of the vehicle and checking whether this footprint is within the lanelets. If goal footprint exceeds lanelets, then the goal is rejected.
At the image below, there are sample goal pose validation cases.
"},{"location":"planning/mission_planner/#implementation","title":"Implementation","text":""},{"location":"planning/mission_planner/#mission-planner_1","title":"Mission Planner","text":"Two callbacks (goal and check points) are a trigger for route planning. Routing graph, which plans route in Lanelet2, must be created before those callbacks, and this routing graph is created in vector map callback.
plan route
is explained in detail in the following section.
plan route
is executed with check points including current ego pose and goal pose.
plan path between each check points
firstly calculates closest lanes to start and goal pose. Then routing graph of Lanelet2 plans the shortest path from start and goal pose.
initialize route lanelets
initializes route handler, and calculates route_lanelets
. route_lanelets
, all of which will be registered in route sections, are lanelets next to the lanelets in the planned path, and used when planning lane change. To calculate route_lanelets
,
route_lanelets
.candidate_lanelets
.candidate_lanelets
are route_lanelets
, the candidate_lanelet
is registered as route_lanelets
candidate_lanelet
(an adjacent lane) is not lane-changeable, we can pass the candidate_lanelet
without lane change if the following and previous lanelets of the candidate_lanelet
are route_lanelets
get preferred lanelets
extracts preferred_primitive
from route_lanelets
with the route handler.
create route sections
extracts primitives
from route_lanelets
for each route section with the route handler, and creates route sections.
Reroute here means changing the route while driving. Unlike route setting, it is required to keep a certain distance from vehicle to the point where the route is changed.
And there are three use cases that require reroute.
change_route_points
change_route
This is route change that the application makes using the API. It is used when changing the destination while driving or when driving a divided loop route. When the vehicle is driving on a MRM route, normal rerouting by this interface is not allowed.
"},{"location":"planning/mission_planner/#emergency-route","title":"Emergency route","text":"set_mrm_route
clear_mrm_route
This interface for the MRM that pulls over the road shoulder. It has to be stopped as soon as possible, so a reroute is required. The MRM route has priority over the normal route. And if MRM route is cleared, try to return to the normal route also with a rerouting safety check.
"},{"location":"planning/mission_planner/#goal-modification","title":"Goal modification","text":"modified_goal
This is a goal change to pull over, avoid parked vehicles, and so on by a planning component. If the modified goal is outside the calculated route, a reroute is required. This goal modification is executed by checking the local environment and path safety as the vehicle actually approaches the destination. And this modification is allowed for both normal_route and mrm_route. The new route generated here is sent to the AD API so that it can also be referenced by the application. Note, however, that the specifications here are subject to change in the future.
"},{"location":"planning/mission_planner/#rerouting-limitations","title":"Rerouting Limitations","text":"modified_goal
needs to be guaranteed by the behavior_path_planner, e.g., that it is not placed in the wrong lane, that it can be safely rerouted, etc.motion_velocity_smoother
outputs a desired velocity profile on a reference trajectory. This module plans a velocity profile within the limitations of the velocity, the acceleration and the jerk to realize both the maximization of velocity and the ride quality. We call this module motion_velocity_smoother
because the limitations of the acceleration and the jerk means the smoothness of the velocity profile.
For the point on the reference trajectory closest to the center of the rear wheel axle of the vehicle, it extracts the reference path between extract_behind_dist
behind and extract_ahead_dist
ahead.
It applies the velocity limit input from the external of motion_velocity_smoother
. Remark that the external velocity limit is different from the velocity limit already set on the map and the reference trajectory. The external velocity is applied at the position that it is able to reach the velocity limit with the deceleration and the jerk constraints set as the parameter.
It applies the velocity limit near the stopping point. This function is used to approach near the obstacle or improve the accuracy of stopping.
"},{"location":"planning/motion_velocity_smoother/#apply-lateral-acceleration-limit","title":"Apply lateral acceleration limit","text":"It applies the velocity limit to decelerate at the curve. It calculates the velocity limit from the curvature of the reference trajectory and the maximum lateral acceleration max_lateral_accel
. The velocity limit is set as not to fall under min_curve_velocity
.
Note: a velocity limit that would require deceleration stronger than nominal.jerk
is not applied. In other words, even if a sharp curve is planned just in front of the ego, such deceleration is not performed.
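Concretely, since the lateral acceleration at velocity \\(v\\) on curvature \\(\\kappa_k\\) is \\(v^2 \\kappa_k\\), the rule above corresponds to (a schematic form of the described rule, not the literal code):
\\[ v_k^{\\mathrm{limit}} = \\max\\left( v_{\\mathrm{min\\_curve}}, \\ \\sqrt{\\frac{a_{\\mathrm{lat,max}}}{\\left| \\kappa_k \\right|}} \\right) \\]
with \\(a_{\\mathrm{lat,max}}\\) given by max_lateral_accel
and \\(v_{\\mathrm{min\\_curve}}\\) by min_curve_velocity
.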
It calculates the desired steering angle at each trajectory point and applies the steering rate limit. If the limit is exceeded (steering_angle_rate
> max_steering_angle_rate
), it decreases the velocity of the trajectory point to an acceptable velocity.
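One way to see this velocity reduction (a sketch under a kinematic bicycle model assumption with wheelbase \\(L\\); the implementation may differ in detail): the steering angle implied by the path curvature is \\(\\delta_k = \\arctan(L \\kappa_k)\\), and between two points separated by arc length \\(\\Delta s\\) the steering rate at velocity \\(v\\) is approximately \\(v \\left| \\delta_{k+1} - \\delta_k \\right| / \\Delta s\\), so the admissible velocity is
\\[ v_k \\leq \\frac{\\dot{\\delta}_{\\max} \\, \\Delta s}{\\left| \\delta_{k+1} - \\delta_k \\right|} \\]
where \\(\\dot{\\delta}_{\\max}\\) is max_steering_angle_rate
(converted from degrees per second to radians per second).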
It resamples the points on the reference trajectory with a designated time interval. Note that the trajectory length is kept between min_trajectory_length
and max_trajectory_length
, and the distance between two points is kept longer than min_trajectory_interval_distance
. It samples densely up to the distance traveled within resample_time
at the current velocity, then samples sparsely after that. Sampling according to the velocity achieves both a low computational load and good accuracy, since points are sampled finely at low velocity and coarsely at high velocity.
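A sketch of how such a velocity-dependent interval could be chosen, using the parameter names from the tables below (illustrative only; the actual resampling logic differs in detail):
#include <algorithm>

// Choose the spacing of resampled points at arc length s [m] ahead of the ego,
// given the current velocity v [m/s]: time-based spacing with a distance floor.
double sampleInterval(
  double s, double v, double resample_time, double dense_dt,
  double dense_min_interval_distance, double sparse_dt, double sparse_min_interval_distance)
{
  // assumption: the dense region covers the distance traveled within resample_time
  const bool dense = s < v * resample_time;
  const double dt = dense ? dense_dt : sparse_dt;
  const double min_dist = dense ? dense_min_interval_distance : sparse_min_interval_distance;
  return std::max(v * dt, min_dist);
}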
Calculate initial values for velocity planning. The initial values are calculated according to the situation as shown in the following table.
Situation Initial velocity Initial acceleration First calculation Current velocity 0.0 Engaging engage_velocity
engage_acceleration
Deviate between the planned velocity and the current velocity Current velocity Previous planned value Normal Previous planned value Previous planned value"},{"location":"planning/motion_velocity_smoother/#smooth-velocity","title":"Smooth velocity","text":"It plans the velocity. The algorithm of velocity planning is chosen from JerkFiltered
, L2
and Linf
, and is set in the launch file. These algorithms use OSQP[1] as the optimization solver.
It minimizes the sum of the negative square of the velocity (for velocity maximization), the square of the jerk, and the squares of the violations of the velocity limit, the acceleration limit, and the jerk limit.
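In schematic form, using the weight names from the JerkFiltered parameter table below and writing \\(\\sigma_{v,k}, \\sigma_{a,k}, \\sigma_{j,k}\\) for the limit-violation slack variables (a sketch of the described cost, not the exact code):
\\[ J = \\sum_k \\left( - v_k^2 + w_{jerk} j_k^2 + w_{over\\_v} \\sigma_{v,k}^2 + w_{over\\_a} \\sigma_{a,k}^2 + w_{over\\_j} \\sigma_{j,k}^2 \\right) \\]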
"},{"location":"planning/motion_velocity_smoother/#l2","title":"L2","text":"It minimizes the sum of the minus of the square of the velocity, the square of the the pseudo-jerk[2] and the square of the violation of the velocity limit and the acceleration limit.
"},{"location":"planning/motion_velocity_smoother/#linf","title":"Linf","text":"It minimizes the sum of the minus of the square of the velocity, the maximum absolute value of the the pseudo-jerk[2] and the square of the violation of the velocity limit and the acceleration limit.
"},{"location":"planning/motion_velocity_smoother/#post-process","title":"Post process","text":"It performs the post-process of the planned velocity.
max_velocity
post resampling
)After the optimization, a resampling called post resampling
is performed before passing the optimized trajectory to the next node. Since the required path interval from optimization may be different from the one for the next module, post resampling
helps to fill this gap. Therefore, in post resampling
, it is necessary to check the path specification of the following module to determine the parameters. Note that if the computational load of the optimization algorithm is high and the path interval is sparser than the path specification of the following module in the first resampling, post resampling
would resample the trajectory densely. On the other hand, if the computational load of the optimization algorithm is small and the path interval is denser than the path specification of the following module in the first resampling, the path is sparsely resampled according to the specification of the following module.
~/input/trajectory
autoware_auto_planning_msgs/Trajectory
Reference trajectory /planning/scenario_planning/max_velocity
std_msgs/Float32
External velocity limit [m/s] /localization/kinematic_state
nav_msgs/Odometry
Current odometry /tf
tf2_msgs/TFMessage
TF /tf_static
tf2_msgs/TFMessage
TF static"},{"location":"planning/motion_velocity_smoother/#output","title":"Output","text":"Name Type Description ~/output/trajectory
autoware_auto_planning_msgs/Trajectory
Modified trajectory /planning/scenario_planning/current_max_velocity
std_msgs/Float32
Current external velocity limit [m/s] ~/closest_velocity
std_msgs/Float32
Planned velocity closest to ego base_link (for debug) ~/closest_acceleration
std_msgs/Float32
Planned acceleration closest to ego base_link (for debug) ~/closest_jerk
std_msgs/Float32
Planned jerk closest to ego base_link (for debug) ~/debug/trajectory_raw
autoware_auto_planning_msgs/Trajectory
Extracted trajectory (for debug) ~/debug/trajectory_external_velocity_limited
autoware_auto_planning_msgs/Trajectory
External velocity limited trajectory (for debug) ~/debug/trajectory_lateral_acc_filtered
autoware_auto_planning_msgs/Trajectory
Lateral acceleration limit filtered trajectory (for debug) ~/debug/trajectory_steering_rate_limited
autoware_auto_planning_msgs/Trajectory
Steering angle rate limit filtered trajectory (for debug) ~/debug/trajectory_time_resampled
autoware_auto_planning_msgs/Trajectory
Time resampled trajectory (for debug) ~/distance_to_stopline
std_msgs/Float32
Distance to stop line from current ego pose (max 50 m) (for debug) ~/stop_speed_exceeded
std_msgs/Bool
It publishes true
if the planned velocity exceeds the threshold at a point whose maximum velocity is zero"},{"location":"planning/motion_velocity_smoother/#parameters","title":"Parameters","text":""},{"location":"planning/motion_velocity_smoother/#constraint-parameters","title":"Constraint parameters","text":"Name Type Description Default value max_velocity
double
Max velocity limit [m/s] 20.0 max_accel
double
Max acceleration limit [m/ss] 1.0 min_decel
double
Min deceleration limit [m/ss] -0.5 stop_decel
double
Stop deceleration value at a stop point [m/ss] 0.0 max_jerk
double
Max jerk limit [m/sss] 1.0 min_jerk
double
Min jerk limit [m/sss] -0.5"},{"location":"planning/motion_velocity_smoother/#external-velocity-limit-parameter","title":"External velocity limit parameter","text":"Name Type Description Default value margin_to_insert_external_velocity_limit
double
margin distance to insert external velocity limit [m] 0.3"},{"location":"planning/motion_velocity_smoother/#curve-parameters","title":"Curve parameters","text":"Name Type Description Default value enable_lateral_acc_limit
bool
To toggle the lateral acceleration filter on and off. You can switch it dynamically at runtime. true max_lateral_accel
double
Max lateral acceleration limit [m/ss] 0.5 min_curve_velocity
double
Min velocity at lateral acceleration limit [m/ss] 2.74 decel_distance_before_curve
double
Distance to slowdown before a curve for lateral acceleration limit [m] 3.5 decel_distance_after_curve
double
Distance to slowdown after a curve for lateral acceleration limit [m] 2.0 min_decel_for_lateral_acc_lim_filter
double
Deceleration limit to avoid sudden braking by the lateral acceleration filter [m/ss]. Strong limitation degrades the deceleration response to the appearance of sharp curves due to obstacle avoidance, etc. -2.5"},{"location":"planning/motion_velocity_smoother/#engage-replan-parameters","title":"Engage & replan parameters","text":"Name Type Description Default value replan_vel_deviation
double
Velocity deviation to replan initial velocity [m/s] 5.53 engage_velocity
double
Engage velocity threshold [m/s] (if the trajectory velocity is higher than this value, use this velocity for engage vehicle speed) 0.25 engage_acceleration
double
Engage acceleration [m/ss] (use this acceleration when engagement) 0.1 engage_exit_ratio
double
Exit engage sequence to normal velocity planning when the velocity exceeds engage_exit_ratio x engage_velocity. 0.5 stop_dist_to_prohibit_engage
double
If the stop point is in this distance, the speed is set to 0 not to move the vehicle [m] 0.5"},{"location":"planning/motion_velocity_smoother/#stopping-velocity-parameters","title":"Stopping velocity parameters","text":"Name Type Description Default value stopping_velocity
double
change target velocity to this value before v=0 point [m/s] 2.778 stopping_distance
double
distance for the stopping_velocity [m]. 0 means the stopping velocity is not applied. 0.0"},{"location":"planning/motion_velocity_smoother/#extraction-parameters","title":"Extraction parameters","text":"Name Type Description Default value extract_ahead_dist
double
Forward trajectory distance used for planning [m] 200.0 extract_behind_dist
double
backward trajectory distance used for planning [m] 5.0 delta_yaw_threshold
double
Allowed delta yaw between ego pose and trajectory pose [radian] 1.0472"},{"location":"planning/motion_velocity_smoother/#resampling-parameters","title":"Resampling parameters","text":"Name Type Description Default value max_trajectory_length
double
Max trajectory length for resampling [m] 200.0 min_trajectory_length
double
Min trajectory length for resampling [m] 30.0 resample_time
double
Resample total time [s] 10.0 dense_dt
double
resample time interval for dense sampling [s] 0.1 dense_min_interval_distance
double
minimum points-interval length for dense sampling [m] 0.1 sparse_dt
double
resample time interval for sparse sampling [s] 0.5 sparse_min_interval_distance
double
minimum points-interval length for sparse sampling [m] 4.0"},{"location":"planning/motion_velocity_smoother/#resampling-parameters-for-post-process","title":"Resampling parameters for post process","text":"Name Type Description Default value post_max_trajectory_length
double
max trajectory length for resampling [m] 300.0 post_min_trajectory_length
double
min trajectory length for resampling [m] 30.0 post_resample_time
double
resample total time for dense sampling [s] 10.0 post_dense_dt
double
resample time interval for dense sampling [s] 0.1 post_dense_min_interval_distance
double
minimum points-interval length for dense sampling [m] 0.1 post_sparse_dt
double
resample time interval for sparse sampling [s] 0.1 post_sparse_min_interval_distance
double
minimum points-interval length for sparse sampling [m] 1.0"},{"location":"planning/motion_velocity_smoother/#limit-steering-angle-rate-parameters","title":"Limit steering angle rate parameters","text":"Name Type Description Default value enable_steering_rate_limit
bool
To toggle the steer rate filter on and off. You can switch it dynamically at runtime. true max_steering_angle_rate
double
Maximum steering angle rate [degree/s] 40.0 resample_ds
double
Distance between trajectory points [m] 0.1 curvature_threshold
double
If curvature > curvature_threshold, steeringRateLimit is triggered [1/m] 0.02 curvature_calculation_distance
double
Distance of points while curvature is calculating [m] 1.0"},{"location":"planning/motion_velocity_smoother/#weights-for-optimization","title":"Weights for optimization","text":""},{"location":"planning/motion_velocity_smoother/#jerkfiltered_1","title":"JerkFiltered","text":"Name Type Description Default value jerk_weight
double
Weight for \"smoothness\" cost for jerk 10.0 over_v_weight
double
Weight for \"over speed limit\" cost 100000.0 over_a_weight
double
Weight for \"over accel limit\" cost 5000.0 over_j_weight
double
Weight for \"over jerk limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/#l2_1","title":"L2","text":"Name Type Description Default value pseudo_jerk_weight
double
Weight for \"smoothness\" cost 100.0 over_v_weight
double
Weight for \"over speed limit\" cost 100000.0 over_a_weight
double
Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/#linf_1","title":"Linf","text":"Name Type Description Default value pseudo_jerk_weight
double
Weight for \"smoothness\" cost 100.0 over_v_weight
double
Weight for \"over speed limit\" cost 100000.0 over_a_weight
double
Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/#others","title":"Others","text":"Name Type Description Default value over_stop_velocity_warn_thr
double
Threshold to judge that the optimized velocity exceeds the input velocity on the stop point [m/s] 1.389"},{"location":"planning/motion_velocity_smoother/#assumptions-known-limits","title":"Assumptions / Known limits","text":"[1] B. Stellato, et al., \"OSQP: an operator splitting solver for quadratic programs\", Mathematical Programming Computation, 2020, 10.1007/s12532-020-00179-2.
[2] Y. Zhang, et al., \"Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization\", Sensors, vol. 18, no. 7, p. 2185, 2018, 10.3390/s18072185
"},{"location":"planning/motion_velocity_smoother/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/motion_velocity_smoother/README.ja/","title":"Motion Velocity Smoother","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#motion-velocity-smoother","title":"Motion Velocity Smoother","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#purpose","title":"Purpose","text":"motion_velocity_smoother
is a module that plans and outputs the desired vehicle speed at each point on the target trajectory. To achieve both maximization of velocity and good ride quality, this module plans the velocity within pre-specified limits on velocity, acceleration, and jerk. Since limiting the acceleration and the jerk corresponds to smoothing the change of velocity, we call this module motion_velocity_smoother
.
For the point on the reference path closest to the center of the ego vehicle's rear wheel axle, it extracts the reference path from the point extract_behind_dist
behind to the point extract_ahead_dist
ahead.
It applies the velocity limit specified from outside the module. The external velocity limit handled here is passed via the /planning/scenario_planning/max_velocity
topic, and is separate from the velocity limits already set on the reference path, such as those defined on the map. The externally specified velocity limit is applied from a position where the vehicle can decelerate to it within the deceleration and jerk limits specified by the parameters.
It sets the velocity when approaching a stop point. This is used, for example, to approach close to an obstacle or to improve stopping accuracy.
"},{"location":"planning/motion_velocity_smoother/README.ja/#apply-lateral-acceleration-limit","title":"Apply lateral acceleration limit","text":"According to the curvature of the path, it sets as the velocity limit a velocity that does not exceed the specified maximum lateral acceleration max_lateral_accel
. However, the velocity limit is set so as not to fall below min_curve_velocity
.
It resamples the points on the path with a designated time interval. The resampling keeps the total path length between min_trajectory_length
and max_trajectory_length
, and keeps the interval between points no smaller than min_trajectory_interval_distance
. Points are sampled densely up to the distance traveled at the current velocity within resample_time
, and sparsely after that. Sampling this way is dense at low speed and sparse at high speed, achieving both stopping accuracy and a reduced computational load.
It calculates the initial values for velocity planning. The initial values are calculated according to the situation, as shown in the following table.
Situation Initial velocity Initial acceleration First calculation Current velocity 0.0 Engaging engage_velocity
engage_acceleration
Deviation between the planned velocity and the current velocity Current velocity Previous planned value Normal Previous planned value Previous planned value"},{"location":"planning/motion_velocity_smoother/README.ja/#smooth-velocity","title":"Smooth velocity","text":"It plans the velocity. The velocity planning algorithm is chosen from JerkFiltered
, L2
, Linf
; the choice among the three is specified in the config. OSQP[1] is used as the optimization solver.
It minimizes the sum of the square of the velocity (expressed as a negative value so that velocity is maximized), the squares of the violations of the velocity limit, the acceleration limit, and the jerk limit, and the square of the jerk.
"},{"location":"planning/motion_velocity_smoother/README.ja/#l2","title":"L2","text":"It minimizes the sum of the square of the velocity (expressed as a negative value so that velocity is maximized), the squares of the violations of the velocity limit and the acceleration limit, and the square of the pseudo-jerk[2].
"},{"location":"planning/motion_velocity_smoother/README.ja/#linf","title":"Linf","text":"It minimizes the sum of the square of the velocity (expressed as a negative value so that velocity is maximized), the squares of the violations of the velocity limit and the acceleration limit, and the maximum absolute value of the pseudo-jerk[2].
"},{"location":"planning/motion_velocity_smoother/README.ja/#post-process","title":"Post process","text":"It performs post-processing of the planned trajectory.
max_velocity
Set the velocity so as not to exceed this value. Resample the trajectory (post resampling
)After the optimization finishes, a resampling called post resampling
is performed before passing the trajectory to the next node. The reason for resampling here again is that the trajectory interval needed before optimization and the trajectory interval to pass to the following module do not necessarily match, so resampling is performed once more to fill this gap. Therefore, in post resampling
, the parameters must be determined by checking the trajectory specification of the following module. Note that when the computational load of the optimization algorithm is high and the first resampling makes the trajectory interval sparser than the following module's specification, post resampling
resamples the trajectory densely. Conversely, when the computational load of the optimization algorithm is small and the first resampling makes the trajectory interval denser than the following module's specification, post resampling
resamples the trajectory sparsely to match that specification.
~/input/trajectory
autoware_auto_planning_msgs/Trajectory
Reference trajectory /planning/scenario_planning/max_velocity
std_msgs/Float32
External velocity limit [m/s] /localization/kinematic_state
nav_msgs/Odometry
Current odometry /tf
tf2_msgs/TFMessage
TF /tf_static
tf2_msgs/TFMessage
TF static"},{"location":"planning/motion_velocity_smoother/README.ja/#output","title":"Output","text":"Name Type Description ~/output/trajectory
autoware_auto_planning_msgs/Trajectory
Modified trajectory /planning/scenario_planning/current_max_velocity
std_msgs/Float32
Current external velocity limit [m/s] ~/closest_velocity
std_msgs/Float32
Planned velocity closest to ego base_link (for debug) ~/closest_acceleration
std_msgs/Float32
Planned acceleration closest to ego base_link (for debug) ~/closest_jerk
std_msgs/Float32
Planned jerk closest to ego base_link (for debug) ~/debug/trajectory_raw
autoware_auto_planning_msgs/Trajectory
Extracted trajectory (for debug) ~/debug/trajectory_external_velocity_limited
autoware_auto_planning_msgs/Trajectory
External velocity limited trajectory (for debug) ~/debug/trajectory_lateral_acc_filtered
autoware_auto_planning_msgs/Trajectory
Lateral acceleration limit filtered trajectory (for debug) ~/debug/trajectory_time_resampled
autoware_auto_planning_msgs/Trajectory
Time resampled trajectory (for debug) ~/distance_to_stopline
std_msgs/Float32
Distance to stop line from current ego pose (max 50 m) (for debug) ~/stop_speed_exceeded
std_msgs/Bool
It publishes true
if the planned velocity exceeds the threshold at a point whose maximum velocity is zero"},{"location":"planning/motion_velocity_smoother/README.ja/#parameters","title":"Parameters","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#constraint-parameters","title":"Constraint parameters","text":"Name Type Description Default value max_velocity
double
Max velocity limit [m/s] 20.0 max_accel
double
Max acceleration limit [m/ss] 1.0 min_decel
double
Min deceleration limit [m/ss] -0.5 stop_decel
double
Stop deceleration value at a stop point [m/ss] 0.0 max_jerk
double
Max jerk limit [m/sss] 1.0 min_jerk
double
Min jerk limit [m/sss] -0.5"},{"location":"planning/motion_velocity_smoother/README.ja/#external-velocity-limit-parameter","title":"External velocity limit parameter","text":"Name Type Description Default value margin_to_insert_external_velocity_limit
double
margin distance to insert external velocity limit [m] 0.3"},{"location":"planning/motion_velocity_smoother/README.ja/#curve-parameters","title":"Curve parameters","text":"Name Type Description Default value max_lateral_accel
double
Max lateral acceleration limit [m/ss] 0.5 min_curve_velocity
double
Min velocity at lateral acceleration limit [m/ss] 2.74 decel_distance_before_curve
double
Distance to slowdown before a curve for lateral acceleration limit [m] 3.5 decel_distance_after_curve
double
Distance to slowdown after a curve for lateral acceleration limit [m] 2.0"},{"location":"planning/motion_velocity_smoother/README.ja/#engage-replan-parameters","title":"Engage & replan parameters","text":"Name Type Description Default value replan_vel_deviation
double
Velocity deviation to replan initial velocity [m/s] 5.53 engage_velocity
double
Engage velocity threshold [m/s] (if the trajectory velocity is higher than this value, use this velocity for engage vehicle speed) 0.25 engage_acceleration
double
Engage acceleration [m/ss] (use this acceleration when engagement) 0.1 engage_exit_ratio
double
Exit engage sequence to normal velocity planning when the velocity exceeds engage_exit_ratio x engage_velocity. 0.5 stop_dist_to_prohibit_engage
double
If the stop point is in this distance, the speed is set to 0 not to move the vehicle [m] 0.5"},{"location":"planning/motion_velocity_smoother/README.ja/#stopping-velocity-parameters","title":"Stopping velocity parameters","text":"Name Type Description Default value stopping_velocity
double
change target velocity to this value before v=0 point [m/s] 2.778 stopping_distance
double
distance for the stopping_velocity [m]. 0 means the stopping velocity is not applied. 0.0"},{"location":"planning/motion_velocity_smoother/README.ja/#extraction-parameters","title":"Extraction parameters","text":"Name Type Description Default value extract_ahead_dist
double
Forward trajectory distance used for planning [m] 200.0 extract_behind_dist
double
backward trajectory distance used for planning [m] 5.0 delta_yaw_threshold
double
Allowed delta yaw between ego pose and trajectory pose [radian] 1.0472"},{"location":"planning/motion_velocity_smoother/README.ja/#resampling-parameters","title":"Resampling parameters","text":"Name Type Description Default value max_trajectory_length
double
Max trajectory length for resampling [m] 200.0 min_trajectory_length
double
Min trajectory length for resampling [m] 30.0 resample_time
double
Resample total time [s] 10.0 dense_resample_dt
double
resample time interval for dense sampling [s] 0.1 dense_min_interval_distance
double
minimum points-interval length for dense sampling [m] 0.1 sparse_resample_dt
double
resample time interval for sparse sampling [s] 0.5 sparse_min_interval_distance
double
minimum points-interval length for sparse sampling [m] 4.0"},{"location":"planning/motion_velocity_smoother/README.ja/#resampling-parameters-for-post-process","title":"Resampling parameters for post process","text":"Name Type Description Default value post_max_trajectory_length
double
max trajectory length for resampling [m] 300.0 post_min_trajectory_length
double
min trajectory length for resampling [m] 30.0 post_resample_time
double
resample total time for dense sampling [s] 10.0 post_dense_resample_dt
double
resample time interval for dense sampling [s] 0.1 post_dense_min_interval_distance
double
minimum points-interval length for dense sampling [m] 0.1 post_sparse_resample_dt
double
resample time interval for sparse sampling [s] 0.1 post_sparse_min_interval_distance
double
minimum points-interval length for sparse sampling [m] 1.0"},{"location":"planning/motion_velocity_smoother/README.ja/#weights-for-optimization","title":"Weights for optimization","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#jerkfiltered_1","title":"JerkFiltered","text":"Name Type Description Default value jerk_weight
double
Weight for \"smoothness\" cost for jerk 10.0 over_v_weight
double
Weight for \"over speed limit\" cost 100000.0 over_a_weight
double
Weight for \"over accel limit\" cost 5000.0 over_j_weight
double
Weight for \"over jerk limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/README.ja/#l2_1","title":"L2","text":"Name Type Description Default value pseudo_jerk_weight
double
Weight for \"smoothness\" cost 100.0 over_v_weight
double
Weight for \"over speed limit\" cost 100000.0 over_a_weight
double
Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/README.ja/#linf_1","title":"Linf","text":"Name Type Description Default value pseudo_jerk_weight
double
Weight for \"smoothness\" cost 100.0 over_v_weight
double
Weight for \"over speed limit\" cost 100000.0 over_a_weight
double
Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/README.ja/#others","title":"Others","text":"Name Type Description Default value over_stop_velocity_warn_thr
double
Threshold to judge that the optimized velocity exceeds the input velocity on the stop point [m/s] 1.389"},{"location":"planning/motion_velocity_smoother/README.ja/#assumptions-known-limits","title":"Assumptions / Known limits","text":"[1] B. Stellato, et al., \"OSQP: an operator splitting solver for quadratic programs\", Mathematical Programming Computation, 2020, 10.1007/s12532-020-00179-2.
[2] Y. Zhang, et al., \"Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization\", Sensors, vol. 18, no. 7, p. 2185, 2018, 10.3390/s18072185
"},{"location":"planning/motion_velocity_smoother/README.ja/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/objects_of_interest_marker_interface/","title":"Objects Of Interest Marker Interface","text":""},{"location":"planning/objects_of_interest_marker_interface/#objects-of-interest-marker-interface","title":"Objects Of Interest Marker Interface","text":"Warning
Under Construction
"},{"location":"planning/objects_of_interest_marker_interface/#purpose","title":"Purpose","text":""},{"location":"planning/objects_of_interest_marker_interface/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/objects_of_interest_marker_interface/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/objects_of_interest_marker_interface/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/objects_of_interest_marker_interface/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/obstacle_avoidance_planner/","title":"Obstacle Avoidance Planner","text":""},{"location":"planning/obstacle_avoidance_planner/#obstacle-avoidance-planner","title":"Obstacle Avoidance Planner","text":""},{"location":"planning/obstacle_avoidance_planner/#purpose","title":"Purpose","text":"This package generates a trajectory that is kinematically-feasible to drive and collision-free based on the input path, drivable area. Only position and orientation of trajectory are updated in this module, and velocity is just taken over from the one in the input path.
"},{"location":"planning/obstacle_avoidance_planner/#feature","title":"Feature","text":"This package is able to
Note that the velocity is just taken over from the input path.
"},{"location":"planning/obstacle_avoidance_planner/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/obstacle_avoidance_planner/#input","title":"input","text":"Name Type Description~/input/path
autoware_auto_planning_msgs/msg/Path Reference path and the corresponding drivable area ~/input/odometry
nav_msgs/msg/Odometry Current Velocity of ego vehicle"},{"location":"planning/obstacle_avoidance_planner/#output","title":"output","text":"Name Type Description ~/output/trajectory
autoware_auto_planning_msgs/msg/Trajectory Optimized trajectory that is feasible to drive and collision-free"},{"location":"planning/obstacle_avoidance_planner/#flowchart","title":"Flowchart","text":"Flowchart of functions is explained here.
"},{"location":"planning/obstacle_avoidance_planner/#createplannerdata","title":"createPlannerData","text":"The following data for planning is created.
struct PlannerData\n{\n// input\nHeader header;\nstd::vector<TrajectoryPoint> traj_points; // converted from the input path\nstd::vector<geometry_msgs::msg::Point> left_bound;\nstd::vector<geometry_msgs::msg::Point> right_bound;\n\n// ego\ngeometry_msgs::msg::Pose ego_pose;\ndouble ego_vel;\n};\n
"},{"location":"planning/obstacle_avoidance_planner/#check-replan","title":"check replan","text":"When one of the following conditions are met, trajectory optimization will be executed. Otherwise, previously optimized trajectory is used with updating the velocity from the latest input path.
max_path_shape_around_ego_lat_dist
replan.max_ego_moving_dist
in one cycle. (default: 3.0 [m])replan.max_goal_moving_dist
in one cycle. (default: 15.0 [ms])replan.max_path_shape_around_ego_lat_dist
in one cycle. (default: 2.0)This module makes the trajectory kinematically-feasible and collision-free. We define vehicle pose in the frenet coordinate, and minimize tracking errors by optimization. This optimization considers vehicle kinematics and collision checking with road boundary and obstacles. To decrease the computation cost, the optimization is applied to the shorter trajectory (default: 50 [m]) than the whole trajectory, and concatenate the remained trajectory with the optimized one at last.
The trajectory just in front of the ego must not be changed a lot so that the steering wheel will be stable. Therefore, we use the previously generated trajectory in front of the ego.
Optimization center on the vehicle, that tries to locate just on the trajectory, can be tuned along side the vehicle vertical axis. This parameter mpt.kinematics.optimization center offset
is defined as the signed length from the back-wheel center to the optimization center. Some examples are shown in the following figure, and it is shown that the trajectory of vehicle shape differs according to the optimization center even if the reference trajectory (green one) is the same.
More details can be seen here.
"},{"location":"planning/obstacle_avoidance_planner/#applyinputvelocity","title":"applyInputVelocity","text":"Velocity is assigned in the optimized trajectory from the velocity in the behavior path. The shapes of the optimized trajectory and the path are different, therefore the each nearest trajectory point to the path is searched and the velocity is interpolated with zero-order hold.
"},{"location":"planning/obstacle_avoidance_planner/#insertzerovelocityoutsidedrivablearea","title":"insertZeroVelocityOutsideDrivableArea","text":"Optimized trajectory is too short for velocity planning, therefore extend the trajectory by concatenating the optimized trajectory and the behavior path considering drivability. Generated trajectory is checked if it is inside the drivable area or not, and if outside drivable area, output a trajectory inside drivable area with the behavior path or the previously generated trajectory.
As described above, the behavior path is separated into two paths: one is for optimization and the other is the remain. The first path becomes optimized trajectory, and the second path just is transformed to a trajectory. Then a trajectory inside the drivable area is calculated as follows.
Optimization failure is dealt with the same as if the optimized trajectory is outside the drivable area. The output trajectory is memorized as a previously generated trajectory for the next cycle.
Rationale In the current design, since there are some modelling errors, the constraints are considered to be soft constraints. Therefore, we have to make sure that the optimized trajectory is inside the drivable area or not after optimization.
"},{"location":"planning/obstacle_avoidance_planner/#limitation","title":"Limitation","text":"behavior_path_planner
and obstacle_avoidance_planner
are not decided clearly. Both can avoid obstacles.Trajectory planning problem that satisfies kinematically-feasibility and collision-free has two main characteristics that makes hard to be solved: one is non-convex and the other is high dimension. Based on the characteristics, we investigate pros/cons of the typical planning methods: optimization-based, sampling-based, and learning-based method.
"},{"location":"planning/obstacle_avoidance_planner/#optimization-based-method","title":"Optimization-based method","text":"Based on these pros/cons, we chose the optimization-based planner first. Although it has a cons to converge to the local minima, it can get a good solution by the preprocessing to approximate the problem to convex that almost equals to the original non-convex problem.
"},{"location":"planning/obstacle_avoidance_planner/#how-to-tune-parameters","title":"How to Tune Parameters","text":""},{"location":"planning/obstacle_avoidance_planner/#drivability-in-narrow-roads","title":"Drivability in narrow roads","text":"mpt.clearance.soft_clearance_from_road
modify mpt.kinematics.optimization_center_offset
mpt.weight.steer_input_weight
or mpt.weight.steer_rate_weight
larger, which are stability of steering wheel along the trajectory.option.enable_skip_optimization
skips MPT optimization.option.enable_calculation_time_info
enables showing each calculation time for functions and total calculation time on the terminal.option.enable_outside_drivable_area_stop
enables stopping just before the generated trajectory point will be outside the drivable area.How to debug can be seen here.
"},{"location":"planning/obstacle_avoidance_planner/docs/debug/","title":"Debug","text":""},{"location":"planning/obstacle_avoidance_planner/docs/debug/#debug","title":"Debug","text":""},{"location":"planning/obstacle_avoidance_planner/docs/debug/#debug-visualization","title":"Debug visualization","text":"The visualization markers of the planning flow (Input, Model Predictive Trajectory, and Output) are explained here.
All the following markers can be visualized by
ros2 launch obstacle_avoidance_planner launch_visualiation.launch.xml vehilce_model:=sample_vehicle\n
The vehicle_model
must be specified to make footprints with vehicle's size.
behavior
planner.behavior
planner is converted to footprints.behavior
planner does not support it.behavior
planner.obstacle_avoidance_planner
will try to make the trajectory fully inside the drivable area.behavior
planner, please make sure that the drivable area is expanded correctly.obstacle_avoidance_planner
will try to make the these circles inside the above boundaries' width.The obstacle_avoidance_planner
consists of many functions such as boundaries' width calculation, collision-free planning, etc. We can see the calculation time for each function as follows.
Enable option.enable_calculation_time_info
or echo the topic as follows.
$ ros2 topic echo /planning/scenario_planning/lane_driving/motion_planning/obstacle_avoidance_planner/debug/calculation_time --field data\n---\n insertFixedPoint:= 0.008 [ms]\ngetPaddedTrajectoryPoints:= 0.002 [ms]\nupdateConstraint:= 0.741 [ms]\noptimizeTrajectory:= 0.101 [ms]\nconvertOptimizedPointsToTrajectory:= 0.014 [ms]\ngetEBTrajectory:= 0.991 [ms]\nresampleReferencePoints:= 0.058 [ms]\nupdateFixedPoint:= 0.237 [ms]\nupdateBounds:= 0.22 [ms]\nupdateVehicleBounds:= 0.509 [ms]\ncalcReferencePoints:= 1.649 [ms]\ncalcMatrix:= 0.209 [ms]\ncalcValueMatrix:= 0.015 [ms]\ncalcObjectiveMatrix:= 0.305 [ms]\ncalcConstraintMatrix:= 0.641 [ms]\ninitOsqp:= 6.896 [ms]\nsolveOsqp:= 2.796 [ms]\ncalcOptimizedSteerAngles:= 9.856 [ms]\ncalcMPTPoints:= 0.04 [ms]\ngetModelPredictiveTrajectory:= 12.782 [ms]\noptimizeTrajectory:= 12.981 [ms]\napplyInputVelocity:= 0.577 [ms]\ninsertZeroVelocityOutsideDrivableArea:= 0.81 [ms]\ngetDebugMarker:= 0.684 [ms]\npublishDebugMarker:= 4.354 [ms]\npublishDebugMarkerOfOptimization:= 5.047 [ms]\ngenerateOptimizedTrajectory:= 20.374 [ms]\nextendTrajectory:= 0.326 [ms]\npublishDebugData:= 0.008 [ms]\nonPath:= 20.737 [ms]\n
"},{"location":"planning/obstacle_avoidance_planner/docs/debug/#plot","title":"Plot","text":"With the following script, any calculation time of the above functions can be plot.
ros2 run obstacle_avoidance_planner calculation_time_plotter.py\n
You can specify functions to plot with the -f
option.
ros2 run obstacle_avoidance_planner calculation_time_plotter.py -f \"onPath, generateOptimizedTrajectory, calcReferencePoints\"\n
"},{"location":"planning/obstacle_avoidance_planner/docs/debug/#qa-for-debug","title":"Q&A for Debug","text":""},{"location":"planning/obstacle_avoidance_planner/docs/debug/#the-output-frequency-is-low","title":"The output frequency is low","text":"Check the function which is comparatively heavy according to this information.
For your information, the following functions for optimization and its initialization may be heavy in some complicated cases.
initOsqp
solveOsqp
Some of the following may have an issue. Please check if there is something weird by the visualization.
Some of the following may have an issue. Please check if anything looks strange in the visualization.
Some of the following may have an issue. Please check if anything looks strange in the visualization.
Conditions for collision free is considered to be not hard constraints but soft constraints. When the optimization failed or the optimized trajectory is not collision free, the output trajectory will be previously generated trajectory.
Trajectory near the ego must be stable, therefore the condition where trajectory points near the ego are the same as previously generated trajectory is considered, and this is the only hard constraints in MPT.
"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#flowchart","title":"Flowchart","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#vehicle-kinematics","title":"Vehicle kinematics","text":"As the following figure, we consider the bicycle kinematics model in the frenet frame to track the reference path. At time step \\(k\\), we define lateral distance to the reference path, heading angle against the reference path, and steer angle as \\(y_k\\), \\(\\theta_k\\), and \\(\\delta_k\\) respectively.
Assuming that the commanded steer angle is \\(\\delta_{des, k}\\), the kinematics model in the frenet frame is formulated as follows. We also assume that the steer angle \\(\\delta_k\\) is first-order lag to the commanded one.
\\[ \\begin{align} y_{k+1} & = y_{k} + v \\sin \\theta_k dt \\\\ \\theta_{k+1} & = \\theta_k + \\frac{v \\tan \\delta_k}{L}dt - \\kappa_k v \\cos \\theta_k dt \\\\ \\delta_{k+1} & = \\delta_k - \\frac{\\delta_k - \\delta_{des,k}}{\\tau}dt \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#linearization","title":"Linearization","text":"Then we linearize these equations. \\(y_k\\) and \\(\\theta_k\\) are tracking errors, so we assume that those are small enough. Therefore \\(\\sin \\theta_k \\approx \\theta_k\\).
Since \\(\\delta_k\\) is a steer angle, it is not always small. By using a reference steer angle \\(\\delta_{\\mathrm{ref}, k}\\) calculated by the reference path curvature \\(\\kappa_k\\), we express \\(\\delta_k\\) with a small value \\(\\Delta \\delta_k\\).
Note that the steer angle \\(\\delta_k\\) is within the steer angle limitation \\(\\delta_{\\max}\\). When the reference steer angle \\(\\delta_{\\mathrm{ref}, k}\\) is larger than the steer angle limitation \\(\\delta_{\\max}\\), and \\(\\delta_{\\mathrm{ref}, k}\\) is used to linearize the steer angle, \\(\\Delta \\delta_k\\) is \\(\\Delta \\delta_k = \\delta - \\delta_{\\mathrm{ref}, k} = \\delta_{\\max} - \\delta_{\\mathrm{ref}, k}\\), and the absolute \\(\\Delta \\delta_k\\) gets larger. Therefore, we have to apply the steer angle limitation to \\(\\delta_{\\mathrm{ref}, k}\\) as well.
\\[ \\begin{align} \\delta_{\\mathrm{ref}, k} & = \\mathrm{clamp}(\\arctan(L \\kappa_k), -\\delta_{\\max}, \\delta_{\\max}) \\\\ \\delta_k & = \\delta_{\\mathrm{ref}, k} + \\Delta \\delta_k, \\ \\Delta \\delta_k \\ll 1 \\\\ \\end{align} \\]\\(\\mathrm{clamp}(v, v_{\\min}, v_{\\max})\\) is a function to convert \\(v\\) to be larger than \\(v_{\\min}\\) and smaller than \\(v_{\\max}\\).
Using this \\(\\delta_{\\mathrm{ref}, k}\\), \\(\\tan \\delta_k\\) is linearized as follows.
\\[ \\begin{align} \\tan \\delta_k & \\approx \\tan \\delta_{\\mathrm{ref}, k} + \\left.\\frac{d \\tan \\delta}{d \\delta}\\right|_{\\delta = \\delta_{\\mathrm{ref}, k}} \\Delta \\delta_k \\\\ & = \\tan \\delta_{\\mathrm{ref}, k} + \\left.\\frac{d \\tan \\delta}{d \\delta}\\right|_{\\delta = \\delta_{\\mathrm{ref}, k}} (\\delta_{\\mathrm{ref}, k} - \\delta_k) \\\\ & = \\tan \\delta_{\\mathrm{ref}, k} - \\frac{\\delta_{\\mathrm{ref}, k}}{\\cos^2 \\delta_{\\mathrm{ref}, k}} + \\frac{1}{\\cos^2 \\delta_{\\mathrm{ref}, k}} \\delta_k \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#one-step-state-equation","title":"One-step state equation","text":"Based on the linearization, the error kinematics is formulated with the following linear equations,
\\[ \\begin{align} \\begin{pmatrix} y_{k+1} \\\\ \\theta_{k+1} \\end{pmatrix} = \\begin{pmatrix} 1 & v dt \\\\ 0 & 1 \\\\ \\end{pmatrix} \\begin{pmatrix} y_k \\\\ \\theta_k \\\\ \\end{pmatrix} + \\begin{pmatrix} 0 \\\\ \\frac{v dt}{L \\cos^{2} \\delta_{\\mathrm{ref}, k}} \\\\ \\end{pmatrix} \\delta_{k} + \\begin{pmatrix} 0 \\\\ \\frac{v \\tan(\\delta_{\\mathrm{ref}, k}) dt}{L} - \\frac{v \\delta_{\\mathrm{ref}, k} dt}{L \\cos^{2} \\delta_{\\mathrm{ref}, k}} - \\kappa_k v dt\\\\ \\end{pmatrix} \\end{align} \\]which can be formulated as follows with the state \\(\\boldsymbol{x}\\), control input \\(u\\) and some matrices, where \\(\\boldsymbol{x} = (y_k, \\theta_k)\\)
\\[ \\begin{align} \\boldsymbol{x}_{k+1} = A_k \\boldsymbol{x}_k + \\boldsymbol{b}_k u_k + \\boldsymbol{w}_k \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#time-series-state-equation","title":"Time-series state equation","text":"Then, we formulate time-series state equation by concatenating states, control inputs and matrices respectively as
\\[ \\begin{align} \\boldsymbol{x} = A \\boldsymbol{x}_0 + B \\boldsymbol{u} + \\boldsymbol{w} \\end{align} \\]where
\\[ \\begin{align} \\boldsymbol{x} = (\\boldsymbol{x}^T_1, \\boldsymbol{x}^T_2, \\boldsymbol{x}^T_3, \\dots, \\boldsymbol{x}^T_{n-1})^T \\\\ \\boldsymbol{u} = (u_0, u_1, u_2, \\dots, u_{n-2})^T \\\\ \\boldsymbol{w} = (\\boldsymbol{w}^T_0, \\boldsymbol{w}^T_1, \\boldsymbol{w}^T_2, \\dots, \\boldsymbol{w}^T_{n-1})^T. \\\\ \\end{align} \\]In detail, each matrices are constructed as follows.
\\[ \\begin{align} \\begin{pmatrix} \\boldsymbol{x}_1 \\\\ \\boldsymbol{x}_2 \\\\ \\boldsymbol{x}_3 \\\\ \\vdots \\\\ \\boldsymbol{x}_{n-1} \\end{pmatrix} = \\begin{pmatrix} A_0 \\\\ A_1 A_0 \\\\ A_2 A_1 A_0\\\\ \\vdots \\\\ \\prod\\limits_{k=0}^{n-1} A_{k} \\end{pmatrix} \\boldsymbol{x}_0 + \\begin{pmatrix} B_0 & 0 & & \\dots & 0 \\\\ A_0 B_0 & B_1 & 0 & \\dots & 0 \\\\ A_1 A_0 B_0 & A_0 B_1 & B_2 & \\dots & 0 \\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-3} A_k B_0 & \\prod\\limits_{k=0}^{n-4} A_k B_1 & \\dots & A_0 B_{n-3} & B_{n-2} \\end{pmatrix} \\begin{pmatrix} u_0 \\\\ u_1 \\\\ u_2 \\\\ \\vdots \\\\ u_{n-2} \\end{pmatrix} + \\begin{pmatrix} I & 0 & & \\dots & 0 \\\\ A_0 & I & 0 & \\dots & 0 \\\\ A_1 A_0 & A_0 & I & \\dots & 0 \\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-3} A_k & \\prod\\limits_{k=0}^{n-4} A_k & \\dots & A_0 & I \\end{pmatrix} \\begin{pmatrix} \\boldsymbol{w}_0 \\\\ \\boldsymbol{w}_1 \\\\ \\boldsymbol{w}_2 \\\\ \\vdots \\\\ \\boldsymbol{w}_{n-2} \\end{pmatrix} \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#free-boundary-conditioned-time-series-state-equation","title":"Free-boundary-conditioned time-series state equation","text":"For path planning which does not start from the current ego pose, \\(\\boldsymbol{x}_0\\) should be the design variable of optimization. Therefore, we make \\(\\boldsymbol{u}'\\) by concatenating \\(\\boldsymbol{x}_0\\) and \\(\\boldsymbol{u}\\), and redefine \\(\\boldsymbol{x}\\) as follows.
\\[ \\begin{align} \\boldsymbol{u}' & = (\\boldsymbol{x}^T_0, \\boldsymbol{u}^T)^T \\\\ \\boldsymbol{x} & = (\\boldsymbol{x}^T_0, \\boldsymbol{x}^T_1, \\boldsymbol{x}^T_2, \\dots, \\boldsymbol{x}^T_{n-1})^T \\end{align} \\]Then we get the following state equation
\\[ \\begin{align} \\boldsymbol{x}' = B \\boldsymbol{u}' + \\boldsymbol{w}, \\end{align} \\]which is in detail
\\[ \\begin{align} \\begin{pmatrix} \\boldsymbol{x}_0 \\\\ \\boldsymbol{x}_1 \\\\ \\boldsymbol{x}_2 \\\\ \\boldsymbol{x}_3 \\\\ \\vdots \\\\ \\boldsymbol{x}_{n-1} \\end{pmatrix} = \\begin{pmatrix} I & 0 & \\dots & & & 0 \\\\ A_0 & B_0 & 0 & & \\dots & 0 \\\\ A_1 A_0 & A_0 B_0 & B_1 & 0 & \\dots & 0 \\\\ A_2 A_1 A_0 & A_1 A_0 B_0 & A_0 B_1 & B_2 & \\dots & 0 \\\\ \\vdots & \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-1} A_k & \\prod\\limits_{k=0}^{n-3} A_k B_0 & \\prod\\limits_{k=0}^{n-4} A_k B_1 & \\dots & A_0 B_{n-3} & B_{n-2} \\end{pmatrix} \\begin{pmatrix} \\boldsymbol{x}_0 \\\\ u_0 \\\\ u_1 \\\\ u_2 \\\\ \\vdots \\\\ u_{n-2} \\end{pmatrix} + \\begin{pmatrix} 0 & \\dots & & & 0 \\\\ I & 0 & & \\dots & 0 \\\\ A_0 & I & 0 & \\dots & 0 \\\\ A_1 A_0 & A_0 & I & \\dots & 0 \\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-3} A_k & \\prod\\limits_{k=0}^{n-4} A_k & \\dots & A_0 & I \\end{pmatrix} \\begin{pmatrix} \\boldsymbol{w}_0 \\\\ \\boldsymbol{w}_1 \\\\ \\boldsymbol{w}_2 \\\\ \\vdots \\\\ \\boldsymbol{w}_{n-2} \\end{pmatrix}. \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#objective-function","title":"Objective function","text":"The objective function for smoothing and tracking is shown as follows, which can be formulated with value function matrices \\(Q, R\\).
\\[ \\begin{align} J_1 (\\boldsymbol{x}', \\boldsymbol{u}') & = w_y \\sum_{k} y_k^2 + w_{\\theta} \\sum_{k} \\theta_k^2 + w_{\\delta} \\sum_k \\delta_k^2 + w_{\\dot{\\delta}} \\sum_k \\dot{\\delta}_k^2 + w_{\\ddot{\\delta}} \\sum_k \\ddot{\\delta}_k^2 \\\\ & = \\boldsymbol{x}'^T Q \\boldsymbol{x}' + \\boldsymbol{u}'^T R \\boldsymbol{u}' \\\\ & = \\boldsymbol{u}'^T H \\boldsymbol{u}' + \\boldsymbol{u}'^T \\boldsymbol{f} \\end{align} \\]As mentioned before, the constraints to be collision free with obstacles and road boundaries are formulated to be soft constraints. Assuming that the lateral distance to the road boundaries or obstacles from the back wheel center, front wheel center, and the point between them are \\(y_{\\mathrm{base}, k}, y_{\\mathrm{top}, k}, y_{\\mathrm{mid}, k}\\) respectively, and slack variables for each point are \\(\\lambda_{\\mathrm{base}}, \\lambda_{\\mathrm{top}}, \\lambda_{\\mathrm{mid}}\\), the soft constraints can be formulated as follows.
\\[ y_{\\mathrm{base}, k, \\min} - \\lambda_{\\mathrm{base}, k} \\leq y_{\\mathrm{base}, k} (y_k) \\leq y_{\\mathrm{base}, k, \\max} + \\lambda_{\\mathrm{base}, k}\\\\ y_{\\mathrm{top}, k, \\min} - \\lambda_{\\mathrm{top}, k} \\leq y_{\\mathrm{top}, k} (y_k) \\leq y_{\\mathrm{top}, k, \\max} + \\lambda_{\\mathrm{top}, k}\\\\ y_{\\mathrm{mid}, k, \\min} - \\lambda_{\\mathrm{mid}, k} \\leq y_{\\mathrm{mid}, k} (y_k) \\leq y_{\\mathrm{mid}, k, \\max} + \\lambda_{\\mathrm{mid}, k} \\\\ 0 \\leq \\lambda_{\\mathrm{base}, k} \\\\ 0 \\leq \\lambda_{\\mathrm{top}, k} \\\\ 0 \\leq \\lambda_{\\mathrm{mid}, k} \\]Since \\(y_{\\mathrm{base}, k}, y_{\\mathrm{top}, k}, y_{\\mathrm{mid}, k}\\) is formulated as a linear function of \\(y_k\\), the objective function for soft constraints is formulated as follows.
\\[ \\begin{align} J_2 & (\\boldsymbol{\\lambda}_\\mathrm{base}, \\boldsymbol{\\lambda}_\\mathrm{top}, \\boldsymbol {\\lambda}_\\mathrm{mid})\\\\ & = w_{\\mathrm{base}} \\sum_{k} \\lambda_{\\mathrm{base}, k} + w_{\\mathrm{mid}} \\sum_k \\lambda_{\\mathrm{mid}, k} + w_{\\mathrm{top}} \\sum_k \\lambda_{\\mathrm{top}, k} \\end{align} \\]Slack variables are also design variables for optimization. We define a vector \\(\\boldsymbol{v}\\), that concatenates all the design variables.
\\[ \\begin{align} \\boldsymbol{v} = \\begin{pmatrix} \\boldsymbol{u}'^T & \\boldsymbol{\\lambda}_\\mathrm{base}^T & \\boldsymbol{\\lambda}_\\mathrm{top}^T & \\boldsymbol{\\lambda}_\\mathrm{mid}^T \\end{pmatrix}^T \\end{align} \\]The summation of these two objective functions is the objective function for the optimization problem.
\\[ \\begin{align} \\min_{\\boldsymbol{v}} J (\\boldsymbol{v}) = \\min_{\\boldsymbol{v}} J_1 (\\boldsymbol{u}') + J_2 (\\boldsymbol{\\lambda}_\\mathrm{base}, \\boldsymbol{\\lambda}_\\mathrm{top}, \\boldsymbol{\\lambda}_\\mathrm{mid}) \\end{align} \\]As mentioned before, we use hard constraints where some trajectory points in front of the ego are the same as the previously generated trajectory points. This hard constraints is formulated as follows.
\\[ \\begin{align} \\delta_k = \\delta_{k}^{\\mathrm{prev}} (0 \\leq i \\leq N_{\\mathrm{fix}}) \\end{align} \\]Finally we transform those objective functions to the following QP problem, and solve it.
\\[ \\begin{align} \\min_{\\boldsymbol{v}} \\ & \\frac{1}{2} \\boldsymbol{v}^T \\boldsymbol{H} \\boldsymbol{v} + \\boldsymbol{f} \\boldsymbol{v} \\\\ \\mathrm{s.t.} \\ & \\boldsymbol{b}_{lower} \\leq \\boldsymbol{A} \\boldsymbol{v} \\leq \\boldsymbol{b}_{upper} \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#constraints","title":"Constraints","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#steer-angle-limitation","title":"Steer angle limitation","text":"Steer angle has a limitation \\(\\delta_{max}\\) and \\(\\delta_{min}\\). Therefore we add linear inequality equations.
\\[ \\begin{align} \\delta_{min} \\leq \\delta_i \\leq \\delta_{max} \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#collision-free","title":"Collision free","text":"To realize collision-free trajectory planning, we have to formulate constraints that the vehicle is inside the road and also does not collide with obstacles in linear equations. For linearity, we implemented some methods to approximate the vehicle shape with a set of circles, that is reliable and easy to implement.
Now we formulate the linear constraints where a set of circles on each trajectory point is collision-free. By using the drivable area, we calculate upper and lower boundaries along reference points, which will be interpolated on any position on the trajectory. NOTE that upper and lower boundary is left and right, respectively.
Assuming that upper and lower boundaries are \\(b_l\\), \\(b_u\\) respectively, and \\(r\\) is a radius of a circle, lateral deviation of the circle center \\(y'\\) has to be
\\[ b_l + r \\leq y' \\leq b_u - r. \\]Based on the following figure, \\(y'\\) can be formulated as follows.
\\[ \\begin{align} y' & = L \\sin(\\theta + \\beta) + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\\\ & = L \\sin \\theta \\cos \\beta + L \\cos \\theta \\sin \\beta + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\\\ & \\approx L \\theta \\cos \\beta + L \\sin \\beta + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\end{align} \\] \\[ b_l + r - \\lambda \\leq y' \\leq b_u - r + \\lambda. \\] \\[ \\begin{align} y' & = C_1 \\boldsymbol{x} + C_2 \\\\ & = C_1 (B \\boldsymbol{v} + \\boldsymbol{w}) + C_2 \\\\ & = C_1 B \\boldsymbol{v} + \\boldsymbol{w} + C_2 \\end{align} \\]Note that longitudinal position of the circle center and the trajectory point to calculate boundaries are different. But each boundaries are vertical against the trajectory, resulting in less distortion by the longitudinal position difference since road boundaries does not change so much. For example, if the boundaries are not vertical against the trajectory and there is a certain difference of longitudinal position between the circe center and the trajectory point, we can easily guess that there is much more distortion when comparing lateral deviation and boundaries.
\\[ \\begin{align} A_{blk} & = \\begin{pmatrix} C_1 B & O & \\dots & O & I_{N_{ref} \\times N_{ref}} & O \\dots & O\\\\ -C_1 B & O & \\dots & O & I & O \\dots & O\\\\ O & O & \\dots & O & I & O \\dots & O \\end{pmatrix} \\in \\boldsymbol{R}^{3 N_{ref} \\times D_v + N_{circle} N_{ref}} \\\\ \\boldsymbol{b}_{lower, blk} & = \\begin{pmatrix} \\boldsymbol{b}_{lower} - C_1 \\boldsymbol{w} - C_2 \\\\ -\\boldsymbol{b}_{upper} + C_1 \\boldsymbol{w} + C_2 \\\\ O \\end{pmatrix} \\in \\boldsymbol{R}^{3 N_{ref}} \\\\ \\boldsymbol{b}_{upper, blk} & = \\boldsymbol{\\infty} \\in \\boldsymbol{R}^{3 N_{ref}} \\end{align} \\]We will explain options for optimization.
"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#l-infinity-optimization","title":"L-infinity optimization","text":"The above formulation is called L2 norm for slack variables. Instead, if we use L-infinity norm where slack variables are shared by enabling l_inf_norm
.
In order to make the trajectory optimization problem more stable to solve, the boundary constraints that the trajectory footprints must stay inside, as well as the optimization weights, are modified.
"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#keep-minimum-boundary-width","title":"Keep minimum boundary width","text":"The drivable area's width is sometimes smaller than the vehicle width since the behavior module does not consider the width. To realize the stable trajectory optimization, the drivable area's width is guaranteed to be larger than the vehicle width and an additional margin in a rule-based way.
Since we cannot distinguish the boundaries formed by roads from the boundaries formed by obstacles to avoid in the motion planner, the drivable area is modified in the following multiple steps, assuming that \\(l_{width}\\) is the vehicle width and \\(l_{margin}\\) is an additional margin.
"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#extend-violated-boundary","title":"Extend violated boundary","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#avoid-sudden-steering","title":"Avoid sudden steering","text":"When the obstacle suddenly appears which is determined to avoid by the behavior module, the drivable area's shape just in front of the ego will change, resulting in the sudden steering. To prevent this, the drivable area's shape close to the ego is fixed as previous drivable area's shape.
Assume that \\(v_{ego}\\) is the ego velocity, and \\(t_{fix}\\) is the time to fix the forward drivable area's shape.
"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#calculate-avoidance-cost","title":"Calculate avoidance cost","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#change-optimization-weights","title":"Change optimization weights","text":"\\[ \\begin{align} r & = \\mathrm{lerp}(w^{\\mathrm{steer}}_{\\mathrm{normal}}, w^{\\mathrm{steer}}_{\\mathrm{avoidance}}, c) \\\\ w^{\\mathrm{lat}} & = \\mathrm{lerp}(w^{\\mathrm{lat}}_{\\mathrm{normal}}, w^{\\mathrm{lat}}_{\\mathrm{avoidance}}, r) \\\\ w^{\\mathrm{yaw}} & = \\mathrm{lerp}(w^{\\mathrm{yaw}}_{\\mathrm{normal}}, w^{\\mathrm{yaw}}_{\\mathrm{avoidance}}, r) \\end{align} \\]Assume that \\(c\\) is the normalized avoidance cost, \\(w^{\\mathrm{lat}}\\) is the weight for lateral error, \\(w^{\\mathrm{yaw}}\\) is the weight for yaw error, and other variables are as follows.
| Parameter | Type | Description |
| --- | --- | --- |
| \\(w^{\\mathrm{steer}}_{\\mathrm{normal}}\\) | double | weight for steering minimization in normal cases |
| \\(w^{\\mathrm{steer}}_{\\mathrm{avoidance}}\\) | double | weight for steering minimization in avoidance cases |
| \\(w^{\\mathrm{lat}}_{\\mathrm{normal}}\\) | double | weight for lateral error minimization in normal cases |
| \\(w^{\\mathrm{lat}}_{\\mathrm{avoidance}}\\) | double | weight for lateral error minimization in avoidance cases |
| \\(w^{\\mathrm{yaw}}_{\\mathrm{normal}}\\) | double | weight for yaw error minimization in normal cases |
| \\(w^{\\mathrm{yaw}}_{\\mathrm{avoidance}}\\) | double | weight for yaw error minimization in avoidance cases |
"},{"location":"planning/obstacle_cruise_planner/","title":"Obstacle Cruise Planner","text":""},{"location":"planning/obstacle_cruise_planner/#obstacle-cruise-planner","title":"Obstacle Cruise Planner","text":""},{"location":"planning/obstacle_cruise_planner/#overview","title":"Overview","text":"The obstacle_cruise_planner
package has the following modules.
| Name | Type | Description |
| --- | --- | --- |
| ~/input/trajectory | autoware_auto_planning_msgs::Trajectory | input trajectory |
| ~/input/objects | autoware_auto_perception_msgs::PredictedObjects | dynamic objects |
| ~/input/odometry | nav_msgs::msg::Odometry | ego odometry |
"},{"location":"planning/obstacle_cruise_planner/#output-topics","title":"Output topics","text":"
| Name | Type | Description |
| --- | --- | --- |
| ~/output/trajectory | autoware_auto_planning_msgs::Trajectory | output trajectory |
| ~/output/velocity_limit | tier4_planning_msgs::VelocityLimit | velocity limit for cruising |
| ~/output/clear_velocity_limit | tier4_planning_msgs::VelocityLimitClearCommand | clear command for velocity limit |
| ~/output/stop_reasons | tier4_planning_msgs::StopReasonArray | reasons that make the vehicle stop |
"},{"location":"planning/obstacle_cruise_planner/#design","title":"Design","text":"Design for the following functions is defined here.
A data structure for cruise and stop planning is as follows. This planner data is created first, and then sent to the planning algorithm.
struct PlannerData
{
  rclcpp::Time current_time;
  autoware_auto_planning_msgs::msg::Trajectory traj;  // input trajectory
  geometry_msgs::msg::Pose current_pose;              // ego pose
  double ego_vel;                                     // ego velocity
  double current_acc;                                 // ego acceleration
  std::vector<Obstacle> target_obstacles;
};
struct Obstacle
{
  rclcpp::Time stamp;             // This is not the current stamp, but when the object was observed.
  geometry_msgs::msg::Pose pose;  // interpolated with the current stamp
  bool orientation_reliable;
  Twist twist;
  bool twist_reliable;
  ObjectClassification classification;
  std::string uuid;
  Shape shape;
  std::vector<PredictedPath> predicted_paths;
};
"},{"location":"planning/obstacle_cruise_planner/#behavior-determination-against-obstacles","title":"Behavior determination against obstacles","text":"Obstacles for cruising, stopping and slowing down are selected in this order based on their pose and velocity. The obstacles not in front of the ego will be ignored.
"},{"location":"planning/obstacle_cruise_planner/#determine-cruise-vehicles","title":"Determine cruise vehicles","text":"The obstacles meeting the following condition are determined as obstacles for cruising.
The lateral distance from the object to the ego's trajectory is smaller than behavior_determination.cruise.max_lat_margin.
The object type is for cruising according to common.cruise_obstacle_type.*, and the object is not crossing the ego's trajectory (*1).
If the object is inside the trajectory: the object type is for inside cruising according to common.cruise_obstacle_type.inside.*, and the object velocity is larger than behavior_determination.obstacle_velocity_threshold_from_cruise_to_stop.
If the object is outside the trajectory: the object type is for outside cruising according to common.cruise_obstacle_type.outside.*, the object velocity is larger than behavior_determination.cruise.outside_obstacle.obstacle_velocity_threshold, and the predicted collision period with the ego's trajectory is longer than behavior_determination.cruise.outside_obstacle.ego_obstacle_overlap_time_threshold.
| Parameter | Type | Description |
| --- | --- | --- |
| common.cruise_obstacle_type.inside.unknown | bool | flag to consider unknown objects for cruising |
| common.cruise_obstacle_type.inside.car | bool | flag to consider car objects for cruising |
| common.cruise_obstacle_type.inside.truck | bool | flag to consider truck objects for cruising |
| ... | bool | ... |
| common.cruise_obstacle_type.outside.unknown | bool | flag to consider unknown objects for cruising |
| common.cruise_obstacle_type.outside.car | bool | flag to consider car objects for cruising |
| common.cruise_obstacle_type.outside.truck | bool | flag to consider truck objects for cruising |
| ... | bool | ... |
| behavior_determination.cruise.max_lat_margin | double | maximum lateral margin for cruise obstacles |
| behavior_determination.obstacle_velocity_threshold_from_cruise_to_stop | double | maximum obstacle velocity for cruise obstacle inside the trajectory |
| behavior_determination.cruise.outside_obstacle.obstacle_velocity_threshold | double | maximum obstacle velocity for cruise obstacle outside the trajectory |
| behavior_determination.cruise.outside_obstacle.ego_obstacle_overlap_time_threshold | double | maximum overlap time of the collision between the ego and obstacle |
"},{"location":"planning/obstacle_cruise_planner/#determine-stop-vehicles","title":"Determine stop vehicles","text":"Among obstacles which are not for cruising, the obstacles meeting the following conditions are determined as obstacles for stopping.
The object type is for stopping according to common.stop_obstacle_type.*.
The lateral distance from the object to the ego's trajectory is smaller than behavior_determination.stop.max_lat_margin.
The object velocity along the ego's trajectory is smaller than behavior_determination.obstacle_velocity_threshold_from_stop_to_cruise.
For crossing obstacles (*1), the velocity threshold behavior_determination.crossing_obstacle.obstacle_velocity_threshold and the collision time margin (*2) are additionally considered.
| Parameter | Type | Description |
| --- | --- | --- |
| common.stop_obstacle_type.unknown | bool | flag to consider unknown objects for stopping |
| common.stop_obstacle_type.car | bool | flag to consider car objects for stopping |
| common.stop_obstacle_type.truck | bool | flag to consider truck objects for stopping |
| ... | bool | ... |
| behavior_determination.stop.max_lat_margin | double | maximum lateral margin for stop obstacles |
| behavior_determination.crossing_obstacle.obstacle_velocity_threshold | double | maximum crossing obstacle velocity to ignore |
| behavior_determination.obstacle_velocity_threshold_from_stop_to_cruise | double | maximum obstacle velocity for stop |
"},{"location":"planning/obstacle_cruise_planner/#determine-slow-down-vehicles","title":"Determine slow down vehicles","text":"Among obstacles which are not for cruising and stopping, the obstacles meeting the following conditions are determined as obstacles for slowing down.
The object type is for slowing down according to common.slow_down_obstacle_type.*.
The lateral distance from the object to the ego's trajectory is smaller than behavior_determination.slow_down.max_lat_margin.
| Parameter | Type | Description |
| --- | --- | --- |
| common.slow_down_obstacle_type.unknown | bool | flag to consider unknown objects for slowing down |
| common.slow_down_obstacle_type.car | bool | flag to consider car objects for slowing down |
| common.slow_down_obstacle_type.truck | bool | flag to consider truck objects for slowing down |
| ... | bool | ... |
| behavior_determination.slow_down.max_lat_margin | double | maximum lateral margin for slow down obstacles |
"},{"location":"planning/obstacle_cruise_planner/#note","title":"NOTE","text":""},{"location":"planning/obstacle_cruise_planner/#1-crossing-obstacles","title":"*1: Crossing obstacles","text":"A crossing obstacle is an object whose yaw angle relative to the ego's trajectory is smaller than behavior_determination.crossing_obstacle.obstacle_traj_angle_threshold
.
| Parameter | Type | Description |
| --- | --- | --- |
| behavior_determination.crossing_obstacle.obstacle_traj_angle_threshold | double | maximum angle against the ego's trajectory to judge the obstacle is crossing the trajectory [rad] |
"},{"location":"planning/obstacle_cruise_planner/#2-enough-collision-time-margin","title":"*2: Enough collision time margin","text":"We predict the collision area and the collision time, assuming that the ego moves with constant velocity and the obstacle follows its predicted path. Then, we calculate the collision time margin, which is the difference between the time when the ego will be inside the collision area and the time when the obstacle will be inside it. When this time margin is smaller than behavior_determination.stop.crossing_obstacle.collision_time_margin, the margin is not enough.
| Parameter | Type | Description |
| --- | --- | --- |
| behavior_determination.stop.crossing_obstacle.collision_time_margin | double | maximum collision time margin of the ego and obstacle |
"},{"location":"planning/obstacle_cruise_planner/#stop-planning","title":"Stop planning","text":"
| Parameter | Type | Description |
| --- | --- | --- |
| common.min_strong_accel | double | ego's minimum acceleration to stop [m/ss] |
| common.safe_distance_margin | double | distance with obstacles for stop [m] |
| common.terminal_safe_distance_margin | double | terminal distance with obstacles for stop, which cannot exceed the safe distance margin [m] |
The role of stop planning is to keep a safe distance from static vehicle objects and dynamic/static non-vehicle objects.
The stop planning just inserts a stop point in the trajectory to keep a distance from obstacles. The safe distance is parameterized as common.safe_distance_margin. When the ego stops at the end of the trajectory and an obstacle is at that same point, the safe distance becomes terminal_safe_distance_margin.
When inserting the stop point, the required acceleration for the ego to stop in front of the stop point is calculated. If the acceleration is less than common.min_strong_accel
, the stop planning will be cancelled since this package does not assume a strong sudden brake for emergency.
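As a rough sketch of this cancellation check (constant-deceleration kinematics; the function and variable names here are illustrative, not the package's actual API):

```cpp
#include <optional>

// Returns the stop distance if stopping is planned, or std::nullopt if the
// required deceleration would be stronger than min_strong_accel (a negative value).
std::optional<double> planStopDistance(
  double ego_vel, double dist_to_stop_point, double min_strong_accel)
{
  // Required (negative) acceleration to stop exactly at the stop point: v^2 = -2 a d.
  const double required_accel = -(ego_vel * ego_vel) / (2.0 * dist_to_stop_point);
  if (required_accel < min_strong_accel) {
    return std::nullopt;  // too strong a brake would be needed; cancel stop planning
  }
  return dist_to_stop_point;
}
```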
| Parameter | Type | Description |
| --- | --- | --- |
| common.safe_distance_margin | double | minimum distance with obstacles for cruise [m] |
The role of cruise planning is to keep a safe distance from dynamic vehicle objects with a smooth velocity transition. This includes not only cruising behind a front vehicle, but also reacting to cut-in and cut-out vehicles.
The safe distance is calculated dynamically based on the Responsibility-Sensitive Safety (RSS) by the following equation.
\\[ d_{rss} = v_{ego} t_{idling} + \\frac{1}{2} a_{ego} t_{idling}^2 + \\frac{v_{ego}^2}{2 a_{ego}} - \\frac{v_{obstacle}^2}{2 a_{obstacle}}, \\]assuming that \\(d_{rss}\\) is the calculated safe distance, \\(t_{idling}\\) is the idling time for the ego to detect the front vehicle's deceleration, \\(v_{ego}\\) is the ego's current velocity, \\(v_{obstacle}\\) is the front obstacle's current velocity, \\(a_{ego}\\) is the ego's acceleration, and \\(a_{obstacle}\\) is the obstacle's acceleration. These values are parameterized as follows. Other common values, such as the ego's minimum acceleration, are defined in common.param.yaml.
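To make the equation concrete, the following is a minimal sketch of the RSS distance computation, assuming the acceleration inputs are deceleration magnitudes taken from the parameters listed below; the non-negative clamp at the end is an added guard, not part of the formula above.

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the RSS safe distance: idling distance plus ego braking distance
// minus the front obstacle's braking distance.
double calcRssDistance(
  double ego_vel, double obstacle_vel, double idling_time,
  double ego_accel, double obstacle_accel)
{
  const double idling_dist =
    ego_vel * idling_time + 0.5 * std::abs(ego_accel) * idling_time * idling_time;
  const double ego_stop_dist = (ego_vel * ego_vel) / (2.0 * std::abs(ego_accel));
  const double obstacle_stop_dist =
    (obstacle_vel * obstacle_vel) / (2.0 * std::abs(obstacle_accel));
  return std::max(0.0, idling_dist + ego_stop_dist - obstacle_stop_dist);
}
```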
| Parameter | Type | Description |
| --- | --- | --- |
| common.idling_time | double | idling time for the ego to detect the front vehicle starting deceleration [s] |
| common.min_ego_accel_for_rss | double | ego's acceleration for RSS [m/ss] |
| common.min_object_accel_for_rss | double | front obstacle's acceleration for RSS [m/ss] |
The detailed formulation is as follows.
\\[ \\begin{align} d_{error} & = d - d_{rss} \\\\ d_{normalized} & = lpf(d_{error} / d_{obstacle}) \\\\ d_{quad, normalized} & = sign(d_{normalized}) \\cdot d_{normalized}^2 \\\\ v_{pid} & = pid(d_{quad, normalized}) \\\\ v_{add} & = v_{pid} > 0 ? v_{pid} \\cdot w_{acc} : v_{pid} \\\\ v_{target} & = max(v_{ego} + v_{add}, v_{min, cruise}) \\end{align} \\]
| Variable | Description |
| --- | --- |
| d | actual distance to obstacle |
| d_{rss} | ideal distance to obstacle based on RSS |
| v_{min, cruise} | min_cruise_target_vel |
| w_{acc} | output_ratio_during_accel |
| lpf(val) | apply low-pass filter to val |
| pid(val) | apply pid to val |
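A minimal sketch of this formulation follows; the low-pass filter and PID are reduced to one-step forms for brevity, the normalization distance d_obstacle is treated as a given input, and all gains are placeholders rather than the package defaults.

```cpp
#include <algorithm>
#include <cmath>

// One-step sketch of the cruise velocity computation above.
struct CruisePlannerSketch
{
  double lpf_state = 0.0;
  double lpf_gain = 0.5;               // placeholder smoothing gain
  double kp = 2.0;                     // placeholder P gain (I and D omitted)
  double w_acc = 0.5;                  // output_ratio_during_accel
  double min_cruise_target_vel = 0.0;  // v_{min, cruise}

  double update(double d, double d_rss, double d_obstacle, double v_ego)
  {
    const double d_error = d - d_rss;
    // lpf(d_error / d_obstacle)
    lpf_state = lpf_gain * lpf_state + (1.0 - lpf_gain) * (d_error / d_obstacle);
    // sign(d_normalized) * d_normalized^2
    const double d_quad = std::copysign(lpf_state * lpf_state, lpf_state);
    const double v_pid = kp * d_quad;                        // pid(d_quad_normalized)
    const double v_add = v_pid > 0.0 ? v_pid * w_acc : v_pid;
    return std::max(v_ego + v_add, min_cruise_target_vel);   // v_target
  }
};
```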
"},{"location":"planning/obstacle_cruise_planner/#slow-down-planning","title":"Slow down planning","text":"Parameter Type Description slow_down.labels
vector(string) A vector of labels for customizing obstacle-label-based slow down behavior. Each label represents an obstacle type that will be treated differently when applying slow down. The possible labels are (\"default\" (Mandatory), \"unknown\",\"car\",\"truck\",\"bus\",\"trailer\",\"motorcycle\",\"bicycle\" or \"pedestrian\") slow_down.default.static.min_lat_velocity
double minimum velocity to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.static.max_lat_velocity
double maximum velocity to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.static.min_lat_margin
double minimum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.static.max_lat_margin
double maximum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.moving.min_lat_velocity
double minimum velocity to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving slow_down.default.moving.max_lat_velocity
double maximum velocity to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving slow_down.default.moving.min_lat_margin
double minimum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving slow_down.default.moving.max_lat_margin
double maximum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving (optional) slow_down.\"label\".(static & moving).min_lat_velocity
double minimum velocity to linearly calculate slow down velocity [m]. Note: only for obstacles specified in slow_down.labels
. Requires a static
and a moving
value (optional) slow_down.\"label\".(static & moving).max_lat_velocity
double maximum velocity to linearly calculate slow down velocity [m]. Note: only for obstacles specified in slow_down.labels
. Requires a static
and a moving
value (optional) slow_down.\"label\".(static & moving).min_lat_margin
double minimum lateral margin to linearly calculate slow down velocity [m]. Note: only for obstacles specified in slow_down.labels
. Requires a static
and a moving
value (optional) slow_down.\"label\".(static & moving).max_lat_margin
double maximum lateral margin to linearly calculate slow down velocity [m]. Note: only for obstacles specified in slow_down.labels
. Requires a static
and a moving
value The role of the slow down planning is inserting slow down velocity in the trajectory where the trajectory points are close to the obstacles. The parameters can be customized depending on the obstacle type (see slow_down.labels
), making it possible to adjust the slow down behavior depending on whether the obstacle is a pedestrian, bicycle, car, etc. Each obstacle type has a static and a moving parameter set, so it is possible to customize the slow down response of the ego vehicle according to the obstacle type and whether it is moving or not. If an obstacle is determined to be moving, the corresponding moving set of parameters will be used to compute the vehicle slow down; otherwise, the static parameters will be used. The static and moving separation is useful for customizing the ego vehicle slow down behavior to, for example, slow down more significantly when passing stopped vehicles that might cause occlusion or that might suddenly open their doors.
An obstacle is classified as static
if its total speed is less than the moving_object_speed_threshold
parameter. Furthermore, a hysteresis-based approach is used to avoid chattering; it uses the moving_object_hysteresis_range
parameter range and the obstacle's previous state (moving
or static
) to determine if the obstacle is moving or not. In other words, if an obstacle was previously classified as static
, it will not change its classification to moving
unless its total speed is greater than moving_object_speed_threshold
+ moving_object_hysteresis_range
. Likewise, an obstacle previously classified as moving
, will only change to static
if its speed is lower than moving_object_speed_threshold
- moving_object_hysteresis_range
.
The closest point on the obstacle to the ego's trajectory is calculated. Then, the slow down velocity is calculated by linear interpolation with the distance between the point and trajectory as follows.
| Variable | Description |
| --- | --- |
| v_{out} | calculated velocity for slow down |
| v_{min} | slow_down.min_lat_velocity |
| v_{max} | slow_down.max_lat_velocity |
| l_{min} | slow_down.min_lat_margin |
| l_{max} | slow_down.max_lat_margin |
| l'_{max} | behavior_determination.slow_down.max_lat_margin |
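The following sketch illustrates the two pieces described in this section: the moving/static classification with hysteresis, and one plausible form of the linear interpolation of the slow down velocity from the lateral distance (the exact interpolation endpoints are an assumption, since the original figure is not reproduced here).

```cpp
#include <algorithm>

// An obstacle only changes state when its speed leaves the hysteresis band.
bool isMoving(bool was_moving, double speed, double threshold, double hysteresis)
{
  return was_moving ? speed > threshold - hysteresis : speed > threshold + hysteresis;
}

// Linear interpolation of the slow down velocity between [v_min, v_max]
// according to the lateral distance within [l_min, l_max].
double calcSlowDownVelocity(
  double lat_dist, double l_min, double l_max, double v_min, double v_max)
{
  const double ratio = std::clamp((lat_dist - l_min) / (l_max - l_min), 0.0, 1.0);
  return v_min + ratio * (v_max - v_min);
}
```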
The calculated velocity is inserted in the trajectory where the obstacle is inside the area with behavior_determination.slow_down.max_lat_margin
.
Successive functions consist of obstacle_cruise_planner
as follows.
Various algorithms for stop and cruise planning will be implemented, and one of them is designated depending on the use cases. The core algorithm implementation generateTrajectory
depends on the designated algorithm.
Currently, only a PID-based planner is supported. Each planner will be explained in the following.
| Parameter | Type | Description |
| --- | --- | --- |
| common.planning_method | string | cruise and stop planning algorithm, selected from "pid_base" |
"},{"location":"planning/obstacle_cruise_planner/#pid-based-planner","title":"PID-based planner","text":""},{"location":"planning/obstacle_cruise_planner/#stop-planning_1","title":"Stop planning","text":"In the pid_based_planner
namespace,
obstacle_velocity_threshold_from_cruise_to_stop
double obstacle velocity threshold to be stopped from cruise [m/s] Only one obstacle is targeted for the stop planning. It is the obstacle among the obstacle candidates whose velocity is less than obstacle_velocity_threshold_from_cruise_to_stop
, and which is the nearest to the ego along the trajectory. A stop point is inserted keeping a common.safe_distance_margin
distance between the ego and obstacle.
Note that, as explained in the stop planning design, a stop planning which requires a strong acceleration (less than common.min_strong_accel
) will be canceled.
In the pid_based_planner
namespace,
| Parameter | Type | Description |
| --- | --- | --- |
| kp | double | p gain for pid control [-] |
| ki | double | i gain for pid control [-] |
| kd | double | d gain for pid control [-] |
| output_ratio_during_accel | double | the output velocity is multiplied by this ratio during acceleration to follow the front vehicle [-] |
| vel_to_acc_weight | double | target acceleration is target velocity * vel_to_acc_weight [-] |
| min_cruise_target_vel | double | minimum target velocity during cruise [m/s] |
In order to keep the safe distance, the target velocity and acceleration are calculated and sent as an external velocity limit to the velocity smoothing package (motion_velocity_smoother
by default). The target velocity and acceleration are each calculated with the PID controller according to the error between the reference safe distance and the actual distance.
under construction
"},{"location":"planning/obstacle_cruise_planner/#minor-functions","title":"Minor functions","text":""},{"location":"planning/obstacle_cruise_planner/#prioritization-of-behavior-modules-stop-point","title":"Prioritization of behavior module's stop point","text":"When stopping for a pedestrian walking on the crosswalk, the behavior module inserts the zero velocity in the trajectory in front of the crosswalk. Also obstacle_cruise_planner
's stop planning is active at the same time, and the ego may not reach the behavior module's stop point since the safe distance defined in obstacle_cruise_planner
may be longer than the behavior module's safe distance. To resolve this non-alignment of the stop point between the behavior module and obstacle_cruise_planner
, common.min_behavior_stop_margin
is defined. In the case of the crosswalk described above, obstacle_cruise_planner
inserts the stop point with a distance common.min_behavior_stop_margin
at minimum between the ego and obstacle.
common.min_behavior_stop_margin
double minimum stop margin when stopping with the behavior module enabled [m]"},{"location":"planning/obstacle_cruise_planner/#a-function-to-keep-the-closest-stop-obstacle-in-target-obstacles","title":"A function to keep the closest stop obstacle in target obstacles","text":"In order to keep the closest stop obstacle in the target obstacles, we check whether it has disappeared from the target obstacles in the checkConsistency
function. If the previous closest stop obstacle is removed from the lists, we keep it in the lists for stop_obstacle_hold_time_threshold
seconds. Note that if a new stop obstacle appears and the previous closest obstacle is removed from the lists, we do not add it to the target obstacles again.
behavior_determination.stop_obstacle_hold_time_threshold
double maximum time for holding closest stop obstacle [s]"},{"location":"planning/obstacle_cruise_planner/#how-to-debug","title":"How To Debug","text":"How to debug can be seen here.
"},{"location":"planning/obstacle_cruise_planner/#known-limits","title":"Known Limits","text":"rough_detection_area
a small value.motion_velocity_smoother
by default) whether or not the ego realizes the designated target speed. If the velocity smoothing package is updated, please take care of the vehicle's behavior as much as possible.Green polygons which is a detection area is visualized by detection_polygons
in the ~/debug/marker
topic. To determine each behavior (cruise, stop, and slow down), if behavior_determination.*.max_lat_margin
is not zero, the polygons are expanded with the additional width.
Red points which are collision points with obstacle are visualized by *_collision_points
for each behavior in the ~/debug/marker
topic.
Orange sphere which is an obstacle for cruise is visualized by obstacles_to_cruise
in the ~/debug/marker
topic.
Orange wall which means a safe distance to cruise if the ego's front meets the wall is visualized in the ~/debug/cruise/virtual_wall
topic.
Red sphere which is an obstacle for stop is visualized by obstacles_to_stop
in the ~/debug/marker
topic.
Red wall which means a safe distance to stop if the ego's front meets the wall is visualized in the ~/virtual_wall
topic.
Yellow sphere which is an obstacle for slow_down is visualized by obstacles_to_slow_down
in the ~/debug/marker
topic.
Yellow wall which means a safe distance to slow_down if the ego's front meets the wall is visualized in the ~/debug/slow_down/virtual_wall
topic.
obstacle_stop_planner
has following modules
~/input/pointcloud
sensor_msgs::PointCloud2 obstacle pointcloud ~/input/trajectory
autoware_auto_planning_msgs::Trajectory trajectory ~/input/vector_map
autoware_auto_mapping_msgs::HADMapBin vector map ~/input/odometry
nav_msgs::Odometry vehicle velocity ~/input/dynamic_objects
autoware_auto_perception_msgs::PredictedObjects dynamic objects ~/input/expand_stop_range
tier4_planning_msgs::msg::ExpandStopRange expand stop range"},{"location":"planning/obstacle_stop_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/trajectory
autoware_auto_planning_msgs::Trajectory trajectory to be followed ~output/stop_reasons
tier4_planning_msgs::StopReasonArray reasons that cause the vehicle to stop"},{"location":"planning/obstacle_stop_planner/#common-parameter","title":"Common Parameter","text":"Parameter Type Description enable_slow_down
bool enable slow down planner [-] max_velocity
double max velocity [m/s] chattering_threshold
double even if the obstacle disappears, the stop judgment continues for chattering_threshold [s] enable_z_axis_obstacle_filtering
bool filter obstacles in z axis (height) [-] z_axis_filtering_buffer
double additional buffer for z axis filtering [m] use_predicted_objects
bool whether to use predicted objects for collision and slowdown detection [-] predicted_object_filtering_threshold
double threshold for filtering predicted objects [valid only publish_obstacle_polygon true] [m] publish_obstacle_polygon
bool if use_predicted_objects is true, node publishes collision polygon [-]"},{"location":"planning/obstacle_stop_planner/#obstacle-stop-planner_1","title":"Obstacle Stop Planner","text":""},{"location":"planning/obstacle_stop_planner/#role","title":"Role","text":"This module inserts the stop point before the obstacle with margin. In nominal case, the margin is the sum of baselink_to_front
and max_longitudinal_margin
. The baselink_to_front
means the distance between baselink
( center of rear-wheel axis) and front of the car. The detection area is generated along the processed trajectory as following figure. (This module cut off the input trajectory behind the ego position and decimates the trajectory points for reducing computational costs.)
parameters for obstacle stop planner
target for obstacle stop planner
If another stop point has already been inserted by other modules within max_longitudinal_margin
, the margin is the sum of baselink_to_front
and min_longitudinal_margin
. This feature exists to avoid stopping unnaturally position. (For example, the ego stops unnaturally far away from stop line of crosswalk that pedestrians cross to without this feature.)
minimum longitudinal margin
The module searches the obstacle pointcloud within detection area. When the pointcloud is found, Adaptive Cruise Controller
modules starts to work. only when Adaptive Cruise Controller
modules does not insert target velocity, the stop point is inserted to the trajectory. The stop point means the point with 0 velocity.
If it needs X meters (e.g. 0.5 meters) to stop once the vehicle starts moving due to the poor vehicle control performance, the vehicle goes over the stopping position that should be strictly observed when the vehicle starts to moving in order to approach the near stop point (e.g. 0.3 meters away).
This module has parameter hold_stop_margin_distance
in order to prevent from these redundant restart. If the vehicle is stopped within hold_stop_margin_distance
meters from stop point of the module, the module judges that the vehicle has already stopped for the module's stop point and plans to keep stopping current position even if the vehicle is stopped due to other factors.
parameters
outside the hold_stop_margin_distance
inside the hold_stop_margin_distance"},{"location":"planning/obstacle_stop_planner/#parameters","title":"Parameters","text":""},{"location":"planning/obstacle_stop_planner/#stop-position","title":"Stop position","text":"Parameter Type Description
max_longitudinal_margin
double margin between obstacle and the ego's front [m] max_longitudinal_margin_behind_goal
double margin between obstacle and the ego's front when the stop point is behind the goal[m] min_longitudinal_margin
double if any obstacle exists within max_longitudinal_margin
, this module set margin as the value of stop margin to min_longitudinal_margin
[m] hold_stop_margin_distance
double parameter for restart prevention (See above section) [m]"},{"location":"planning/obstacle_stop_planner/#obstacle-detection-area","title":"Obstacle detection area","text":"Parameter Type Description lateral_margin
double lateral margin from the vehicle footprint for collision obstacle detection area [m] step_length
double step length for pointcloud search range [m] enable_stop_behind_goal_for_obstacle
bool enabling extend trajectory after goal lane for obstacle detection"},{"location":"planning/obstacle_stop_planner/#flowchart","title":"Flowchart","text":""},{"location":"planning/obstacle_stop_planner/#slow-down-planner","title":"Slow Down Planner","text":""},{"location":"planning/obstacle_stop_planner/#role_1","title":"Role","text":"This module inserts the slow down section before the obstacle with forward margin and backward margin. The forward margin is the sum of baselink_to_front
and longitudinal_forward_margin
, and the backward margin is the sum of baselink_to_front
and longitudinal_backward_margin
. The ego keeps slow down velocity in slow down section. The velocity is calculated the following equation.
\\(v_{target} = v_{min} + \\frac{l_{ld} - l_{vw}/2}{l_{margin}} (v_{max} - v_{min} )\\)
min_slow_down_velocity
[m/s]max_slow_down_velocity
[m/s]lateral_margin
[m]The above equation means that the smaller the lateral deviation of the pointcloud, the lower the velocity of the slow down section.
parameters for slow down planner
target for slow down planner"},{"location":"planning/obstacle_stop_planner/#parameters_1","title":"Parameters","text":""},{"location":"planning/obstacle_stop_planner/#slow-down-section","title":"Slow down section","text":"Parameter Type Description
longitudinal_forward_margin
double margin between obstacle and the ego's front [m] longitudinal_backward_margin
double margin between obstacle and the ego's rear [m]"},{"location":"planning/obstacle_stop_planner/#obstacle-detection-area_1","title":"Obstacle detection area","text":"Parameter Type Description lateral_margin
double lateral margin from the vehicle footprint for slow down obstacle detection area [m]"},{"location":"planning/obstacle_stop_planner/#slow-down-target-velocity","title":"Slow down target velocity","text":"Parameter Type Description max_slow_down_velocity
double max slow down velocity [m/s] min_slow_down_velocity
double min slow down velocity [m/s]"},{"location":"planning/obstacle_stop_planner/#flowchart_1","title":"Flowchart","text":""},{"location":"planning/obstacle_stop_planner/#adaptive-cruise-controller","title":"Adaptive Cruise Controller","text":""},{"location":"planning/obstacle_stop_planner/#role_2","title":"Role","text":"Adaptive Cruise Controller
module embeds maximum velocity in trajectory when there is a dynamic point cloud on the trajectory. The value of maximum velocity depends on the own velocity, the velocity of the point cloud ( = velocity of the front car), and the distance to the point cloud (= distance to the front car).
adaptive_cruise_control.use_object_to_estimate_vel
bool use dynamic objects for estimating object velocity or not (valid only if osp.use_predicted_objects false) adaptive_cruise_control.use_pcl_to_estimate_vel
bool use raw pointclouds for estimating object velocity or not (valid only if osp.use_predicted_objects false) adaptive_cruise_control.consider_obj_velocity
bool consider forward vehicle velocity to calculate target velocity in adaptive cruise or not adaptive_cruise_control.obstacle_velocity_thresh_to_start_acc
double start adaptive cruise control when the velocity of the forward obstacle exceeds this value [m/s] adaptive_cruise_control.obstacle_velocity_thresh_to_stop_acc
double stop acc when the velocity of the forward obstacle falls below this value [m/s] adaptive_cruise_control.emergency_stop_acceleration
double supposed minimum acceleration (deceleration) in emergency stop [m/ss] adaptive_cruise_control.emergency_stop_idling_time
double supposed idling time to start emergency stop [s] adaptive_cruise_control.min_dist_stop
double minimum distance of emergency stop [m] adaptive_cruise_control.obstacle_emergency_stop_acceleration
double supposed minimum acceleration (deceleration) of the forward obstacle in emergency stop [m/ss] adaptive_cruise_control.max_standard_acceleration
double supposed maximum acceleration in active cruise control [m/ss] adaptive_cruise_control.min_standard_acceleration
double supposed minimum acceleration (deceleration) in active cruise control [m/ss] adaptive_cruise_control.standard_idling_time
double supposed idling time to react object in active cruise control [s] adaptive_cruise_control.min_dist_standard
double minimum distance in active cruise control [m] adaptive_cruise_control.obstacle_min_standard_acceleration
double supposed minimum acceleration of forward obstacle [m/ss] adaptive_cruise_control.margin_rate_to_change_vel
double rate of margin distance to insert target velocity [-] adaptive_cruise_control.use_time_compensation_to_calc_distance
bool use time-compensation to calculate distance to forward vehicle adaptive_cruise_control.p_coefficient_positive
double coefficient P in PID control (used when target dist -current_dist >=0) [-] adaptive_cruise_control.p_coefficient_negative
double coefficient P in PID control (used when target dist -current_dist <0) [-] adaptive_cruise_control.d_coefficient_positive
double coefficient D in PID control (used when delta_dist >=0) [-] adaptive_cruise_control.d_coefficient_negative
double coefficient D in PID control (used when delta_dist <0) [-] adaptive_cruise_control.object_polygon_length_margin
double The distance to extend the polygon length the object in pointcloud-object matching [m] adaptive_cruise_control.object_polygon_width_margin
double The distance to extend the polygon width the object in pointcloud-object matching [m] adaptive_cruise_control.valid_estimated_vel_diff_time
double Maximum time difference treated as continuous points in speed estimation using a point cloud [s] adaptive_cruise_control.valid_vel_que_time
double Time width of information used for speed estimation in speed estimation using a point cloud [s] adaptive_cruise_control.valid_estimated_vel_max
double Maximum value of valid speed estimation results in speed estimation using a point cloud [m/s] adaptive_cruise_control.valid_estimated_vel_min
double Minimum value of valid speed estimation results in speed estimation using a point cloud [m/s] adaptive_cruise_control.thresh_vel_to_stop
double Embed a stop line if the maximum speed calculated by ACC is lower than this speed [m/s] adaptive_cruise_control.lowpass_gain_of_upper_velocity
double Lowpass-gain of target velocity adaptive_cruise_control.use_rough_velocity_estimation:
bool Use rough estimated velocity if the velocity estimation is failed (valid only if osp.use_predicted_objects false) adaptive_cruise_control.rough_velocity_rate
double In the rough velocity estimation, the velocity of front car is estimated as self current velocity * this value"},{"location":"planning/obstacle_stop_planner/#flowchart_2","title":"Flowchart","text":"(*1) The target vehicle point is calculated as a closest obstacle PointCloud from ego along the trajectory.
(*2) The sources of velocity estimation can be changed by the following ROS parameters.
adaptive_cruise_control.use_object_to_estimate_vel
adaptive_cruise_control.use_pcl_to_estimate_vel
This module works only when the target point is found in the detection area of the Obstacle stop planner
module.
The first process of this module is to estimate the velocity of the target vehicle point. The velocity estimation uses the velocity information of dynamic objects or the travel distance of the target vehicle point from the previous step. The dynamic object information is primal, and the travel distance estimation is used as a backup in case of the perception failure. If the target vehicle point is contained in the bounding box of a dynamic object geometrically, the velocity of the dynamic object is used as the target point velocity. Otherwise, the target point velocity is calculated by the travel distance of the target point from the previous step; that is (current_position - previous_position) / dt
. Note that this travel distance based estimation fails when the target point is detected in the first time (it mainly happens in the cut-in situation). To improve the stability of the estimation, the median of the calculation result for several steps is used.
If the calculated velocity is within the threshold range, it is used as the target point velocity.
Only when the estimation is succeeded and the estimated velocity exceeds the value of obstacle_stop_velocity_thresh_*
, the distance to the pointcloud from self-position is calculated. For prevent chattering in the mode transition, obstacle_velocity_thresh_to_start_acc
is used for the threshold to start adaptive cruise, and obstacle_velocity_thresh_to_stop_acc
is used for the threshold to stop adaptive cruise. When the calculated distance value exceeds the emergency distance \\(d\\_{emergency}\\) calculated by emergency_stop parameters, target velocity to insert is calculated.
The emergency distance \\(d\\_{emergency}\\) is calculated as follows.
\\(d_{emergency} = d_{margin_{emergency}} + t_{idling_{emergency}} \\cdot v_{ego} + (-\\frac{v_{ego}^2}{2 \\cdot a_{ego_ {emergency}}}) - (-\\frac{v_{obj}^2}{2 \\cdot a_{obj_{emergency}}})\\)
min_dist_stop
emergency_stop_idling_time
emergency_stop_acceleration
obstacle_emergency_stop_acceleration
The target velocity is determined to keep the distance to the obstacle pointcloud from own vehicle at the standard distance \\(d\\_{standard}\\) calculated as following. Therefore, if the distance to the obstacle pointcloud is longer than standard distance, The target velocity becomes higher than the current velocity, and vice versa. For keeping the distance, a PID controller is used.
\\(d_{standard} = d_{margin_{standard}} + t_{idling_{standard}} \\cdot v_{ego} + (-\\frac{v_{ego}^2}{2 \\cdot a_{ego_ {standard}}}) - (-\\frac{v_{obj}^2}{2 \\cdot a_{obj_{standard}}})\\)
min_dist_stop
standard_stop_idling_time
min_standard_acceleration
obstacle_min_standard_acceleration
If the target velocity exceeds the value of thresh_vel_to_stop
, the target velocity is embedded in the trajectory.
Adaptive Cruise Controller
module. If the velocity planning module is updated, please take care of the vehicle's behavior as much as possible and always be ready for overriding.Adaptive Cruise Controller
is dependent on the object tracking module. Please note that if the object tracking fails or the tracking result is incorrect, there is a possibility that the vehicle behaves dangerously.
Without this node With this node"},{"location":"planning/obstacle_velocity_limiter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Using a parameter min_ttc
(minimum time to collision), the node set velocity limits such that no collision with an obstacle would occur, even without new control inputs for a duration of min_ttc
.
To achieve this, the motion of the ego vehicle is simulated forward in time at each point of the trajectory to create a corresponding footprint. If the footprint collides with some obstacle, the velocity at the trajectory point is reduced such that the new simulated footprint do not have any collision.
"},{"location":"planning/obstacle_velocity_limiter/#simulated-motion-footprint-and-collision-distance","title":"Simulated Motion, Footprint, and Collision Distance","text":"The motion of the ego vehicle is simulated at each trajectory point using the heading
, velocity
, and steering
defined at the point. Footprints are then constructed from these simulations and checked for collision. If a collision is found, the distance from the trajectory point is used to calculate the adjusted velocity that would produce a collision-free footprint. Parameter simulation.distance_method
allow to switch between an exact distance calculation and a less expensive approximation using a simple euclidean distance.
Two models can be selected with parameter simulation.model
for simulating the motion of the vehicle: a simple particle model and a more complicated bicycle model.
The particle model uses the constant heading and velocity of the vehicle at a trajectory point to simulate the future motion. The simulated forward motion corresponds to a straight line and the footprint to a rectangle.
"},{"location":"planning/obstacle_velocity_limiter/#footprint","title":"Footprint","text":"The rectangle footprint is built from 2 lines parallel to the simulated forward motion and at a distance of half the vehicle width.
"},{"location":"planning/obstacle_velocity_limiter/#distance","title":"Distance","text":"When a collision point is found within the footprint, the distance is calculated as described in the following figure.
"},{"location":"planning/obstacle_velocity_limiter/#bicycle-model","title":"Bicycle Model","text":"The bicycle model uses the constant heading, velocity, and steering of the vehicle at a trajectory point to simulate the future motion. The simulated forward motion corresponds to an arc around the circle of curvature associated with the steering. Uncertainty in the steering can be introduced with the simulation.steering_offset
parameter which will generate a range of motion from a left-most to a right-most steering. This results in 3 curved lines starting from the same trajectory point. A parameter simulation.nb_points
is used to adjust the precision of these lines, with a minimum of 2
resulting in straight lines and higher values increasing the precision of the curves.
By default, the steering values contained in the trajectory message are used. Parameter trajectory_preprocessing.calculate_steering_angles
allows to recalculate these values when set to true
.
The footprint of the bicycle model is created from lines parallel to the left and right simulated motion at a distance of half the vehicle width. In addition, the two points on the left and right of the end point of the central simulated motion are used to complete the polygon.
"},{"location":"planning/obstacle_velocity_limiter/#distance_1","title":"Distance","text":"The distance to a collision point is calculated by finding the curvature circle passing through the trajectory point and the collision point.
"},{"location":"planning/obstacle_velocity_limiter/#obstacle-detection","title":"Obstacle Detection","text":"Obstacles are represented as points or linestrings (i.e., sequence of points) around the obstacles and are constructed from an occupancy grid, a pointcloud, or the lanelet map. The lanelet map is always checked for obstacles but the other source is switched using parameter obstacles.dynamic_source
.
To efficiently find obstacles intersecting with a footprint, they are stored in a R-tree. Two trees are used, one for the obstacle points, and one for the obstacle linestrings (which are decomposed into segments to simplify the R-tree).
"},{"location":"planning/obstacle_velocity_limiter/#obstacle-masks","title":"Obstacle masks","text":""},{"location":"planning/obstacle_velocity_limiter/#dynamic-obstacles","title":"Dynamic obstacles","text":"Moving obstacles such as other cars should not be considered by this module. These obstacles are detected by the perception modules and represented as polygons. Obstacles inside these polygons are ignored.
Only dynamic obstacles with a velocity above parameter obstacles.dynamic_obstacles_min_vel
are removed.
To deal with delays and precision errors, the polygons can be enlarged with parameter obstacles.dynamic_obstacles_buffer
.
Obstacles that are not inside any forward simulated footprint are ignored if parameter obstacles.filter_envelope
is set to true. The safety envelope polygon is built from all the footprints and used as a positive mask on the occupancy grid or pointcloud.
This option can reduce the total number of obstacles which reduces the cost of collision detection. However, the cost of masking the envelope is usually too high to be interesting.
"},{"location":"planning/obstacle_velocity_limiter/#obstacles-on-the-ego-path","title":"Obstacles on the ego path","text":"If parameter obstacles.ignore_obstacles_on_path
is set to true
, a polygon mask is built from the trajectory and the vehicle dimension. Any obstacle in this polygon is ignored.
The size of the polygon can be increased using parameter obstacles.ignore_extra_distance
which is added to the vehicle lateral offset.
This option is a bit expensive and should only be used in case of noisy dynamic obstacles where obstacles are wrongly detected on the ego path, causing unwanted velocity limits.
"},{"location":"planning/obstacle_velocity_limiter/#lanelet-map","title":"Lanelet Map","text":"Information about static obstacles can be stored in the Lanelet map using the value of the type
tag of linestrings. If any linestring has a type
with one of the value from parameter obstacles.static_map_tags
, then it will be used as an obstacle.
Obstacles from the lanelet map are not impacted by the masks.
"},{"location":"planning/obstacle_velocity_limiter/#occupancy-grid","title":"Occupancy Grid","text":"Masking is performed by iterating through the cells inside each polygon mask using the grid_map_utils::PolygonIterator
function. A threshold is then applied to only keep cells with an occupancy value above parameter obstacles.occupancy_grid_threshold
. Finally, the image is converted to an image and obstacle linestrings are extracted using the opencv function findContour
.
Masking is performed using the pcl::CropHull
function. Points from the pointcloud are then directly used as obstacles.
If a collision is found, the velocity at the trajectory point is adjusted such that the resulting footprint would no longer collide with an obstacle: \\(velocity = \\frac{dist\\_to\\_collision}{min\\_ttc}\\)
To prevent sudden deceleration of the ego vehicle, the parameter max_deceleration
limits the deceleration relative to the current ego velocity. For a trajectory point occurring at a duration t
in the future (calculated from the original velocity profile), the adjusted velocity cannot be set lower than \\(v_{current} - t * max\\_deceleration\\).
Furthermore, a parameter min_adjusted_velocity
provides a lower bound on the modified velocity.
The node only modifies part of the input trajectory, starting from the current ego position. Parameter trajectory_preprocessing.start_distance
is used to adjust how far ahead of the ego position the velocities will start being modified. Parameters trajectory_preprocessing.max_length
and trajectory_preprocessing.max_duration
are used to control how much of the trajectory will see its velocity adjusted.
To reduce computation cost at the cost of precision, the trajectory can be downsampled using parameter trajectory_preprocessing.downsample_factor
. For example a value of 1
means all trajectory points will be evaluated while a value of 10
means only 1/10th of the points will be evaluated.
~/input/trajectory
autoware_auto_planning_msgs/Trajectory
Reference trajectory ~/input/occupancy_grid
nav_msgs/OccupancyGrid
Occupancy grid with obstacle information ~/input/obstacle_pointcloud
sensor_msgs/PointCloud2
Pointcloud containing only obstacle points ~/input/dynamic_obstacles
autoware_auto_perception_msgs/PredictedObjects
Dynamic objects ~/input/odometry
nav_msgs/Odometry
Odometry used to retrieve the current ego velocity ~/input/map
autoware_auto_mapping_msgs/HADMapBin
Vector map used to retrieve static obstacles"},{"location":"planning/obstacle_velocity_limiter/#outputs","title":"Outputs","text":"Name Type Description ~/output/trajectory
autoware_auto_planning_msgs/Trajectory
Trajectory with adjusted velocities ~/output/debug_markers
visualization_msgs/MarkerArray
Debug markers (envelopes, obstacle polygons) ~/output/runtime_microseconds
tier4_debug_msgs/Float64
Time taken to calculate the trajectory (in microseconds)"},{"location":"planning/obstacle_velocity_limiter/#parameters","title":"Parameters","text":"Name Type Description min_ttc
float [s] required minimum time with no collision at each point of the trajectory assuming constant heading and velocity. distance_buffer
float [m] required distance buffer with the obstacles. min_adjusted_velocity
float [m/s] minimum adjusted velocity this node can set. max_deceleration
float [m/s\u00b2] maximum deceleration an adjusted velocity can cause. trajectory_preprocessing.start_distance
float [m] controls from which part of the trajectory (relative to the current ego pose) the velocity is adjusted. trajectory_preprocessing.max_length
float [m] controls the maximum length (starting from the start_distance
) where the velocity is adjusted. trajectory_preprocessing.max_distance
float [s] controls the maximum duration (measured from the start_distance
) where the velocity is adjusted. trajectory_preprocessing.downsample_factor
int trajectory downsampling factor to allow tradeoff between precision and performance. trajectory_preprocessing.calculate_steering_angle
bool if true, the steering angles of the trajectory message are not used but are recalculated. simulation.model
string model to use for forward simulation. Either \"particle\" or \"bicycle\". simulation.distance_method
string method to use for calculating distance to collision. Either \"exact\" or \"approximation\". simulation.steering_offset
float offset around the steering used by the bicycle model. simulation.nb_points
int number of points used to simulate motion with the bicycle model. obstacles.dynamic_source
string source of dynamic obstacle used for collision checking. Can be \"occupancy_grid\", \"point_cloud\", or \"static_only\" (no dynamic obstacle). obstacles.occupancy_grid_threshold
int value in the occupancy grid above which a cell is considered an obstacle. obstacles.dynamic_obstacles_buffer
float buffer around dynamic obstacles used when masking an obstacle in order to prevent noise. obstacles.dynamic_obstacles_min_vel
float velocity above which to mask a dynamic obstacle. obstacles.static_map_tags
string list linestring of the lanelet map with this tags are used as obstacles. obstacles.filter_envelope
bool wether to use the safety envelope to filter the dynamic obstacles source."},{"location":"planning/obstacle_velocity_limiter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The velocity profile produced by this node is not meant to be a realistic velocity profile and can contain sudden jumps of velocity with no regard for acceleration and jerk. This velocity profile is meant to be used as an upper bound on the actual velocity of the vehicle.
"},{"location":"planning/obstacle_velocity_limiter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":"The critical case for this node is when an obstacle is falsely detected very close to the trajectory such that the corresponding velocity suddenly becomes very low. This can cause a sudden brake and two mechanisms can be used to mitigate these errors.
Parameter min_adjusted_velocity
allow to set a minimum to the adjusted velocity, preventing the node to slow down the vehicle too much. Parameter max_deceleration
allow to set a maximum deceleration (relative to the current ego velocity) that the adjusted velocity would incur.
This package contains code to smooth a path or trajectory.
"},{"location":"planning/path_smoother/#features","title":"Features","text":""},{"location":"planning/path_smoother/#elastic-band","title":"Elastic Band","text":"More details about the elastic band can be found here.
"},{"location":"planning/path_smoother/docs/eb/","title":"Elastic band","text":""},{"location":"planning/path_smoother/docs/eb/#elastic-band","title":"Elastic band","text":""},{"location":"planning/path_smoother/docs/eb/#abstract","title":"Abstract","text":"Elastic band smooths the input path. Since the latter optimization (model predictive trajectory) is calculated on the frenet frame, path smoothing is applied here so that the latter optimization will be stable.
Note that this smoothing process does not consider collision checking. Therefore the output path may have a collision with road boundaries or obstacles.
"},{"location":"planning/path_smoother/docs/eb/#flowchart","title":"Flowchart","text":""},{"location":"planning/path_smoother/docs/eb/#general-parameters","title":"General parameters","text":"Parameter Type Descriptioneb.common.num_points
int points for elastic band optimization eb.common.delta_arc_length
double delta arc length for elastic band optimization"},{"location":"planning/path_smoother/docs/eb/#parameters-for-optimization","title":"Parameters for optimization","text":"Parameter Type Description eb.option.enable_warm_start
bool flag to use warm start eb.weight.smooth_weight
double weight for smoothing eb.weight.lat_error_weight
double weight for minimizing the lateral error"},{"location":"planning/path_smoother/docs/eb/#parameters-for-validation","title":"Parameters for validation","text":"Parameter Type Description eb.option.enable_optimization_validation
bool flag to validate optimization eb.validation.max_error
double max lateral error by optimization"},{"location":"planning/path_smoother/docs/eb/#formulation","title":"Formulation","text":""},{"location":"planning/path_smoother/docs/eb/#objective-function","title":"Objective function","text":"We formulate a quadratic problem minimizing the diagonal length of the rhombus on each point generated by the current point and its previous and next points, shown as the red vector's length.
Assuming that \\(k\\)'th point is \\(\\boldsymbol{p}_k = (x_k, y_k)\\), the objective function is as follows.
\\[ \\begin{align} \\ J & = \\min \\sum_{k=1}^{n-2} ||(\\boldsymbol{p}_{k+1} - \\boldsymbol{p}_{k}) - (\\boldsymbol{p}_{k} - \\boldsymbol{p}_{k-1})||^2 \\\\ \\ & = \\min \\sum_{k=1}^{n-2} ||\\boldsymbol{p}_{k+1} - 2 \\boldsymbol{p}_{k} + \\boldsymbol{p}_{k-1}||^2 \\\\ \\ & = \\min \\sum_{k=1}^{n-2} \\{(x_{k+1} - x_k + x_{k-1})^2 + (y_{k+1} - y_k + y_{k-1})^2\\} \\\\ \\ & = \\min \\begin{pmatrix} \\ x_0 \\\\ \\ x_1 \\\\ \\ x_2 \\\\ \\vdots \\\\ \\ x_{n-3}\\\\ \\ x_{n-2} \\\\ \\ x_{n-1} \\\\ \\ y_0 \\\\ \\ y_1 \\\\ \\ y_2 \\\\ \\vdots \\\\ \\ y_{n-3}\\\\ \\ y_{n-2} \\\\ \\ y_{n-1} \\\\ \\end{pmatrix}^T \\begin{pmatrix} 1 & -2 & 1 & 0 & \\dots& \\\\ -2 & 5 & -4 & 1 & 0 &\\dots \\\\ 1 & -4 & 6 & -4 & 1 & \\\\ 0 & 1 & -4 & 6 & -4 & \\\\ \\vdots & 0 & \\ddots&\\ddots& \\ddots \\\\ & \\vdots & & & \\\\ & & & 1 & -4 & 6 & -4 & 1 \\\\ & & & & 1 & -4 & 5 & -2 \\\\ & & & & & 1 & -2 & 1& \\\\ & & & & & & & &1 & -2 & 1 & 0 & \\dots& \\\\ & & & & & & & &-2 & 5 & -4 & 1 & 0 &\\dots \\\\ & & & & & & & &1 & -4 & 6 & -4 & 1 & \\\\ & & & & & & & &0 & 1 & -4 & 6 & -4 & \\\\ & & & & & & & &\\vdots & 0 & \\ddots&\\ddots& \\ddots \\\\ & & & & & & & & & \\vdots & & & \\\\ & & & & & & & & & & & 1 & -4 & 6 & -4 & 1 \\\\ & & & & & & & & & & & & 1 & -4 & 5 & -2 \\\\ & & & & & & & & & & & & & 1 & -2 & 1& \\\\ \\end{pmatrix} \\begin{pmatrix} \\ x_0 \\\\ \\ x_1 \\\\ \\ x_2 \\\\ \\vdots \\\\ \\ x_{n-3}\\\\ \\ x_{n-2} \\\\ \\ x_{n-1} \\\\ \\ y_0 \\\\ \\ y_1 \\\\ \\ y_2 \\\\ \\vdots \\\\ \\ y_{n-3}\\\\ \\ y_{n-2} \\\\ \\ y_{n-1} \\\\ \\end{pmatrix} \\end{align} \\]"},{"location":"planning/path_smoother/docs/eb/#constraint","title":"Constraint","text":"The distance that each point can move is limited so that the path will not changed a lot but will be smoother. In detail, the longitudinal distance that each point can move is zero, and the lateral distance is parameterized as eb.clearance.clearance_for_fix
, eb.clearance.clearance_for_joint
and eb.clearance.clearance_for_smooth
.
The following figure describes how the lateral distance that each point can move is constrained. The red line shows where the point can move. The points for the upper and lower bound are described as \\((x_k^u, y_k^u)\\) and \\((x_k^l, y_k^l)\\), respectively.
Based on the line equation whose slope angle is \\(\\theta_k\\) and that passes through \\((x_k, y_k)\\), \\((x_k^u, y_k^u)\\) and \\((x_k^l, y_k^l)\\), the lateral constraint is formulated as follows.
\\[ C_k^l \\leq C_k \\leq C_k^u \\]In addition, the beginning point is fixed, and the end point is fixed as well if it is considered the goal. These constraints can be expressed with the above equation by changing the distance that each point can move.
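For intuition, one explicit way to write \\(C_k\\) (an illustrative form assumed here, not necessarily the exact implementation) is as the signed offset of \\((x_k, y_k)\\) along the normal of the slope angle \\(\\theta_k\\), measured from the lower-bound point:

\\[ C_k = -(x_k - x_k^l) \\sin\\theta_k + (y_k - y_k^l) \\cos\\theta_k \\]

With this choice, \\(C_k^l = 0\\) and \\(C_k^u\\) equals the distance between \\((x_k^l, y_k^l)\\) and \\((x_k^u, y_k^u)\\), so the constraint keeps each point on the red segment while remaining linear in \\(x_k\\) and \\(y_k\\), as required by the quadratic program.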
"},{"location":"planning/path_smoother/docs/eb/#debug","title":"Debug","text":"This package contains several planning-related debug tools.
The trajectory_analyzer
visualizes the information (speed, curvature, yaw, etc.) along the trajectory. This feature is helpful for purposes such as \"investigating the reason why the vehicle decelerates here\". This feature employs the open-source software PlotJuggler.
This is to visualize the stop factor and reason. See the details
"},{"location":"planning/planning_debug_tools/#how-to-use","title":"How to use","text":"please launch the analyzer node
ros2 launch planning_debug_tools trajectory_analyzer.launch.xml\n
and visualize the analyzed data in PlotJuggler as described below.
"},{"location":"planning/planning_debug_tools/#setup-plotjuggler","title":"setup PlotJuggler","text":"For the first time, please add the following code to reactive script and save it as the picture below! (Looking for the way to automatically load the configuration file...)
You can customize what you plot by editing this code.
in Global code
behavior_path = '/planning/scenario_planning/lane_driving/behavior_planning/path_with_lane_id/debug_info'\nbehavior_velocity = '/planning/scenario_planning/lane_driving/behavior_planning/path/debug_info'\nmotion_avoid = '/planning/scenario_planning/lane_driving/motion_planning/obstacle_avoidance_planner/trajectory/debug_info'\nmotion_smoother_latacc = '/planning/scenario_planning/motion_velocity_smoother/debug/trajectory_lateral_acc_filtered/debug_info'\nmotion_smoother = '/planning/scenario_planning/trajectory/debug_info'\n
in function(tracker_time)
PlotCurvatureOverArclength('k_behavior_path', behavior_path, tracker_time)\nPlotCurvatureOverArclength('k_behavior_velocity', behavior_velocity, tracker_time)\nPlotCurvatureOverArclength('k_motion_avoid', motion_avoid, tracker_time)\nPlotCurvatureOverArclength('k_motion_smoother', motion_smoother, tracker_time)\n\nPlotVelocityOverArclength('v_behavior_path', behavior_path, tracker_time)\nPlotVelocityOverArclength('v_behavior_velocity', behavior_velocity, tracker_time)\nPlotVelocityOverArclength('v_motion_avoid', motion_avoid, tracker_time)\nPlotVelocityOverArclength('v_motion_smoother_latacc', motion_smoother_latacc, tracker_time)\nPlotVelocityOverArclength('v_motion_smoother', motion_smoother, tracker_time)\n\nPlotAccelerationOverArclength('a_behavior_path', behavior_path, tracker_time)\nPlotAccelerationOverArclength('a_behavior_velocity', behavior_velocity, tracker_time)\nPlotAccelerationOverArclength('a_motion_avoid', motion_avoid, tracker_time)\nPlotAccelerationOverArclength('a_motion_smoother_latacc', motion_smoother_latacc, tracker_time)\nPlotAccelerationOverArclength('a_motion_smoother', motion_smoother, tracker_time)\n\nPlotYawOverArclength('yaw_behavior_path', behavior_path, tracker_time)\nPlotYawOverArclength('yaw_behavior_velocity', behavior_velocity, tracker_time)\nPlotYawOverArclength('yaw_motion_avoid', motion_avoid, tracker_time)\nPlotYawOverArclength('yaw_motion_smoother_latacc', motion_smoother_latacc, tracker_time)\nPlotYawOverArclength('yaw_motion_smoother', motion_smoother, tracker_time)\n\nPlotCurrentVelocity('localization_kinematic_state', '/localization/kinematic_state', tracker_time)\n
in Function Library
function PlotValue(name, path, timestamp, value)\n  -- create an XY series that plots <value> (curvature, velocity, ...) over arclength\n  new_series = ScatterXY.new(name)\n  index = 0\n  while(true) do\n    -- look up the per-point series published by the trajectory_analyzer\n    series_k = TimeseriesView.find( string.format( \"%s/\"..value..\".%d\", path, index) )\n    series_s = TimeseriesView.find( string.format( \"%s/arclength.%d\", path, index) )\n    series_size = TimeseriesView.find( string.format( \"%s/size\", path) )\n\n    if series_k == nil or series_s == nil then break end\n\n    k = series_k:atTime(timestamp)\n    s = series_s:atTime(timestamp)\n    size = series_size:atTime(timestamp)\n\n    -- stop at the number of valid points in the current message\n    if index >= size then break end\n\n    new_series:push_back(s,k)\n    index = index+1\n  end\nend\n\nfunction PlotCurvatureOverArclength(name, path, timestamp)\n  PlotValue(name, path, timestamp,\"curvature\")\nend\n\nfunction PlotVelocityOverArclength(name, path, timestamp)\n  PlotValue(name, path, timestamp,\"velocity\")\nend\n\nfunction PlotAccelerationOverArclength(name, path, timestamp)\n  PlotValue(name, path, timestamp,\"acceleration\")\nend\n\nfunction PlotYawOverArclength(name, path, timestamp)\n  PlotValue(name, path, timestamp,\"yaw\")\nend\n\nfunction PlotCurrentVelocity(name, kinematics_name, timestamp)\n  -- plot the current ego velocity from the kinematic state topic at arclength 0\n  new_series = ScatterXY.new(name)\n  series_v = TimeseriesView.find( string.format( \"%s/twist/twist/linear/x\", kinematics_name))\n  if series_v == nil then\n    print(\"error\")\n    return\n  end\n  v = series_v:atTime(timestamp)\n  new_series:push_back(0.0, v)\nend\n
Then, run PlotJuggler.
"},{"location":"planning/planning_debug_tools/#how-to-customize-the-plot","title":"How to customize the plot","text":"Add Path/PathWithLaneIds/Trajectory topics you want to plot in the trajectory_analyzer.launch.xml
, then the analyzed topics for these messages will be published with TrajectoryDebugInfo.msg
type. You can then visualize these data by editing the reactive script in PlotJuggler.
The version of PlotJuggler must be > 3.5.0.
This node prints the velocity information indicated by planning/control modules on a terminal. For trajectories calculated by planning modules, the target velocity on the trajectory point which is closest to the ego vehicle is printed. For control commands calculated by control modules, the target velocity and acceleration are directly printed. This feature is helpful for purposes such as \"investigating the reason why the vehicle does not move\".
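As a rough sketch of the lookup this checker performs (an illustration with assumed names and types, not the script's actual code):

```cpp
#include <cmath>
#include <limits>
#include <vector>

// Hypothetical, simplified trajectory point used only for this illustration.
struct TrajectoryPoint
{
  double x;
  double y;
  double velocity;
};

// Return the target velocity at the trajectory point closest to the ego position.
double closestTargetVelocity(
  const std::vector<TrajectoryPoint> & trajectory, const double ego_x, const double ego_y)
{
  double min_distance = std::numeric_limits<double>::max();
  double velocity = 0.0;
  for (const auto & point : trajectory) {
    const double distance = std::hypot(point.x - ego_x, point.y - ego_y);
    if (distance < min_distance) {
      min_distance = distance;
      velocity = point.velocity;  // target velocity at the closest point so far
    }
  }
  return velocity;
}
```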
You can launch it with:
ros2 run planning_debug_tools closest_velocity_checker.py\n
"},{"location":"planning/planning_debug_tools/#trajectory-visualizer","title":"Trajectory visualizer","text":"The old version of the trajectory analyzer. It is written in Python and more flexible, but very slow.
"},{"location":"planning/planning_debug_tools/#for-other-use-case-experimental","title":"For other use case (experimental)","text":"To see behavior velocity planner's internal plath with lane id add below example value to behavior velocity analyzer and set is_publish_debug_path: true
crosswalk ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/crosswalk/debug_info'\nintersection ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/intersection/debug_info'\ntraffic_light ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/traffic_light/debug_info'\nmerge_from_private ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/merge_from_private/debug_info'\nocclusion_spot ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/occlusion_spot/debug_info'\n
PlotVelocityOverArclength('v_crosswalk', crosswalk, tracker_time)\nPlotVelocityOverArclength('v_intersection', intersection, tracker_time)\nPlotVelocityOverArclength('v_merge_from_private', merge_from_private, tracker_time)\nPlotVelocityOverArclength('v_traffic_light', traffic_light, tracker_time)\nPlotVelocityOverArclength('v_occlusion', occlusion_spot, tracker_time)\n\nPlotYawOverArclength('yaw_crosswalk', crosswalk, tracker_time)\nPlotYawOverArclength('yaw_intersection', intersection, tracker_time)\nPlotYawOverArclength('yaw_merge_from_private', merge_from_private, tracker_time)\nPlotYawOverArclength('yaw_traffic_light', traffic_light, tracker_time)\nPlotYawOverArclength('yaw_occlusion', occlusion_spot, tracker_time)\n\nPlotCurrentVelocity('localization_kinematic_state', '/localization/kinematic_state', tracker_time)\n
"},{"location":"planning/planning_debug_tools/#perception-reproducer","title":"Perception reproducer","text":"This script can overlay the perception results from the rosbag on the planning simulator synchronized with the simulator's ego pose.
In detail, the ego pose in the rosbag which is closest to the current ego pose in the simulator is calculated. The perception results at the timestamp of the closest ego pose are extracted and published.
"},{"location":"planning/planning_debug_tools/#how-to-use_1","title":"How to use","text":"First, launch the planning simulator, and put the ego pose. Then, run the script according to the following command.
By designating a rosbag, the perception reproducer can be launched.
ros2 run planning_debug_tools perception_reproducer.py -b <bag-file>\n
You can also designate a directory containing multiple rosbags.
ros2 run planning_debug_tools perception_reproducer.py -b <dir-to-bag-files>\n
Instead of publishing predicted objects, you can publish detected/tracked objects by designating -d
or -t
, respectively.
A part of the feature is under development.
This script can overlay the perception results from the rosbag on the planning simulator.
In detail, this script publishes the data at a certain timestamp from the rosbag. The timestamp advances in real time when no operation is made. By using the GUI, you can modify the timestamp by pausing, changing the playback rate, or going back into the past.
"},{"location":"planning/planning_debug_tools/#how-to-use_2","title":"How to use","text":"First, launch the planning simulator, and put the ego pose. Then, run the script according to the following command.
By designating a rosbag, the perception replayer can be launched. The GUI, with which the rosbag timestamp can be managed, is launched as well.
ros2 run planning_debug_tools perception_replayer.py -b <bag-file>\n
You can also designate a directory containing multiple rosbags.
ros2 run planning_debug_tools perception_replayer.py -b <dir-to-bag-files>\n
Instead of publishing predicted objects, you can publish detected/tracked objects by designating -d
or -t
, respectively.
The purpose of the Processing Time Subscriber is to monitor and visualize the processing times of various ROS 2 topics in a system. By providing a real-time terminal-based visualization, users can easily confirm the processing time performance as in the picture below.
You can run the program with the following command.
ros2 run planning_debug_tools processing_time_checker.py -f <update-hz> -m <max-bar-time>\n
This program subscribes to ROS 2 topics that have a suffix of processing_time_ms
.
The program allows users to customize two parameters via command-line arguments: the update frequency (-f) and the maximum bar time (-m).
By adjusting these parameters, users can tailor the display to their specific monitoring needs.
"},{"location":"planning/planning_debug_tools/#logging-level-updater","title":"Logging Level Updater","text":"The purpose of the Logging Level Updater is to update the logging level of the planning modules via ROS 2 service. Users can easily update the logging level for debugging.
ros2 run planning_debug_tools update_logger_level.sh <module-name> <logger-level>\n
<logger-level>
will be DEBUG
, INFO
, WARN
, or ERROR
.
If you mistype the name of a planning module, the script will show the available modules.
"},{"location":"planning/planning_debug_tools/doc-stop-reason-visualizer/","title":"Doc stop reason visualizer","text":""},{"location":"planning/planning_debug_tools/doc-stop-reason-visualizer/#stop_reason_visualizer","title":"stop_reason_visualizer","text":"This module is to visualize stop factor quickly without selecting correct debug markers. This is supposed to use with virtual wall marker like below.
"},{"location":"planning/planning_debug_tools/doc-stop-reason-visualizer/#how-to-use","title":"How to use","text":"Run this node.
ros2 run planning_debug_tools stop_reason_visualizer_exe\n
Add stop reason debug marker from rviz.
Note: the ros2 process can sometimes be killed only with killall stop_reason_visualizer_exe
Reference
"},{"location":"planning/planning_test_utils/","title":"Planning Interface Test Manager","text":""},{"location":"planning/planning_test_utils/#planning-interface-test-manager","title":"Planning Interface Test Manager","text":""},{"location":"planning/planning_test_utils/#background","title":"Background","text":"In each node of the planning module, when exceptional input, such as unusual routes or significantly deviated ego-position, is given, the node may not be prepared for such input and could crash. As a result, debugging node crashes can be time-consuming. For example, if an empty trajectory is given as input and it was not anticipated during implementation, the node might crash due to the unaddressed exceptional input when changes are merged, during scenario testing or while the system is running on an actual vehicle.
"},{"location":"planning/planning_test_utils/#purpose","title":"Purpose","text":"The purpose is to provide a utility for implementing tests to ensure that node operates correctly when receiving exceptional input. By utilizing this utility and implementing tests for exceptional input, the purpose is to reduce bugs that are only discovered when actually running the system, by requiring measures for exceptional input before merging PRs.
"},{"location":"planning/planning_test_utils/#features","title":"Features","text":""},{"location":"planning/planning_test_utils/#confirmation-of-normal-operation","title":"Confirmation of normal operation","text":"For the test target node, confirm that the node operates correctly and publishes the required messages for subsequent nodes. To do this, test_node publish the necessary messages and confirm that the node's output is being published.
"},{"location":"planning/planning_test_utils/#robustness-confirmation-for-special-inputs","title":"Robustness confirmation for special inputs","text":"After confirming normal operation, ensure that the test target node does not crash when given exceptional input. To do this, provide exceptional input from the test_node and confirm that the node does not crash.
(WIP)
"},{"location":"planning/planning_test_utils/#usage","title":"Usage","text":"TEST(PlanningModuleInterfaceTest, NodeTestWithExceptionTrajectory)\n{\nrclcpp::init(0, nullptr);\n\n// instantiate test_manager with PlanningInterfaceTestManager type\nauto test_manager = std::make_shared<planning_test_utils::PlanningInterfaceTestManager>();\n\n// get package directories for necessary configuration files\nconst auto planning_test_utils_dir =\nament_index_cpp::get_package_share_directory(\"planning_test_utils\");\nconst auto target_node_dir =\nament_index_cpp::get_package_share_directory(\"target_node\");\n\n// set arguments to get the config file\nnode_options.arguments(\n{\"--ros-args\", \"--params-file\",\nplanning_test_utils_dir + \"/config/test_vehicle_info.param.yaml\", \"--params-file\",\nplanning_validator_dir + \"/config/planning_validator.param.yaml\"});\n\n// instantiate the TargetNode with node_options\nauto test_target_node = std::make_shared<TargetNode>(node_options);\n\n// publish the necessary topics from test_manager second argument is topic name\ntest_manager->publishOdometry(test_target_node, \"/localization/kinematic_state\");\ntest_manager->publishMaxVelocity(\ntest_target_node, \"motion_velocity_smoother/input/external_velocity_limit_mps\");\n\n// set scenario_selector's input topic name(this topic is changed to test node)\ntest_manager->setTrajectoryInputTopicName(\"input/parking/trajectory\");\n\n// test with normal trajectory\nASSERT_NO_THROW(test_manager->testWithNominalTrajectory(test_target_node));\n\n// make sure target_node is running\nEXPECT_GE(test_manager->getReceivedTopicNum(), 1);\n\n// test with trajectory input with empty/one point/overlapping point\nASSERT_NO_THROW(test_manager->testWithAbnormalTrajectory(test_target_node));\n\n// shutdown ROS context\nrclcpp::shutdown();\n}\n
"},{"location":"planning/planning_test_utils/#implemented-tests","title":"Implemented tests","text":"Node Test name exceptional input output Exceptional input pattern planning_validator NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points motion_velocity_smoother NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_cruise_planner NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_stop_planner NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_velocity_limiter NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_avoidance_planner NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points scenario_selector NodeTestWithExceptionTrajectoryLaneDrivingMode NodeTestWithExceptionTrajectoryParkingMode trajectory scenario Empty, single point, path with duplicate points for scenarios:LANEDRIVING and PARKING freespace_planner NodeTestWithExceptionRoute route trajectory Empty route behavior_path_planner NodeTestWithExceptionRoute NodeTestWithOffTrackEgoPose route route odometry Empty route Off-lane ego-position behavior_velocity_planner NodeTestWithExceptionPathWithLaneID path_with_lane_id path Empty path"},{"location":"planning/planning_test_utils/#important-notes","title":"Important Notes","text":"During test execution, when launching a node, parameters are loaded from the parameter file within each package. Therefore, when adding parameters, it is necessary to add the required parameters to the parameter file in the target node package. This is to prevent the node from being unable to launch if there are missing parameters when retrieving them from the parameter file during node launch.
"},{"location":"planning/planning_test_utils/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"(WIP)
"},{"location":"planning/planning_topic_converter/","title":"Planning Topic Converter","text":""},{"location":"planning/planning_topic_converter/#planning-topic-converter","title":"Planning Topic Converter","text":""},{"location":"planning/planning_topic_converter/#purpose","title":"Purpose","text":"This package provides tools that convert topic type among types are defined in https://github.com/tier4/autoware_auto_msgs.
"},{"location":"planning/planning_topic_converter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/planning_topic_converter/#usage-example","title":"Usage example","text":"The tools in this package are provided as composable ROS 2 component nodes, so that they can be spawned into an existing process, launched from launch files, or invoked from the command line.
<load_composable_node target=\"container_name\">\n<composable_node pkg=\"planning_topic_converter\" plugin=\"planning_topic_converter::PathToTrajectory\" name=\"path_to_trajectory_converter\" namespace=\"\">\n<!-- params -->\n<param name=\"input_topic\" value=\"foo\"/>\n<param name=\"output_topic\" value=\"bar\"/>\n<!-- composable node config -->\n<extra_arg name=\"use_intra_process_comms\" value=\"false\"/>\n</composable_node>\n</load_composable_node>\n
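For reference, a minimal sketch of what such a converter component can look like (assuming the autoware_auto_planning_msgs types; this is an illustration, not the package's actual source):

```cpp
#include <string>

#include <rclcpp/rclcpp.hpp>

#include <autoware_auto_planning_msgs/msg/path.hpp>
#include <autoware_auto_planning_msgs/msg/trajectory.hpp>

// Illustrative component: subscribes to a Path and republishes it as a Trajectory.
class PathToTrajectoryExample : public rclcpp::Node
{
public:
  explicit PathToTrajectoryExample(const rclcpp::NodeOptions & options)
  : Node("path_to_trajectory_example", options)
  {
    // Topic names come from parameters, as in the launch example above.
    const auto input_topic = declare_parameter<std::string>("input_topic");
    const auto output_topic = declare_parameter<std::string>("output_topic");
    pub_ = create_publisher<autoware_auto_planning_msgs::msg::Trajectory>(output_topic, 1);
    sub_ = create_subscription<autoware_auto_planning_msgs::msg::Path>(
      input_topic, 1, [this](const autoware_auto_planning_msgs::msg::Path::SharedPtr msg) {
        autoware_auto_planning_msgs::msg::Trajectory trajectory;
        trajectory.header = msg->header;
        for (const auto & path_point : msg->points) {
          autoware_auto_planning_msgs::msg::TrajectoryPoint trajectory_point;
          trajectory_point.pose = path_point.pose;
          trajectory_point.longitudinal_velocity_mps = path_point.longitudinal_velocity_mps;
          trajectory.points.push_back(trajectory_point);
        }
        pub_->publish(trajectory);
      });
  }

private:
  rclcpp::Publisher<autoware_auto_planning_msgs::msg::Trajectory>::SharedPtr pub_;
  rclcpp::Subscription<autoware_auto_planning_msgs::msg::Path>::SharedPtr sub_;
};
```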
"},{"location":"planning/planning_topic_converter/#parameters","title":"Parameters","text":"Name Type Description input_topic
string input topic name. output_topic
string output topic name."},{"location":"planning/planning_topic_converter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/planning_topic_converter/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/planning_validator/","title":"Planning Validator","text":""},{"location":"planning/planning_validator/#planning-validator","title":"Planning Validator","text":"The planning_validator
is a module that checks the validity of a trajectory before it is published. The status of the validation can be viewed in the /diagnostics
and /validation_status
topics. When an invalid trajectory is detected, the planning_validator
will process the trajectory following the selected option: \"0. publish the trajectory as it is\", \"1. stop publishing the trajectory\", \"2. publish the last validated trajectory\".
The following features are supported for trajectory validation and can have thresholds set by parameters:
The following features are to be implemented.
The planning_validator
takes in the following inputs:
~/input/kinematics
nav_msgs/Odometry ego pose and twist ~/input/trajectory
autoware_auto_planning_msgs/Trajectory target trajectory to be validated in this node"},{"location":"planning/planning_validator/#outputs","title":"Outputs","text":"It outputs the following:
Name Type Description~/output/trajectory
autoware_auto_planning_msgs/Trajectory validated trajectory ~/output/validation_status
planning_validator/PlanningValidatorStatus validator status to inform the reason why the trajectory is valid/invalid /diagnostics
diagnostic_msgs/DiagnosticStatus diagnostics to report errors"},{"location":"planning/planning_validator/#parameters","title":"Parameters","text":"The following parameters can be set for the planning_validator
:
invalid_trajectory_handling_type
int set the operation when the invalid trajectory is detected. 0: publish the trajectory even if it is invalid, 1: stop publishing the trajectory, 2: publish the last validated trajectory. 0 publish_diag
bool the Diag will be set to ERROR when the number of consecutive invalid trajectory exceeds this threshold. (For example, threshold = 1 means, even if the trajectory is invalid, the Diag will not be ERROR if the next trajectory is valid.) true diag_error_count_threshold
int if true, diagnostics msg is published. true display_on_terminal
bool show error msg on terminal true"},{"location":"planning/planning_validator/#algorithm-parameters","title":"Algorithm parameters","text":""},{"location":"planning/planning_validator/#thresholds","title":"Thresholds","text":"The input trajectory is detected as invalid if the index exceeds the following thresholds.
Name Type Description Default valuethresholds.interval
double invalid threshold of the distance of two neighboring trajectory points [m] 100.0 thresholds.relative_angle
double invalid threshold of the relative angle of two neighboring trajectory points [rad] 2.0 thresholds.curvature
double invalid threshold of the curvature in each trajectory point [1/m] 1.0 thresholds.lateral_acc
double invalid threshold of the lateral acceleration in each trajectory point [m/ss] 9.8 thresholds.longitudinal_max_acc
double invalid threshold of the maximum longitudinal acceleration in each trajectory point [m/ss] 9.8 thresholds.longitudinal_min_acc
double invalid threshold of the minimum longitudinal deceleration in each trajectory point [m/ss] -9.8 thresholds.steering
double invalid threshold of the steering angle in each trajectory point [rad] 1.414 thresholds.steering_rate
double invalid threshold of the steering angle rate in each trajectory point [rad/s] 10.0 thresholds.velocity_deviation
double invalid threshold of the velocity deviation between the ego velocity and the trajectory point closest to ego [m/s] 100.0 thresholds.distance_deviation
double invalid threshold of the distance deviation between the ego position and the trajectory point closest to ego [m] 100.0"},{"location":"planning/route_handler/","title":"route handler","text":""},{"location":"planning/route_handler/#route-handler","title":"route handler","text":"route_handler
is a library for calculating driving route on the lanelet map.
RTC Interface is an interface to publish the decision status of behavior planning modules and receive execution command from external of an autonomous driving system.
"},{"location":"planning/rtc_interface/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/rtc_interface/#usage-example","title":"Usage example","text":"// Generate instance (in this example, \"intersection\" is selected)\nrtc_interface::RTCInterface rtc_interface(node, \"intersection\");\n\n// Generate UUID\nconst unique_identifier_msgs::msg::UUID uuid = generateUUID(getModuleId());\n\n// Repeat while module is running\nwhile (...) {\n// Get safety status of the module corresponding to the module id\nconst bool safe = ...\n\n// Get distance to the object corresponding to the module id\nconst double start_distance = ...\nconst double finish_distance = ...\n\n// Get time stamp\nconst rclcpp::Time stamp = ...\n\n// Update status\nrtc_interface.updateCooperateStatus(uuid, safe, start_distance, finish_distance, stamp);\n\nif (rtc_interface.isActivated(uuid)) {\n// Execute planning\n} else {\n// Stop planning\n}\n// Get time stamp\nconst rclcpp::Time stamp = ...\n\n// Publish status topic\nrtc_interface.publishCooperateStatus(stamp);\n}\n\n// Remove the status from array\nrtc_interface.removeCooperateStatus(uuid);\n
"},{"location":"planning/rtc_interface/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/rtc_interface/#rtcinterface-constructor","title":"RTCInterface (Constructor)","text":"rtc_interface::RTCInterface(rclcpp::Node & node, const std::string & name);\n
"},{"location":"planning/rtc_interface/#description","title":"Description","text":"A constructor for rtc_interface::RTCInterface
.
node
: Node calling this interfacename
: Name of cooperate status array topic and cooperate commands service~/{name}/cooperate_status
~/{name}/cooperate_commands
An instance of RTCInterface
rtc_interface::publishCooperateStatus(const rclcpp::Time & stamp)\n
"},{"location":"planning/rtc_interface/#description_1","title":"Description","text":"Publish registered cooperate status.
"},{"location":"planning/rtc_interface/#input_1","title":"Input","text":"stamp
: Time stampNothing
"},{"location":"planning/rtc_interface/#updatecooperatestatus","title":"updateCooperateStatus","text":"rtc_interface::updateCooperateStatus(const unique_identifier_msgs::msg::UUID & uuid, const bool safe, const double start_distance, const double finish_distance, const rclcpp::Time & stamp)\n
"},{"location":"planning/rtc_interface/#description_2","title":"Description","text":"Update cooperate status corresponding to uuid
. If cooperate status corresponding to uuid
is not registered yet, add new cooperate status.
uuid
: UUID for requesting modulesafe
: Safety status of requesting modulestart_distance
: Distance to the start object from ego vehiclefinish_distance
: Distance to the finish object from ego vehiclestamp
: Time stampNothing
"},{"location":"planning/rtc_interface/#removecooperatestatus","title":"removeCooperateStatus","text":"rtc_interface::removeCooperateStatus(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/rtc_interface/#description_3","title":"Description","text":"Remove cooperate status corresponding to uuid
from registered statuses.
uuid
: UUID for expired moduleNothing
"},{"location":"planning/rtc_interface/#clearcooperatestatus","title":"clearCooperateStatus","text":"rtc_interface::clearCooperateStatus()\n
"},{"location":"planning/rtc_interface/#description_4","title":"Description","text":"Remove all cooperate statuses.
"},{"location":"planning/rtc_interface/#input_4","title":"Input","text":"Nothing
"},{"location":"planning/rtc_interface/#output_4","title":"Output","text":"Nothing
"},{"location":"planning/rtc_interface/#isactivated","title":"isActivated","text":"rtc_interface::isActivated(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/rtc_interface/#description_5","title":"Description","text":"Return received command status corresponding to uuid
.
uuid
: UUID for checking moduleIf auto mode is enabled, return based on the safety status. If not, if received command is ACTIVATED
, return true
. If not, return false
.
rtc_interface::isRegistered(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/rtc_interface/#description_6","title":"Description","text":"Return true
if uuid
is registered.
uuid
: UUID for checking moduleIf uuid
is registered, return true
. If not, return false
.
The current issue for RTC commands is that service is not recorded to rosbag, so it's very hard to analyze what was happened exactly. So this package makes it possible to replay rtc commands service from rosbag rtc status topic to resolve that issue.
"},{"location":"planning/rtc_replayer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/rtc_replayer/#input","title":"Input","text":"Name Type Description/debug/rtc_status
tier4_rtc_msgs::msg::CooperateStatusArray CooperateStatusArray that is recorded in rosbag"},{"location":"planning/rtc_replayer/#output","title":"Output","text":"Name Type Description /api/external/set/rtc_commands
tier4_rtc_msgs::msg::CooperateCommands CooperateCommands that is replayed by this package"},{"location":"planning/rtc_replayer/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/rtc_replayer/#assumptions-known-limits","title":"Assumptions / Known limits","text":"This package can't replay CooperateCommands correctly if CooperateStatusArray is not stable. And this replay is always later one step than actual however it will not affect much for behavior.
"},{"location":"planning/rtc_replayer/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"tbd.
"},{"location":"planning/sampling_based_planner/bezier_sampler/","title":"B\u00e9zier sampler","text":""},{"location":"planning/sampling_based_planner/bezier_sampler/#bezier-sampler","title":"B\u00e9zier sampler","text":"Implementation of b\u00e9zier curves and their generation following the sampling strategy from https://ieeexplore.ieee.org/document/8932495
"},{"location":"planning/sampling_based_planner/frenet_planner/","title":"Frenet planner","text":""},{"location":"planning/sampling_based_planner/frenet_planner/#frenet-planner","title":"Frenet planner","text":"Trajectory generation in Frenet frame.
"},{"location":"planning/sampling_based_planner/frenet_planner/#description","title":"Description","text":"Original paper
"},{"location":"planning/sampling_based_planner/path_sampler/","title":"Path Sampler","text":""},{"location":"planning/sampling_based_planner/path_sampler/#path-sampler","title":"Path Sampler","text":""},{"location":"planning/sampling_based_planner/path_sampler/#purpose","title":"Purpose","text":"This package implements a node that uses sampling based planning to generate a drivable trajectory.
"},{"location":"planning/sampling_based_planner/path_sampler/#feature","title":"Feature","text":"This package is able to:
Note that the velocity is just taken over from the input path.
"},{"location":"planning/sampling_based_planner/path_sampler/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/sampling_based_planner/path_sampler/#input","title":"input","text":"Name Type Description~/input/path
autoware_auto_planning_msgs/msg/Path Reference path and the corresponding drivable area ~/input/odometry
nav_msgs/msg/Odometry Current state of the ego vehicle ~/input/objects
autoware_auto_perception_msgs/msg/PredictedObjects objects to avoid"},{"location":"planning/sampling_based_planner/path_sampler/#output","title":"output","text":"Name Type Description ~/output/trajectory
autoware_auto_planning_msgs/msg/Trajectory generated trajectory that is feasible to drive and collision-free"},{"location":"planning/sampling_based_planner/path_sampler/#algorithm","title":"Algorithm","text":"Sampling based planning is decomposed into 3 successive steps:
Candidate trajectories are generated based on the current ego state and some target state. 2 sampling algorithms are currently implemented: sampling with b\u00e9zier curves or with polynomials in the frenet frame.
"},{"location":"planning/sampling_based_planner/path_sampler/#pruning","title":"Pruning","text":"The validity of each candidate trajectory is checked using a set of hard constraints.
Among the valid candidate trajectories, the best one is determined using a set of soft constraints (i.e., objective functions).
Each soft constraint is associated with a weight to allow tuning of the preferences.
"},{"location":"planning/sampling_based_planner/path_sampler/#limitations","title":"Limitations","text":"The quality of the candidates generated with polynomials in frenet frame greatly depend on the reference path. If the reference path is not smooth, the resulting candidates will probably be undriveable.
Failure to find a valid trajectory current results in a suddenly stopping trajectory.
"},{"location":"planning/sampling_based_planner/path_sampler/#comparison-with-the-obstacle_avoidance_planner","title":"Comparison with theobstacle_avoidance_planner
","text":"The obstacle_avoidance_planner
uses an optimization based approach, finding the optimal solution of a mathematical problem if it exists. When no solution can be found, it is often hard to identify the issue due to the intermediate mathematical representation of the problem.
In comparison, the sampling based approach cannot guarantee an optimal solution but is much more straightforward, making it easier to debug and tune.
"},{"location":"planning/sampling_based_planner/path_sampler/#how-to-tune-parameters","title":"How to Tune Parameters","text":"The sampling based planner mostly offers a trade-off between the consistent quality of the trajectory and the computation time. To guarantee that a good trajectory is found requires generating many candidates which linearly increases the computation time.
TODO
"},{"location":"planning/sampling_based_planner/path_sampler/#drivability-in-narrow-roads","title":"Drivability in narrow roads","text":""},{"location":"planning/sampling_based_planner/path_sampler/#computation-time","title":"Computation time","text":""},{"location":"planning/sampling_based_planner/path_sampler/#robustness","title":"Robustness","text":""},{"location":"planning/sampling_based_planner/path_sampler/#other-options","title":"Other options","text":""},{"location":"planning/sampling_based_planner/path_sampler/#how-to-debug","title":"How To Debug","text":"TODO
"},{"location":"planning/sampling_based_planner/sampler_common/","title":"Sampler Common","text":""},{"location":"planning/sampling_based_planner/sampler_common/#sampler-common","title":"Sampler Common","text":"Common functions for sampling based planners. This includes classes for representing paths and trajectories, hard and soft constraints, conversion between cartesian and frenet frames, ...
"},{"location":"planning/scenario_selector/","title":"scenario_selector","text":""},{"location":"planning/scenario_selector/#scenario_selector","title":"scenario_selector","text":""},{"location":"planning/scenario_selector/#scenario_selector_node","title":"scenario_selector_node","text":"scenario_selector_node
is a node that switches trajectories from each scenario.
~input/lane_driving/trajectory
autoware_auto_planning_msgs::Trajectory trajectory of LaneDriving scenario ~input/parking/trajectory
autoware_auto_planning_msgs::Trajectory trajectory of Parking scenario ~input/lanelet_map
autoware_auto_mapping_msgs::HADMapBin ~input/route
autoware_planning_msgs::LaneletRoute route and goal pose ~input/odometry
nav_msgs::Odometry for checking whether vehicle is stopped is_parking_completed
bool (implemented as rosparam) whether all split trajectory of Parking are published"},{"location":"planning/scenario_selector/#output-topics","title":"Output topics","text":"Name Type Description ~output/scenario
tier4_planning_msgs::Scenario current scenario and scenarios to be activated ~output/trajectory
autoware_auto_planning_msgs::Trajectory trajectory to be followed"},{"location":"planning/scenario_selector/#output-tfs","title":"Output TFs","text":"None
"},{"location":"planning/scenario_selector/#how-to-launch","title":"How to launch","text":"scenario_selector.launch
or add args when executing roslaunch
roslaunch scenario_selector scenario_selector.launch
roslaunch scenario_selector dummy_scenario_selector_{scenario_name}.launch
This package statically calculates the centerline satisfying path footprints inside the drivable area.
On narrow-road driving, the default centerline, which is the middle line between lanelets' right and left boundaries, often causes path footprints outside the drivable area. To make path footprints inside the drivable area, we use online path shape optimization by the obstacle_avoidance_planner package.
Instead of online path shape optimization, we introduce static centerline optimization. With this static centerline optimization, we have following advantages.
There are two interfaces to communicate with the centerline optimizer.
"},{"location":"planning/static_centerline_optimizer/#vector-map-builder-interface","title":"Vector Map Builder Interface","text":"Note: This function of Vector Map Builder has not been released. Please wait for a while. Currently there is no documentation about Vector Map Builder's operation for this function.
The optimized centerline can be generated from Vector Map Builder's operation.
We can run
with the following command by designating <vehicle_model>
ros2 launch static_centerline_optimizer run_planning_server.launch.xml vehicle_model:=<vehicle-model>\n
FYI, port ID of the http server is 4010 by default.
"},{"location":"planning/static_centerline_optimizer/#command-line-interface","title":"Command Line Interface","text":"The optimized centerline can be generated from the command line interface by designating
<input-osm-path>
<output-osm-path>
(not mandatory)<start-lanelet-id>
<end-lanelet-id>
<vehicle-model>
ros2 launch static_centerline_optimizer static_centerline_optimizer.launch.xml run_backgrond:=false lanelet2_input_file_path:=<input-osm-path> lanelet2_output_file_path:=<output-osm-path> start_lanelet_id:=<start-lane-id> end_lanelet_id:=<end-lane-id> vehicle_model:=<vehicle-model>\n
The default output map path containing the optimized centerline locates /tmp/lanelet2_map.osm
. If you want to change the output map path, you can remap the path by designating <output-osm-path>
.
When launching the path planning server, rviz is launched as well as follows.
Sometimes the optimized centerline footprints are close to the lanes' boundaries. We can check how close they are with unsafe footprints
marker as follows.
Footprints' color depends on its distance to the boundaries, and text expresses its distance.
By default, footprints' color is
This module subscribes required data (ego-pose, obstacles, etc), and publishes zero velocity limit to keep stopping if any of stop conditions are satisfied.
"},{"location":"planning/surround_obstacle_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/surround_obstacle_checker/#flow-chart","title":"Flow chart","text":""},{"location":"planning/surround_obstacle_checker/#algorithms","title":"Algorithms","text":""},{"location":"planning/surround_obstacle_checker/#check-data","title":"Check data","text":"Check that surround_obstacle_checker
receives no ground pointcloud, dynamic objects and current velocity data.
Calculate distance between ego vehicle and the nearest object. In this function, it calculates the minimum distance between the polygon of ego vehicle and all points in pointclouds and the polygons of dynamic objects.
"},{"location":"planning/surround_obstacle_checker/#stop-requirement","title":"Stop requirement","text":"If it satisfies all following conditions, it plans stopping.
State::PASS
, the distance is less than surround_check_distance
State::STOP
, the distance is less than surround_check_recover_distance
state_clear_time
To prevent chattering, surround_obstacle_checker
manages two states. As mentioned in stop condition section, it prevents chattering by changing threshold to find surround obstacle depending on the states.
State::PASS
: Stop planning is releasedState::STOP
\uff1aWhile stop planning/perception/obstacle_segmentation/pointcloud
sensor_msgs::msg::PointCloud2
Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects
autoware_auto_perception_msgs::msg::PredictedObjects
Dynamic objects /localization/kinematic_state
nav_msgs::msg::Odometry
Current twist /tf
tf2_msgs::msg::TFMessage
TF /tf_static
tf2_msgs::msg::TFMessage
TF static"},{"location":"planning/surround_obstacle_checker/#output","title":"Output","text":"Name Type Description ~/output/velocity_limit_clear_command
tier4_planning_msgs::msg::VelocityLimitClearCommand
Velocity limit clear command ~/output/max_velocity
tier4_planning_msgs::msg::VelocityLimit
Velocity limit command ~/output/no_start_reason
diagnostic_msgs::msg::DiagnosticStatus
No start reason ~/output/stop_reasons
tier4_planning_msgs::msg::StopReasonArray
Stop reasons ~/debug/marker
visualization_msgs::msg::MarkerArray
Marker for visualization ~/debug/footprint
geometry_msgs::msg::PolygonStamped
Ego vehicle base footprint for visualization ~/debug/footprint_offset
geometry_msgs::msg::PolygonStamped
Ego vehicle footprint with surround_check_distance
offset for visualization ~/debug/footprint_recover_offset
geometry_msgs::msg::PolygonStamped
Ego vehicle footprint with surround_check_recover_distance
offset for visualization"},{"location":"planning/surround_obstacle_checker/#parameters","title":"Parameters","text":"Name Type Description Default Range pointcloud.enable_check boolean enable to check surrounding pointcloud false N/A pointcloud.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pointcloud.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pointcloud.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 unknown.enable_check boolean enable to check surrounding unknown objects true N/A unknown.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 unknown.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 unknown.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 car.enable_check boolean enable to check surrounding car true N/A car.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 car.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 car.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 truck.enable_check boolean enable to check surrounding truck true N/A truck.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 truck.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 truck.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bus.enable_check boolean enable to check surrounding bus true N/A bus.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bus.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bus.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 trailer.enable_check boolean enable to check surrounding trailer true N/A trailer.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 trailer.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 trailer.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 motorcycle.enable_check boolean enable to check surrounding motorcycle true N/A motorcycle.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. 
[m] 0.5 \u22650.0 motorcycle.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 motorcycle.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bicycle.enable_check boolean enable to check surrounding bicycle true N/A bicycle.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bicycle.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bicycle.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pedestrian.enable_check boolean enable to check surrounding pedestrian true N/A pedestrian.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pedestrian.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pedestrian.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 surround_check_hysteresis_distance float If no object exists in this hysteresis distance added to the above distance, transit to \"non-surrounding-obstacle\" status [m] 0.3 \u22650.0 state_clear_time float Threshold to clear stop state [s] 2.0 \u22650.0 stop_state_ego_speed float Threshold to check ego vehicle stopped [m/s] 0.1 \u22650.0 publish_debug_footprints boolean Publish vehicle footprint & footprints with surround_check_distance and surround_check_recover_distance offsets. true N/A debug_footprint_label string select the label for debug footprint car ['pointcloud', 'unknown', 'car', 'truck', 'bus', 'trailer', 'motorcycle', 'bicycle', 'pedestrian'] Name Type Description Default value enable_check
bool
Indicates whether each object is considered in the obstacle check target. true
for objects; false
for point clouds surround_check_front_distance
double
If there are objects or point clouds within this distance in front, transition to the \"exist-surrounding-obstacle\" status [m]. 0.5 surround_check_side_distance
double
If there are objects or point clouds within this side distance, transition to the \"exist-surrounding-obstacle\" status [m]. 0.5 surround_check_back_distance
double
If there are objects or point clouds within this back distance, transition to the \"exist-surrounding-obstacle\" status [m]. 0.5 surround_check_hysteresis_distance
double
If no object exists within surround_check_xxx_distance
plus this additional distance, transition to the \"non-surrounding-obstacle\" status [m]. 0.3 state_clear_time
double
Threshold to clear stop state [s] 2.0 stop_state_ego_speed
double
Threshold to check ego vehicle stopped [m/s] 0.1 stop_state_entry_duration_time
double
Threshold to check ego vehicle stopped [s] 0.1 publish_debug_footprints
bool
Publish vehicle footprint with/without offsets true
"},{"location":"planning/surround_obstacle_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"To perform stop planning, it is necessary to get obstacle pointclouds data. Hence, it does not plan stopping if the obstacle is in blind spot.
"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/","title":"Surround Obstacle Checker","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#surround-obstacle-checker","title":"Surround Obstacle Checker","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#purpose","title":"Purpose","text":"surround_obstacle_checker
\u306f\u3001\u81ea\u8eca\u304c\u505c\u8eca\u4e2d\u3001\u81ea\u8eca\u306e\u5468\u56f2\u306b\u969c\u5bb3\u7269\u304c\u5b58\u5728\u3059\u308b\u5834\u5408\u306b\u767a\u9032\u3057\u306a\u3044\u3088\u3046\u306b\u505c\u6b62\u8a08\u753b\u3092\u884c\u3046\u30e2\u30b8\u30e5\u30fc\u30eb\u3067\u3042\u308b\u3002
\u70b9\u7fa4\u3001\u52d5\u7684\u7269\u4f53\u3001\u81ea\u8eca\u901f\u5ea6\u306e\u30c7\u30fc\u30bf\u304c\u53d6\u5f97\u3067\u304d\u3066\u3044\u308b\u304b\u3069\u3046\u304b\u3092\u78ba\u8a8d\u3059\u308b\u3002
"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#get-distance-to-nearest-object","title":"Get distance to nearest object","text":"\u81ea\u8eca\u3068\u6700\u8fd1\u508d\u306e\u969c\u5bb3\u7269\u3068\u306e\u8ddd\u96e2\u3092\u8a08\u7b97\u3059\u308b\u3002 \u3053\u3053\u3067\u306f\u3001\u81ea\u8eca\u306e\u30dd\u30ea\u30b4\u30f3\u3092\u8a08\u7b97\u3057\u3001\u70b9\u7fa4\u306e\u5404\u70b9\u304a\u3088\u3073\u5404\u52d5\u7684\u7269\u4f53\u306e\u30dd\u30ea\u30b4\u30f3\u3068\u306e\u8ddd\u96e2\u3092\u305d\u308c\u305e\u308c\u8a08\u7b97\u3059\u308b\u3053\u3068\u3067\u6700\u8fd1\u508d\u306e\u969c\u5bb3\u7269\u3068\u306e\u8ddd\u96e2\u3092\u6c42\u3081\u308b\u3002
"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#stop-condition","title":"Stop condition","text":"\u6b21\u306e\u6761\u4ef6\u3092\u3059\u3079\u3066\u6e80\u305f\u3059\u3068\u304d\u3001\u81ea\u8eca\u306f\u505c\u6b62\u8a08\u753b\u3092\u884c\u3046\u3002
State::PASS
\u306e\u3068\u304d\u3001surround_check_distance
\u672a\u6e80\u3067\u3042\u308bState::STOP
\u306e\u3068\u304d\u3001surround_check_recover_distance
\u4ee5\u4e0b\u3067\u3042\u308bstate_clear_time
state_clear_time
\u3067\u306f\u72b6\u614b\u3092\u7ba1\u7406\u3057\u3066\u3044\u308b\u3002 Stop condition \u306e\u9805\u3067\u8ff0\u3079\u305f\u3088\u3046\u306b\u3001\u72b6\u614b\u306b\u3088\u3063\u3066\u969c\u5bb3\u7269\u5224\u5b9a\u306e\u3057\u304d\u3044\u5024\u3092\u5909\u66f4\u3059\u308b\u3053\u3068\u3067\u30c1\u30e3\u30bf\u30ea\u30f3\u30b0\u3092\u9632\u6b62\u3057\u3066\u3044\u308b\u3002
State::PASS
\uff1a\u505c\u6b62\u8a08\u753b\u89e3\u9664\u4e2dState::STOP
\uff1a\u505c\u6b62\u8a08\u753b\u4e2d/perception/obstacle_segmentation/pointcloud
sensor_msgs::msg::PointCloud2
Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects
autoware_auto_perception_msgs::msg::PredictedObjects
Dynamic objects /localization/kinematic_state
nav_msgs::msg::Odometry
Current twist /tf
tf2_msgs::msg::TFMessage
TF /tf_static
tf2_msgs::msg::TFMessage
TF static"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#output","title":"Output","text":"Name Type Description ~/output/velocity_limit_clear_command
tier4_planning_msgs::msg::VelocityLimitClearCommand
Velocity limit clear command ~/output/max_velocity
tier4_planning_msgs::msg::VelocityLimit
Velocity limit command ~/output/no_start_reason
diagnostic_msgs::msg::DiagnosticStatus
No start reason ~/output/stop_reasons
tier4_planning_msgs::msg::StopReasonArray
Stop reasons ~/debug/marker
visualization_msgs::msg::MarkerArray
Marker for visualization"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#parameters","title":"Parameters","text":"Name Type Description Default value use_pointcloud
bool
Use pointcloud as obstacle check true
use_dynamic_object
bool
Use dynamic object as obstacle check true
surround_check_distance
double
If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status [m] 0.5 surround_check_recover_distance
double
If no object exists in this distance, transit to \"non-surrounding-obstacle\" status [m] 0.8 state_clear_time
double
Threshold to clear stop state [s] 2.0 stop_state_ego_speed
double
Threshold to check ego vehicle stopped [m/s] 0.1 stop_state_entry_duration_time
double
Threshold to check ego vehicle stopped [s] 0.1"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#assumptions-known-limits","title":"Assumptions / Known limits","text":"\u3053\u306e\u6a5f\u80fd\u304c\u52d5\u4f5c\u3059\u308b\u305f\u3081\u306b\u306f\u969c\u5bb3\u7269\u70b9\u7fa4\u306e\u89b3\u6e2c\u304c\u5fc5\u8981\u306a\u305f\u3081\u3001\u969c\u5bb3\u7269\u304c\u6b7b\u89d2\u306b\u5165\u3063\u3066\u3044\u308b\u5834\u5408\u306f\u505c\u6b62\u8a08\u753b\u3092\u884c\u308f\u306a\u3044\u3002
"},{"location":"sensing/gnss_poser/","title":"gnss_poser","text":""},{"location":"sensing/gnss_poser/#gnss_poser","title":"gnss_poser","text":""},{"location":"sensing/gnss_poser/#purpose","title":"Purpose","text":"The gnss_poser
is a node that subscribes gnss sensing messages and calculates vehicle pose with covariance.
This node subscribes to NavSatFix to publish the pose of base_link. The data in NavSatFix represents the antenna's position. Therefore, it performs a coordinate transformation using the tf from base_link
to the antenna's position. The frame_id of the antenna's position refers to NavSatFix's header.frame_id
. (Note that header.frame_id
in NavSatFix indicates the antenna's frame_id, not the Earth or reference ellipsoid. See also NavSatFix definition.)
If the transformation from base_link
to the antenna cannot be obtained, it outputs the pose of the antenna position without performing coordinate transformation.
/map/map_projector_info
tier4_map_msgs::msg::MapProjectorInfo
map projection info ~/input/fix
sensor_msgs::msg::NavSatFix
gnss status message ~/input/autoware_orientation
autoware_sensing_msgs::msg::GnssInsOrientationStamped
orientation click here for more details"},{"location":"sensing/gnss_poser/#output","title":"Output","text":"Name Type Description ~/output/pose
geometry_msgs::msg::PoseStamped
vehicle pose calculated from gnss sensing data ~/output/gnss_pose_cov
geometry_msgs::msg::PoseWithCovarianceStamped
vehicle pose with covariance calculated from gnss sensing data ~/output/gnss_fixed
tier4_debug_msgs::msg::BoolStamped
gnss fix status"},{"location":"sensing/gnss_poser/#parameters","title":"Parameters","text":""},{"location":"sensing/gnss_poser/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range base_frame string frame id for base_frame base_link N/A gnss_base_frame string frame id for gnss_base_frame gnss_base_link N/A map_frame string frame id for map_frame map N/A use_gnss_ins_orientation boolean use Gnss-Ins orientation true N/A gnss_pose_pub_method integer 0: Instant Value 1: Average Value 2: Median Value. If 0 is chosen buffer_epoch parameter loses affect. 0 \u22650\u22642 buff_epoch integer Buffer epoch 1 \u22650"},{"location":"sensing/gnss_poser/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/gnss_poser/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/gnss_poser/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/gnss_poser/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/gnss_poser/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/image_diagnostics/","title":"image_diagnostics","text":""},{"location":"sensing/image_diagnostics/#image_diagnostics","title":"image_diagnostics","text":""},{"location":"sensing/image_diagnostics/#purpose","title":"Purpose","text":"The image_diagnostics
is a node that checks the status of the input raw image.
The figure below shows the flowchart of the image diagnostics node. Each image is divided into small blocks for block state assessment.
Each small image block's state is assessed as shown in the figure below.
After all image blocks' states are evaluated, the whole image status is summarized as below.
"},{"location":"sensing/image_diagnostics/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/image_diagnostics/#input","title":"Input","text":"Name Type Descriptioninput/raw_image
sensor_msgs::msg::Image
raw image"},{"location":"sensing/image_diagnostics/#output","title":"Output","text":"Name Type Description image_diag/debug/gray_image
sensor_msgs::msg::Image
gray image image_diag/debug/dft_image
sensor_msgs::msg::Image
discrete Fourier transformation image image_diag/debug/diag_block_image
sensor_msgs::msg::Image
each block state colorization image_diag/image_state_diag
tier4_debug_msgs::msg::Int32Stamped
image diagnostics status value /diagnostics
diagnostic_msgs::msg::DiagnosticArray
diagnostics"},{"location":"sensing/image_diagnostics/#parameters","title":"Parameters","text":""},{"location":"sensing/image_diagnostics/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The image_transport_decompressor
is a node that decompresses images.
~/input/compressed_image
sensor_msgs::msg::CompressedImage
compressed image"},{"location":"sensing/image_transport_decompressor/#output","title":"Output","text":"Name Type Description ~/output/raw_image
sensor_msgs::msg::Image
decompressed image"},{"location":"sensing/image_transport_decompressor/#parameters","title":"Parameters","text":""},{"location":"sensing/image_transport_decompressor/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/image_transport_decompressor/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/image_transport_decompressor/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/image_transport_decompressor/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/image_transport_decompressor/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/imu_corrector/","title":"imu_corrector","text":""},{"location":"sensing/imu_corrector/#imu_corrector","title":"imu_corrector","text":""},{"location":"sensing/imu_corrector/#imu_corrector_1","title":"imu_corrector","text":"imu_corrector_node
is a node that corrects IMU data.
Mathematically, we assume the following equation:
\\[ \\tilde{\\omega}(t) = \\omega(t) + b(t) + n(t) \\]where \\(\\tilde{\\omega}\\) denotes observed angular velocity, \\(\\omega\\) denotes true angular velocity, \\(b\\) denotes an offset, and \\(n\\) denotes a gaussian noise. We also assume that \\(n\\sim\\mathcal{N}(0, \\sigma^2)\\).
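As an illustration of this model, a minimal sketch (not the node's exact code) of the resulting correction, using the offset and standard-deviation parameters listed below:

```cpp
// Minimal sketch: subtract the configured offset b from the observed angular
// velocity and overwrite the covariance with the configured stddevs.
#include <sensor_msgs/msg/imu.hpp>

void correct_angular_velocity(
  sensor_msgs::msg::Imu & imu, double offset_x, double offset_y, double offset_z,
  double stddev_xx, double stddev_yy, double stddev_zz)
{
  imu.angular_velocity.x -= offset_x;  // remove roll rate bias
  imu.angular_velocity.y -= offset_y;  // remove pitch rate bias
  imu.angular_velocity.z -= offset_z;  // remove yaw rate bias
  imu.angular_velocity_covariance[0] = stddev_xx * stddev_xx;
  imu.angular_velocity_covariance[4] = stddev_yy * stddev_yy;
  imu.angular_velocity_covariance[8] = stddev_zz * stddev_zz;
}
```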
"},{"location":"sensing/imu_corrector/#input","title":"Input","text":"Name Type Description~input
sensor_msgs::msg::Imu
raw imu data"},{"location":"sensing/imu_corrector/#output","title":"Output","text":"Name Type Description ~output
sensor_msgs::msg::Imu
corrected imu data"},{"location":"sensing/imu_corrector/#parameters","title":"Parameters","text":"Name Type Description angular_velocity_offset_x
double roll rate offset in imu_link [rad/s] angular_velocity_offset_y
double pitch rate offset in imu_link [rad/s] angular_velocity_offset_z
double yaw rate offset in imu_link [rad/s] angular_velocity_stddev_xx
double roll rate standard deviation in imu_link [rad/s] angular_velocity_stddev_yy
double pitch rate standard deviation in imu_link [rad/s] angular_velocity_stddev_zz
double yaw rate standard deviation in imu_link [rad/s] acceleration_stddev
double acceleration standard deviation in imu_link [m/s^2]"},{"location":"sensing/imu_corrector/#gyro_bias_estimator","title":"gyro_bias_estimator","text":"gyro_bias_validator
is a node that validates the bias of the gyroscope. It subscribes to the sensor_msgs::msg::Imu
topic and validates whether the bias of the gyroscope is within the specified range.
Note that the node calculates bias from the gyroscope data by averaging the data only when the vehicle is stopped.
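A minimal sketch of this averaging (an illustration only, not the node's implementation; the stop detection is assumed to be provided externally):

```cpp
// Minimal sketch: collect gyro samples only while the vehicle is stopped and
// report their mean as the bias estimate.
#include <geometry_msgs/msg/vector3.hpp>

#include <algorithm>
#include <cstddef>
#include <vector>

class GyroBiasAverager
{
public:
  void add(const geometry_msgs::msg::Vector3 & gyro, bool vehicle_is_stopped)
  {
    if (vehicle_is_stopped) {
      samples_.push_back(gyro);
    }
  }

  geometry_msgs::msg::Vector3 bias() const
  {
    geometry_msgs::msg::Vector3 mean{};
    for (const auto & g : samples_) {
      mean.x += g.x;
      mean.y += g.y;
      mean.z += g.z;
    }
    const double n = static_cast<double>(std::max<std::size_t>(samples_.size(), 1));
    mean.x /= n;
    mean.y /= n;
    mean.z /= n;
    return mean;
  }

private:
  std::vector<geometry_msgs::msg::Vector3> samples_;
};
```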
"},{"location":"sensing/imu_corrector/#input_1","title":"Input","text":"Name Type Description~/input/imu_raw
sensor_msgs::msg::Imu
raw imu data ~/input/pose
geometry_msgs::msg::PoseWithCovarianceStamped
ndt pose Note that the input pose is assumed to be accurate enough. For example, when using NDT, we assume that the NDT is appropriately converged.
Currently, it is possible to use methods other than NDT as a pose_source
for Autoware, but less accurate methods are not suitable for IMU bias estimation.
In the future, with careful implementation for pose errors, the IMU bias estimated by NDT could potentially be used not only for validation but also for online calibration.
"},{"location":"sensing/imu_corrector/#output_1","title":"Output","text":"Name Type Description~/output/gyro_bias
geometry_msgs::msg::Vector3Stamped
bias of the gyroscope [rad/s]"},{"location":"sensing/imu_corrector/#parameters_1","title":"Parameters","text":"Note that this node also uses angular_velocity_offset_x
, angular_velocity_offset_y
, angular_velocity_offset_z
parameters from imu_corrector.param.yaml
.
gyro_bias_threshold
double threshold of the bias of the gyroscope [rad/s] timer_callback_interval_sec
double interval of the timer callback function [sec] diagnostics_updater_interval_sec
double period of the diagnostics updater [sec] straight_motion_ang_vel_upper_limit
double upper limit of yaw angular velocity, beyond which motion is not considered straight [rad/s]"},{"location":"sensing/livox/livox_tag_filter/","title":"livox_tag_filter","text":""},{"location":"sensing/livox/livox_tag_filter/#livox_tag_filter","title":"livox_tag_filter","text":""},{"location":"sensing/livox/livox_tag_filter/#purpose","title":"Purpose","text":"The livox_tag_filter
is a node that removes noise from the point cloud by using the following tags:
~/input
sensor_msgs::msg::PointCloud2
reference points"},{"location":"sensing/livox/livox_tag_filter/#output","title":"Output","text":"Name Type Description ~/output
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/livox/livox_tag_filter/#parameters","title":"Parameters","text":""},{"location":"sensing/livox/livox_tag_filter/#node-parameters","title":"Node Parameters","text":"Name Type Description ignore_tags
vector ignored tags (See the following table)"},{"location":"sensing/livox/livox_tag_filter/#tag-parameters","title":"Tag Parameters","text":"Bit Description Options 0~1 Point property based on spatial position 00: Normal 01: High confidence level of the noise 10: Moderate confidence level of the noise 11: Low confidence level of the noise 2~3 Point property based on intensity 00: Normal 01: High confidence level of the noise 10: Moderate confidence level of the noise 11: Reserved 4~5 Return number 00: return 0 01: return 1 10: return 2 11: return 3 6~7 Reserved A more detailed description of the Livox tags can be downloaded from external link [1].
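As an illustration of this bit layout, a minimal sketch of reading the 2-bit fields from a tag byte (field names are assumptions of this example):

```cpp
// Minimal sketch: extract the 2-bit fields of a Livox point tag byte.
#include <cstdint>

struct LivoxTag
{
  std::uint8_t spatial_noise;    // bits 0-1: property based on spatial position
  std::uint8_t intensity_noise;  // bits 2-3: property based on intensity
  std::uint8_t return_number;    // bits 4-5: return number
};

inline LivoxTag parse_tag(std::uint8_t tag)
{
  return LivoxTag{
    static_cast<std::uint8_t>(tag & 0x03),          // 00 = normal, otherwise noise level
    static_cast<std::uint8_t>((tag >> 2) & 0x03),   // 00 = normal, otherwise noise level
    static_cast<std::uint8_t>((tag >> 4) & 0x03)};  // return 0..3
}
```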
"},{"location":"sensing/livox/livox_tag_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/livox/livox_tag_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/livox/livox_tag_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/livox/livox_tag_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"[1] https://www.livoxtech.com/downloads
"},{"location":"sensing/livox/livox_tag_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/","title":"pointcloud_preprocessor","text":""},{"location":"sensing/pointcloud_preprocessor/#pointcloud_preprocessor","title":"pointcloud_preprocessor","text":""},{"location":"sensing/pointcloud_preprocessor/#purpose","title":"Purpose","text":"The pointcloud_preprocessor
is a package that includes the following filters:
Detailed descriptions of each filter's algorithm are available at the following links.
Filter Name Description Detail concatenate_data subscribe multiple pointclouds and concatenate them into a pointcloud link crop_box_filter remove points within a given box link distortion_corrector compensate pointcloud distortion caused by ego vehicle's movement during 1 scan link downsample_filter downsampling input pointcloud link outlier_filter remove points caused by hardware problems, rain drops and small insects as a noise link passthrough_filter remove points on the outside of a range in given field (e.g. x, y, z, intensity) link pointcloud_accumulator accumulate pointclouds for a given amount of time link vector_map_filter remove points on the outside of lane by using vector map link vector_map_inside_area_filter remove points inside of vector map area that has given type by parameter link"},{"location":"sensing/pointcloud_preprocessor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/#input","title":"Input","text":"Name Type Description~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/indices
pcl_msgs::msg::Indices
reference indices"},{"location":"sensing/pointcloud_preprocessor/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/pointcloud_preprocessor/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description input_frame
string \" \" input frame id output_frame
string \" \" output frame id max_queue_size
int 5 max queue size of input/output topics use_indices
bool false flag to use pointcloud indices latched_indices
bool false flag to latch pointcloud indices approximate_sync
bool false flag to use approximate sync option"},{"location":"sensing/pointcloud_preprocessor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"pointcloud_preprocessor::Filter
is implemented based on pcl_perception [1] because of this issue.
[1] https://github.com/ros-perception/perception_pcl/blob/ros2/pcl_ros/src/pcl_ros/filters/filter.cpp
"},{"location":"sensing/pointcloud_preprocessor/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/","title":"blockage_diag","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#blockage_diag","title":"blockage_diag","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#purpose","title":"Purpose","text":"To ensure the performance of LiDAR and safety for autonomous driving, the abnormal condition diagnostics feature is needed. LiDAR blockage is abnormal condition of LiDAR when some unwanted objects stitch to and block the light pulses and return signal. This node's purpose is to detect the existing of blockage on LiDAR and its related size and location.
"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#inner-workings-algorithmsblockage-detection","title":"Inner-workings / Algorithms(Blockage detection)","text":"This node bases on the no-return region and its location to decide if it is a blockage.
The logic is showed as below
"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#inner-workings-algorithmsdust-detection","title":"Inner-workings /Algorithms(Dust detection)","text":"About dust detection, morphological processing is implemented. If the lidar's ray cannot be acquired due to dust in the lidar area where the point cloud is considered to return from the ground, black pixels appear as noise in the depth image. The area of noise is found by erosion and dilation these black pixels.
"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#inputs-outputs","title":"Inputs / Outputs","text":"This implementation inherits pointcloud_preprocessor::Filter
class, please refer to the README.
~/input/pointcloud_raw_ex
sensor_msgs::msg::PointCloud2
The raw point cloud data is used to detect the no-return region"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#output","title":"Output","text":"Name Type Description ~/output/blockage_diag/debug/blockage_mask_image
sensor_msgs::msg::Image
The mask image of detected blockage ~/output/blockage_diag/debug/ground_blockage_ratio
tier4_debug_msgs::msg::Float32Stamped
The area ratio of blockage region in ground region ~/output/blockage_diag/debug/sky_blockage_ratio
tier4_debug_msgs::msg::Float32Stamped
The area ratio of blockage region in sky region ~/output/blockage_diag/debug/lidar_depth_map
sensor_msgs::msg::Image
The depth map image of input point cloud ~/output/blockage_diag/debug/single_frame_dust_mask
sensor_msgs::msg::Image
The mask image of detected dusty area in latest single frame ~/output/blockage_diag/debug/multi_frame_dust_mask
sensor_msgs::msg::Image
The mask image of continuous detected dusty area ~/output/blockage_diag/debug/blockage_dust_merged_image
sensor_msgs::msg::Image
The merged image of blockage detection(red) and multi frame dusty area detection(yellow) results ~/output/blockage_diag/debug/ground_dust_ratio
tier4_debug_msgs::msg::Float32Stamped
The ratio of dusty area divided by area where ray usually returns from the ground."},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#parameters","title":"Parameters","text":"Name Type Description blockage_ratio_threshold
float The threshold of the blockage area ratio. If the blockage value exceeds this threshold, the diagnostic state will be set to ERROR. blockage_count_threshold
float The threshold for the number of continuous blockage frames horizontal_ring_id
int The id of horizontal ring of the LiDAR angle_range
vector The effective range of LiDAR vertical_bins
int The LiDAR channel number model
string The LiDAR model blockage_buffering_frames
int The number of buffering about blockage detection [range:1-200] blockage_buffering_interval
int The interval of buffering about blockage detection dust_ratio_threshold
float The threshold of dusty area ratio dust_count_threshold
int The threshold for the number of continuous frames that include a dusty area dust_kernel_size
int The kernel size of morphology processing in dusty area detection dust_buffering_frames
int The number of buffering about dusty area detection [range:1-200] dust_buffering_interval
int The interval of buffering about dusty area detection"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Many self-driving cars combine multiple LiDARs to expand the sensing range. Therefore, a function to combine a plurality of point clouds is required.
To combine multiple sensor data with a similar timestamp, the message_filters is often used in the ROS-based system, but this requires the assumption that all inputs can be received. Since safety must be strongly considered in autonomous driving, the point clouds concatenate node must be designed so that even if one sensor fails, the remaining sensor information can be output.
"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The figure below represents the reception time of each sensor data and how it is combined in the case.
"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#input","title":"Input","text":"Name Type Description~/input/twist
geometry_msgs::msg::TwistWithCovarianceStamped
The vehicle odometry is used to interpolate the timestamp of each sensor data"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::Pointcloud2
concatenated point clouds"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#parameters","title":"Parameters","text":"Name Type Default Value Description input/points
vector of string [] input topic names that type must be sensor_msgs::msg::Pointcloud2
input_frame
string \"\" input frame id output_frame
string \"\" output frame id max_queue_size
int 5 max queue size of input/output topics"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description timeout_sec
double 0.1 tolerance of time to publish next pointcloud [s]. When this time limit is exceeded, the filter concatenates and publishes the pointcloud, even if not all the point clouds are subscribed. input_offset
vector of double [] This parameter can control the waiting time for each input sensor pointcloud [s]. You must set the number of offsets equal to the number of input pointclouds. For its tuning, please see the actual usage page. publish_synchronized_pointcloud
bool false If true, publish the time synchronized pointclouds. All input pointclouds are transformed and then re-published as message named <original_msg_name>_synchronized
. input_twist_topic_type
std::string twist Topic type for twist. Currently support twist
or odom
."},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#actual-usage","title":"Actual Usage","text":"For the example of actual usage of this node, please refer to the preprocessor.launch.py file.
"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#how-to-tuning-timeout_sec-and-input_offset","title":"How to tuning timeout_sec and input_offset","text":"The values in timeout_sec
and input_offset
are used in the timer_callback to control concatenation timings.
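The table below defines these two values; as an illustration, here is a minimal sketch of the resulting wait-time rule (names are assumptions of this example, not the node's members):

```cpp
// Minimal sketch: after a pointcloud arrives, topic i effectively waits
// timeout_sec - input_offset[i] before concatenation is forced.
#include <vector>

std::vector<double> effective_wait_times(
  double timeout_sec, const std::vector<double> & input_offset)
{
  std::vector<double> waits;
  waits.reserve(input_offset.size());
  for (const double offset : input_offset) {
    // A larger offset (for the last-coming topic) means a shorter wait.
    waits.push_back(timeout_sec - offset);
  }
  return waits;
}
```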
timeout_sec
timeout_sec
- input_offset
timeout_sec
timeout sec for the default timer. To avoid mis-concatenation, this value must be shorter than the sampling time. input_offset
timeout extension when a pointcloud arrives in the buffer. The amount of waiting time will be timeout_sec
- input_offset
. So, you will need to set a larger value for the last-coming pointcloud and a smaller one for fore-coming pointclouds."},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#node-separation-options-for-future","title":"Node separation options for future","text":"Since the pointcloud concatenation has two processes, \"time synchronization\" and \"pointcloud concatenation\", it is possible to separate these processes.
In the future, the nodes will be completely separated in order to achieve a loosely coupled node architecture, but currently both nodes can be selected for backward compatibility (See this PR).
"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#assumptions-known-limits","title":"Assumptions / Known limits","text":"It is necessary to assume that the vehicle odometry value exists, the sensor data and odometry timestamp are correct, and the TF from base_link
to sensor_frame
is also correct.
The crop_box_filter
is a node that removes points within a given box region. This filter is used to remove the points that hit the vehicle itself.
pcl::CropBox
is used, which filters all points inside a given box.
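A minimal sketch of this usage (an illustration only; the bounds correspond to the min_*/max_* parameters below):

```cpp
// Minimal sketch: pcl::CropBox with setNegative(true) discards the points
// inside the box (the ego vehicle body) and keeps everything else.
#include <pcl/filters/crop_box.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr remove_vehicle_points(
  const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & input, float min_x, float min_y, float min_z,
  float max_x, float max_y, float max_z)
{
  pcl::CropBox<pcl::PointXYZ> crop;
  crop.setInputCloud(input);
  crop.setMin(Eigen::Vector4f(min_x, min_y, min_z, 1.0f));
  crop.setMax(Eigen::Vector4f(max_x, max_y, max_z, 1.0f));
  crop.setNegative(true);  // keep points outside the box
  pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
  crop.filter(*output);
  return output;
}
```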
This implementation inherit pointcloud_preprocessor::Filter
class, please refer to the README.
This implementation inherit pointcloud_preprocessor::Filter
class, please refer to the README.
min_x
double -1.0 x-coordinate minimum value for crop range max_x
double 1.0 x-coordinate maximum value for crop range min_y
double -1.0 y-coordinate minimum value for crop range max_y
double 1.0 y-coordinate maximum value for crop range min_z
double -1.0 z-coordinate minimum value for crop range max_z
double 1.0 z-coordinate maximum value for crop range"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/","title":"distortion_corrector","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#distortion_corrector","title":"distortion_corrector","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#purpose","title":"Purpose","text":"The distortion_corrector
is a node that compensates pointcloud distortion caused by ego vehicle's movement during 1 scan.
Since the LiDAR sensor scans by rotating an internal laser, the resulting point cloud will be distorted if the ego-vehicle moves during a single scan (as shown by the figure below). The node corrects this by interpolating sensor data using odometry of ego-vehicle.
"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The offset equation is given by $ TimeOffset = (55.296 \\mu s SequenceIndex) + (2.304 \\mu s DataPointIndex) $
To calculate the exact point time, add the TimeOffset to the timestamp. $ ExactPointTime = TimeStamp + TimeOffset $
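A minimal sketch of these two formulas (the constants are the firing timings from the offset equation above; other sensors use different values):

```cpp
// Minimal sketch: compute the exact acquisition time of one point.
#include <cstdint>

double exact_point_time(
  double timestamp_sec, std::uint32_t sequence_index, std::uint32_t data_point_index)
{
  const double time_offset_sec = 55.296e-6 * sequence_index + 2.304e-6 * data_point_index;
  return timestamp_sec + time_offset_sec;
}
```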
"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#input","title":"Input","text":"Name Type Description~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/twist
geometry_msgs::msg::TwistWithCovarianceStamped
twist ~/input/imu
sensor_msgs::msg::Imu
imu data"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description timestamp_field_name
string \"time_stamp\" time stamp field name use_imu
bool true use gyroscope for yaw rate if true, else use vehicle status"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/","title":"downsample_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#downsample_filter","title":"downsample_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#purpose","title":"Purpose","text":"The downsample_filter
is a node that reduces the number of points.
pcl::VoxelGridNearestCentroid
is used. The algorithm is described in tier4_pcl_extensions
pcl::RandomSample
is used, which samples points with uniform probability.
pcl::VoxelGrid
is used, which approximates the points in each voxel with their centroid.
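A minimal sketch of this usage (an illustration only; the leaf sizes correspond to the voxel_size_* parameters below):

```cpp
// Minimal sketch: pcl::VoxelGrid replaces the points in each voxel by their
// centroid, reducing the cloud size.
#include <pcl/filters/voxel_grid.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr voxel_downsample(
  const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & input, float voxel_size_x, float voxel_size_y,
  float voxel_size_z)
{
  pcl::VoxelGrid<pcl::PointXYZ> voxel_grid;
  voxel_grid.setInputCloud(input);
  voxel_grid.setLeafSize(voxel_size_x, voxel_size_y, voxel_size_z);
  pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
  voxel_grid.filter(*output);
  return output;
}
```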
These implementations inherit pointcloud_preprocessor::Filter
class, please refer to the README.
These implementations inherit pointcloud_preprocessor::Filter
class, please refer to the README.
voxel_size_x
double 0.3 voxel size x [m] voxel_size_y
double 0.3 voxel size y [m] voxel_size_z
double 0.1 voxel size z [m]"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#random-downsample-filter_1","title":"Random Downsample Filter","text":"Name Type Default Value Description sample_num
int 1500 number of indices to be sampled"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#voxel-grid-downsample-filter_1","title":"Voxel Grid Downsample Filter","text":"Name Type Default Value Description voxel_size_x
double 0.3 voxel size x [m] voxel_size_y
double 0.3 voxel size y [m] voxel_size_z
double 0.1 voxel size z [m]"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/","title":"dual_return_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#dual_return_outlier_filter","title":"dual_return_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#purpose","title":"Purpose","text":"The purpose is to remove point cloud noise such as fog and rain and publish visibility as a diagnostic topic.
"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"This node can remove rain and fog by considering the light reflected from the object in two stages according to the attenuation factor. The dual_return_outlier_filter
is named because it removes noise using data that contains two types of return values separated by attenuation factor, as shown in the figure below.
Therefore, in order to use this node, the sensor driver must publish custom data including return_type
. Please refer to the PointXYZIRADRT data structure.
Another feature of this node is that it publishes visibility as a diagnostic topic. With this function, for example, in heavy rain, the sensing module can notify that the processing performance has reached its limit, which can lead to ensuring the safety of the vehicle.
In some complicated road scenes where normal objects also reflect the light in two stages, for instance plants, leaves, some plastic net etc, the visibility faces some drop in fine weather condition. To deal with that, optional settings of a region of interest (ROI) are added.
Fixed_xyz_ROI
mode: Visibility estimation based on the weak points in a fixed cuboid surrounding region of ego-vehicle, defined by x, y, z in base_link perspective.Fixed_azimuth_ROI
mode: Visibility estimation based on the weak points in a fixed surrounding region of ego-vehicle, defined by azimuth and distance of LiDAR perspective.When select 2 fixed ROI modes, due to the range of weak points is shrink, the sensitivity of visibility is decrease so that a trade of between weak_first_local_noise_threshold
and visibility_threshold
is needed.
The figure below describe how the node works.
The below picture shows the ROI options.
"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
/dual_return_outlier_filter/frequency_image
sensor_msgs::msg::Image
The histogram image that represent visibility /dual_return_outlier_filter/visibility
tier4_debug_msgs::msg::Float32Stamped
A representation of visibility with a value from 0 to 1 /dual_return_outlier_filter/pointcloud_noise
sensor_msgs::msg::Pointcloud2
The pointcloud removed as noise"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#node-parameters","title":"Node Parameters","text":"This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
vertical_bins
int The number of vertical bin for visibility histogram max_azimuth_diff
float Threshold for ring_outlier_filter weak_first_distance_ratio
double Threshold for ring_outlier_filter general_distance_ratio
double Threshold for ring_outlier_filter weak_first_local_noise_threshold
int The parameter for determining whether it is noise visibility_error_threshold
float When the percentage of white pixels in the binary histogram falls below this parameter the diagnostic status becomes ERR visibility_warn_threshold
float When the percentage of white pixels in the binary histogram falls below this parameter the diagnostic status becomes WARN roi_mode
string The name of ROI mode for switching min_azimuth_deg
float The left limit of azimuth for Fixed_azimuth_ROI
mode max_azimuth_deg
float The right limit of azimuth for Fixed_azimuth_ROI
mode max_distance
float The limit distance for Fixed_azimuth_ROI
mode x_max
float Maximum of x for Fixed_xyz_ROI
mode x_min
float Minimum of x for Fixed_xyz_ROI
mode y_max
float Maximum of y for Fixed_xyz_ROI
mode y_min
float Minimum of y for Fixed_xyz_ROI
mode z_max
float Maximum of z for Fixed_xyz_ROI
mode z_min
float Minimum of z for Fixed_xyz_ROI
mode"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Not recommended for use as it is under development. Input data must be PointXYZIRADRT
type data including return_type
.
The outlier_filter
is a package for filtering outlier of points.
The passthrough_filter
is a node that removes points on the outside of a range in a given field (e.g. x, y, z, intensity, ring, etc).
~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/indices
pcl_msgs::msg::Indices
reference indices"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_limit_min
int 0 minimum allowed field value filter_limit_max
int 127 maximum allowed field value filter_field_name
string \"ring\" filtering field name keep_organized
bool false flag to keep indices structure filter_limit_negative
bool false flag to return whether the data is inside limit or not"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/","title":"pointcloud_accumulator","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#pointcloud_accumulator","title":"pointcloud_accumulator","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#purpose","title":"Purpose","text":"The pointcloud_accumulator
is a node that accumulates pointclouds for a given amount of time.
~/input/points
sensor_msgs::msg::PointCloud2
reference points"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description accumulation_time_sec
double 2.0 accumulation period [s] pointcloud_buffer_size
int 50 buffer size"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/","title":"radius_search_2d_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#radius_search_2d_outlier_filter","title":"radius_search_2d_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#purpose","title":"Purpose","text":"The purpose is to remove point cloud noise such as insects and rain.
"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"RadiusOutlierRemoval filter which removes all indices in its input cloud that don\u2019t have at least some number of neighbors within a certain range.
The description above is quoted from [1]. pcl::search::KdTree
[2] is used to implement this package.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
min_neighbors
int If points in the circle centered on reference point is less than min_neighbors
, a reference point is judged as outlier search_radius
double Searching number of points included in search_radius
"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Since the method is to count the number of points contained in the cylinder with the direction of gravity as the direction of the cylinder axis, it is a prerequisite that the ground has been removed.
"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#referencesexternal-links","title":"References/External links","text":"[1] https://pcl.readthedocs.io/projects/tutorials/en/latest/remove_outliers.html
[2] https://pcl.readthedocs.io/projects/tutorials/en/latest/kdtree_search.html#kdtree-search
"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/","title":"ring_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#ring_outlier_filter","title":"ring_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#purpose","title":"Purpose","text":"The purpose is to remove point cloud noise such as insects and rain.
"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"A method of operating scan in chronological order and removing noise based on the rate of change in the distance between points
"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
distance_ratio
double 1.03 object_length_threshold
double 0.1 num_points_threshold
int 4 max_rings_num
uint_16 128 max_points_num_per_ring
size_t 4000 Set this value large enough such that HFoV / resolution < max_points_num_per_ring
"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"It is a prerequisite to input a scan point cloud in chronological order. In this repository it is defined as blow structure (please refer to PointXYZIRADT).
The vector_map_filter
is a node that removes points on the outside of lane by using vector map.
~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description voxel_size_x
double 0.04 voxel size voxel_size_y
double 0.04 voxel size"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/","title":"vector_map_inside_area_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#vector_map_inside_area_filter","title":"vector_map_inside_area_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#purpose","title":"Purpose","text":"The vector_map_inside_area_filter
is a node that removes points inside the vector map area that has given type by parameter.
polygon_type
This implementation inherits pointcloud_preprocessor::Filter
class, so please see also README.
~/input
sensor_msgs::msg::PointCloud2
input points ~/input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map used for filtering points"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#output","title":"Output","text":"Name Type Description ~/output
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description polygon_type
string polygon type to be filtered"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/","title":"voxel_grid_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#voxel_grid_outlier_filter","title":"voxel_grid_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#purpose","title":"Purpose","text":"The purpose is to remove point cloud noise such as insects and rain.
"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Removing point cloud noise based on the number of points existing within a voxel. The radius_search_2d_outlier_filter is better for accuracy, but this method has the advantage of low calculation cost.
"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
voxel_size_x
double 0.3 the voxel size along x-axis [m] voxel_size_y
double 0.3 the voxel size along y-axis [m] voxel_size_z
double 0.1 the voxel size along z-axis [m] voxel_points_threshold
int 2 the minimum number of points in each voxel"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/radar_scan_to_pointcloud2/","title":"radar_scan_to_pointcloud2","text":""},{"location":"sensing/radar_scan_to_pointcloud2/#radar_scan_to_pointcloud2","title":"radar_scan_to_pointcloud2","text":""},{"location":"sensing/radar_scan_to_pointcloud2/#radar_scan_to_pointcloud2_node","title":"radar_scan_to_pointcloud2_node","text":"radar_msgs::msg::RadarScan
to sensor_msgs::msg::PointCloud2
true
. publish_doppler_pointcloud bool Whether publish radar pointcloud whose intensity is doppler velocity. Default is false
."},{"location":"sensing/radar_scan_to_pointcloud2/#how-to-launch","title":"How to launch","text":"ros2 launch radar_scan_to_pointcloud2 radar_scan_to_pointcloud2.launch.xml\n
"},{"location":"sensing/radar_static_pointcloud_filter/","title":"radar_static_pointcloud_filter","text":""},{"location":"sensing/radar_static_pointcloud_filter/#radar_static_pointcloud_filter","title":"radar_static_pointcloud_filter","text":""},{"location":"sensing/radar_static_pointcloud_filter/#radar_static_pointcloud_filter_node","title":"radar_static_pointcloud_filter_node","text":"Extract static/dynamic radar pointcloud by using doppler velocity and ego motion. Calculation cost is O(n). n
is the number of radar pointcloud.
ros2 launch radar_static_pointcloud_filter radar_static_pointcloud_filter.launch\n
"},{"location":"sensing/radar_static_pointcloud_filter/#algorithm","title":"Algorithm","text":""},{"location":"sensing/radar_threshold_filter/","title":"radar_threshold_filter","text":""},{"location":"sensing/radar_threshold_filter/#radar_threshold_filter","title":"radar_threshold_filter","text":""},{"location":"sensing/radar_threshold_filter/#radar_threshold_filter_node","title":"radar_threshold_filter_node","text":"Remove noise from radar return by threshold.
Calculation cost is O(n). n
is the number of radar return.
ros2 launch radar_threshold_filter radar_threshold_filter.launch.xml\n
"},{"location":"sensing/radar_tracks_noise_filter/","title":"radar_tracks_noise_filter","text":""},{"location":"sensing/radar_tracks_noise_filter/#radar_tracks_noise_filter","title":"radar_tracks_noise_filter","text":"This package contains a radar object filter module for radar_msgs/msg/RadarTrack
. This package can filter noise objects in RadarTracks.
The core algorithm of this package is RadarTrackCrossingNoiseFilterNode::isNoise()
function. See the function and the parameters for details.
Radar can detect x-axis velocity as doppler velocity, but cannot detect y-axis velocity. Some radar can estimate y-axis velocity inside the device, but it sometimes lack precision. In y-axis threshold filter, if y-axis velocity of RadarTrack is more than velocity_y_threshold
, it treats as noise objects.
~/input/tracks
radar_msgs/msg/RadarTracks.msg 3D detected tracks."},{"location":"sensing/radar_tracks_noise_filter/#output","title":"Output","text":"Name Type Description ~/output/noise_tracks
radar_msgs/msg/RadarTracks.msg Noise objects ~/output/filtered_tracks
radar_msgs/msg/RadarTracks.msg Filtered objects"},{"location":"sensing/radar_tracks_noise_filter/#parameters","title":"Parameters","text":"Name Type Description Default value velocity_y_threshold
double Y-axis velocity threshold [m/s]. If the y-axis velocity of a RadarTrack is greater than velocity_y_threshold
, it treats as noise objects. 7.0"},{"location":"sensing/tier4_pcl_extensions/","title":"tier4_pcl_extensions","text":""},{"location":"sensing/tier4_pcl_extensions/#tier4_pcl_extensions","title":"tier4_pcl_extensions","text":""},{"location":"sensing/tier4_pcl_extensions/#purpose","title":"Purpose","text":"The tier4_pcl_extensions
is a pcl extension library. The voxel grid filter in this package works with a different algorithm than the original one.
[1] https://pointclouds.org/documentation/tutorials/voxel_grid.html
"},{"location":"sensing/tier4_pcl_extensions/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/vehicle_velocity_converter/","title":"vehicle_velocity_converter","text":""},{"location":"sensing/vehicle_velocity_converter/#vehicle_velocity_converter","title":"vehicle_velocity_converter","text":""},{"location":"sensing/vehicle_velocity_converter/#purpose","title":"Purpose","text":"This package converts autoware_auto_vehicle_msgs::msg::VehicleReport message to geometry_msgs::msg::TwistWithCovarianceStamped for gyro odometer node.
"},{"location":"sensing/vehicle_velocity_converter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/vehicle_velocity_converter/#input","title":"Input","text":"Name Type Descriptionvelocity_status
autoware_auto_vehicle_msgs::msg::VehicleReport
vehicle velocity"},{"location":"sensing/vehicle_velocity_converter/#output","title":"Output","text":"Name Type Description twist_with_covariance
geometry_msgs::msg::TwistWithCovarianceStamped
twist with covariance converted from VehicleReport"},{"location":"sensing/vehicle_velocity_converter/#parameters","title":"Parameters","text":"Name Type Description speed_scale_factor
double speed scale factor (ideal value is 1.0) frame_id
string frame id for output message velocity_stddev_xx
double standard deviation for vx angular_velocity_stddev_zz
double standard deviation for yaw rate"},{"location":"simulator/dummy_perception_publisher/","title":"dummy_perception_publisher","text":""},{"location":"simulator/dummy_perception_publisher/#dummy_perception_publisher","title":"dummy_perception_publisher","text":""},{"location":"simulator/dummy_perception_publisher/#purpose","title":"Purpose","text":"This node publishes the result of the dummy detection with the type of perception.
"},{"location":"simulator/dummy_perception_publisher/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/dummy_perception_publisher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/dummy_perception_publisher/#input","title":"Input","text":"Name Type Description/tf
tf2_msgs/TFMessage
TF (self-pose) input/object
dummy_perception_publisher::msg::Object
dummy detection objects"},{"location":"simulator/dummy_perception_publisher/#output","title":"Output","text":"Name Type Description output/dynamic_object
tier4_perception_msgs::msg::DetectedObjectsWithFeature
dummy detection objects output/points_raw
sensor_msgs::msg::PointCloud2
point cloud of objects output/debug/ground_truth_objects
autoware_auto_perception_msgs::msg::TrackedObjects
ground truth objects"},{"location":"simulator/dummy_perception_publisher/#parameters","title":"Parameters","text":"Name Type Default Value Explanation visible_range
double 100.0 sensor visible range [m] detection_successful_rate
double 0.8 sensor detection rate. (min) 0.0 - 1.0(max) enable_ray_tracing
bool true if True, use ray tracing use_object_recognition
bool true if True, publish objects topic use_base_link_z
bool true if True, node uses z coordinate of ego base_link publish_ground_truth
bool false if True, publish ground truth objects use_fixed_random_seed
bool false if True, use fixed random seed random_seed
int 0 random seed"},{"location":"simulator/dummy_perception_publisher/#node-parameters","title":"Node Parameters","text":"None.
"},{"location":"simulator/dummy_perception_publisher/#core-parameters","title":"Core Parameters","text":"None.
"},{"location":"simulator/dummy_perception_publisher/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"simulator/fault_injection/","title":"fault_injection","text":""},{"location":"simulator/fault_injection/#fault_injection","title":"fault_injection","text":""},{"location":"simulator/fault_injection/#purpose","title":"Purpose","text":"This package is used to convert pseudo system faults from PSim to Diagnostics and notify Autoware. The component diagram is as follows:
"},{"location":"simulator/fault_injection/#test","title":"Test","text":"source install/setup.bash\ncd fault_injection\nlaunch_test test/test_fault_injection_node.test.py\n
"},{"location":"simulator/fault_injection/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/fault_injection/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/fault_injection/#input","title":"Input","text":"Name Type Description ~/input/simulation_events
tier4_simulation_msgs::msg::SimulationEvents
simulation events"},{"location":"simulator/fault_injection/#output","title":"Output","text":"None.
"},{"location":"simulator/fault_injection/#parameters","title":"Parameters","text":"None.
"},{"location":"simulator/fault_injection/#node-parameters","title":"Node Parameters","text":"None.
"},{"location":"simulator/fault_injection/#core-parameters","title":"Core Parameters","text":"None.
"},{"location":"simulator/fault_injection/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"simulator/simple_planning_simulator/","title":"simple_planning_simulator","text":""},{"location":"simulator/simple_planning_simulator/#simple_planning_simulator","title":"simple_planning_simulator","text":""},{"location":"simulator/simple_planning_simulator/#purpose-use-cases","title":"Purpose / Use cases","text":"This node simulates the vehicle motion for a vehicle command in 2D using a simple vehicle model.
"},{"location":"simulator/simple_planning_simulator/#design","title":"Design","text":"The purpose of this simulator is for the integration test of planning and control modules. This does not simulate sensing or perception, but is implemented in pure c++ only and works without GPU.
"},{"location":"simulator/simple_planning_simulator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"geometry_msgs/msg/PoseWithCovarianceStamped
] : for initial poseautoware_auto_msgs/msg/AckermannControlCommand
] : target command to drive a vehicleautoware_auto_msgs/msg/AckermannControlCommand
] : manual target command to drive a vehicle (used when control_mode_request = Manual)autoware_auto_vehicle_msgs/msg/GearCommand
] : target gear command.autoware_auto_vehicle_msgs/msg/GearCommand
] : target gear command (used when control_mode_request = Manual)autoware_auto_vehicle_msgs/msg/TurnIndicatorsCommand
] : target turn indicator commandautoware_auto_vehicle_msgs/msg/HazardLightsCommand
] : target hazard lights commandtier4_vehicle_msgs::srv::ControlModeRequest
] : mode change for Auto/Manual drivingtf2_msgs/msg/TFMessage
] : simulated vehicle pose (base_link)nav_msgs/msg/Odometry
] : simulated vehicle pose and twistautoware_auto_vehicle_msgs/msg/SteeringReport
] : simulated steering angleautoware_auto_vehicle_msgs/msg/ControlModeReport
] : current control mode (Auto/Manual)autoware_auto_vehicle_msgs/msg/ControlModeReport
] : simulated gearautoware_auto_vehicle_msgs/msg/ControlModeReport
] : simulated turn indicator statusautoware_auto_vehicle_msgs/msg/ControlModeReport
] : simulated hazard lights statusinput/initialpose
topic is published. \"INITIAL_POSE_TOPIC\" add_measurement_noise bool If true, the Gaussian noise is added to the simulated results. true pos_noise_stddev double Standard deviation for position noise 0.01 rpy_noise_stddev double Standard deviation for Euler angle noise 0.0001 vel_noise_stddev double Standard deviation for longitudinal velocity noise 0.0 angvel_noise_stddev double Standard deviation for angular velocity noise 0.0 steer_noise_stddev double Standard deviation for steering angle noise 0.0001 measurement_steer_bias double Measurement bias for steering angle 0.0"},{"location":"simulator/simple_planning_simulator/#vehicle-model-parameters","title":"Vehicle Model Parameters","text":""},{"location":"simulator/simple_planning_simulator/#vehicle_model_type-options","title":"vehicle_model_type options","text":"IDEAL_STEER_VEL
IDEAL_STEER_ACC
IDEAL_STEER_ACC_GEARED
DELAY_STEER_VEL
DELAY_STEER_ACC
DELAY_STEER_ACC_GEARED
DELAY_STEER_MAP_ACC_GEARED
: applies 1D dynamics and time delay to the steering and acceleration commands. The simulated acceleration is determined by a value converted through the provided acceleration map. This model is valuable for an accurate simulation with acceleration deviations in a real vehicle.The IDEAL
model moves ideally as commanded, while the DELAY
model moves based on a 1st-order with time delay model. The STEER
means the model receives the steer command. The VEL
means the model receives the target velocity command, while the ACC
model receives the target acceleration command. The GEARED
suffix means that the motion will consider the gear command: the vehicle moves only one direction following the gear.
The table below shows which models correspond to what parameters. The model names are written in abbreviated form (e.g. IDEAL_STEER_VEL = I_ST_V).
Name Type Description I_ST_V I_ST_A I_ST_A_G D_ST_V D_ST_A D_ST_A_G D_ST_M_ACC_G Default value unit acc_time_delay double dead time for the acceleration input x x x x o o o 0.1 [s] steer_time_delay double dead time for the steering input x x x o o o o 0.24 [s] vel_time_delay double dead time for the velocity input x x x o x x x 0.25 [s] acc_time_constant double time constant of the 1st-order acceleration dynamics x x x x o o o 0.1 [s] steer_time_constant double time constant of the 1st-order steering dynamics x x x o o o o 0.27 [s] steer_dead_band double dead band for steering angle x x x o o o x 0.0 [rad] vel_time_constant double time constant of the 1st-order velocity dynamics x x x o x x x 0.5 [s] vel_lim double limit of velocity x x x o o o o 50.0 [m/s] vel_rate_lim double limit of acceleration x x x o o o o 7.0 [m/ss] steer_lim double limit of steering angle x x x o o o o 1.0 [rad] steer_rate_lim double limit of steering angle change rate x x x o o o o 5.0 [rad/s] debug_acc_scaling_factor double scaling factor for accel command x x x x o o x 1.0 [-] debug_steer_scaling_factor double scaling factor for steer command x x x x o o x 1.0 [-] acceleration_map_path string path to csv file for acceleration map which converts velocity and ideal acceleration to actual acceleration x x x x x x o - [-]The acceleration_map
is used only for DELAY_STEER_MAP_ACC_GEARED
and it shows the acceleration command on the vertical axis and the current velocity on the horizontal axis, with each cell representing the converted acceleration command that is actually used in the simulator's motion calculation. Values in between are linearly interpolated.
Example of acceleration_map.csv
default, 0.00, 1.39, 2.78, 4.17, 5.56, 6.94, 8.33, 9.72, 11.11, 12.50, 13.89, 15.28, 16.67\n-4.0, -4.40, -4.36, -4.38, -4.12, -4.20, -3.94, -3.98, -3.80, -3.77, -3.76, -3.59, -3.50, -3.40\n-3.5, -4.00, -3.91, -3.85, -3.64, -3.68, -3.55, -3.42, -3.24, -3.25, -3.00, -3.04, -2.93, -2.80\n-3.0, -3.40, -3.37, -3.33, -3.00, -3.00, -2.90, -2.88, -2.65, -2.43, -2.44, -2.43, -2.39, -2.30\n-2.5, -2.80, -2.72, -2.72, -2.62, -2.41, -2.43, -2.26, -2.18, -2.11, -2.03, -1.96, -1.91, -1.85\n-2.0, -2.30, -2.24, -2.12, -2.02, -1.92, -1.81, -1.67, -1.58, -1.51, -1.49, -1.40, -1.35, -1.30\n-1.5, -1.70, -1.61, -1.47, -1.46, -1.40, -1.37, -1.29, -1.24, -1.10, -0.99, -0.83, -0.80, -0.78\n-1.0, -1.30, -1.28, -1.10, -1.09, -1.04, -1.02, -0.98, -0.89, -0.82, -0.61, -0.52, -0.54, -0.56\n-0.8, -0.96, -0.90, -0.82, -0.74, -0.70, -0.65, -0.63, -0.59, -0.55, -0.44, -0.39, -0.39, -0.35\n-0.6, -0.77, -0.71, -0.67, -0.65, -0.58, -0.52, -0.51, -0.50, -0.40, -0.33, -0.30, -0.31, -0.30\n-0.4, -0.45, -0.40, -0.45, -0.44, -0.38, -0.35, -0.31, -0.30, -0.26, -0.30, -0.29, -0.31, -0.25\n-0.2, -0.24, -0.24, -0.25, -0.22, -0.23, -0.25, -0.27, -0.29, -0.24, -0.22, -0.17, -0.18, -0.12\n 0.0, 0.00, 0.00, -0.05, -0.05, -0.05, -0.05, -0.08, -0.08, -0.08, -0.08, -0.10, -0.10, -0.10\n 0.2, 0.16, 0.12, 0.02, 0.02, 0.00, 0.00, -0.05, -0.05, -0.05, -0.05, -0.08, -0.08, -0.08\n 0.4, 0.38, 0.30, 0.22, 0.25, 0.24, 0.23, 0.20, 0.16, 0.16, 0.14, 0.10, 0.05, 0.05\n 0.6, 0.52, 0.52, 0.51, 0.49, 0.43, 0.40, 0.35, 0.33, 0.33, 0.33, 0.32, 0.34, 0.34\n 0.8, 0.82, 0.81, 0.78, 0.68, 0.63, 0.56, 0.53, 0.48, 0.43, 0.41, 0.37, 0.38, 0.40\n 1.0, 1.00, 1.08, 1.01, 0.88, 0.76, 0.69, 0.66, 0.58, 0.54, 0.49, 0.45, 0.40, 0.40\n 1.5, 1.52, 1.50, 1.38, 1.26, 1.14, 1.03, 0.91, 0.82, 0.67, 0.61, 0.51, 0.41, 0.41\n 2.0, 1.80, 1.80, 1.64, 1.43, 1.25, 1.11, 0.96, 0.81, 0.70, 0.59, 0.51, 0.42, 0.42\n
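As a hedged sketch (assumed helper names, not the simulator's actual code), the lookup with linear interpolation between neighbouring cells could be written as:

#include <algorithm>
#include <vector>

// Bilinear interpolation over the acceleration map: rows are commanded
// accelerations, columns are current velocities, both assumed ascending.
double lookup_acceleration(
  const std::vector<double> & acc_cmds, const std::vector<double> & vels,
  const std::vector<std::vector<double>> & table, double acc_cmd, double vel)
{
  auto left = [](const std::vector<double> & axis, double v) {
    // Index of the left neighbour, clamped so that i and i + 1 stay valid.
    auto it = std::upper_bound(axis.begin(), axis.end(), v);
    return std::clamp<int>(static_cast<int>(it - axis.begin()) - 1, 0, static_cast<int>(axis.size()) - 2);
  };
  const int i = left(acc_cmds, acc_cmd);
  const int j = left(vels, vel);
  const double s = std::clamp((acc_cmd - acc_cmds[i]) / (acc_cmds[i + 1] - acc_cmds[i]), 0.0, 1.0);
  const double t = std::clamp((vel - vels[j]) / (vels[j + 1] - vels[j]), 0.0, 1.0);
  // Blend the four surrounding cells; out-of-range queries clamp to the edge.
  return (1 - s) * ((1 - t) * table[i][j] + t * table[i][j + 1]) +
         s * ((1 - t) * table[i + 1][j] + t * table[i + 1][j + 1]);
}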
Note: The steering/velocity/acceleration dynamics are modeled as a first-order system with dead time in the delay models. The time constant is defined as the time it takes for the step response to rise to 63% of its final value. The dead time is the delay in the response to a control input.
"},{"location":"simulator/simple_planning_simulator/#default-tf-configuration","title":"Default TF configuration","text":"Since the vehicle outputs odom
->base_link
tf, this simulator outputs the tf with the same frame_id configuration. In the simple_planning_simulator.launch.py, the node that outputs the map
->odom
tf, which is usually estimated by the localization module (e.g. NDT), will be launched as well. Since the tf output by this simulator module is an ideal value, odom
->map
will always be 0.
Ego vehicle pitch angle is calculated in the following manner.
NOTE: driving against the line direction (as depicted in the image's bottom row) is not supported and is shown only for illustration purposes.
"},{"location":"simulator/simple_planning_simulator/#error-detection-and-handling","title":"Error detection and handling","text":"The only validation on inputs being done is testing for a valid vehicle model type.
"},{"location":"simulator/simple_planning_simulator/#security-considerations","title":"Security considerations","text":""},{"location":"simulator/simple_planning_simulator/#references-external-links","title":"References / External links","text":"This is originally developed in the Autoware.AI. See the link below.
https://github.com/Autoware-AI/simulation/tree/master/wf_simulator
"},{"location":"simulator/simple_planning_simulator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"This package is used to convert autoware_msgs
to autoware_auto_msgs
.
As we transition from autoware_auto_msgs
to autoware_msgs
, we wanted to provide flexibility and compatibility for users who are still using autoware_auto_msgs
.
This adapter package allows users to easily convert messages between the two formats.
"},{"location":"system/autoware_auto_msgs_adapter/#capabilities","title":"Capabilities","text":"The autoware_auto_msgs_adapter
package provides the following capabilities:
autoware_msgs
messages to autoware_auto_msgs
messages. Customize the adapter configuration by replicating and editing the autoware_auto_msgs_adapter_control.param.yaml
file located in the autoware_auto_msgs_adapter/config
directory. Example configuration:
/**:\nros__parameters:\nmsg_type_target: \"autoware_auto_control_msgs/msg/AckermannControlCommand\"\ntopic_name_source: \"/control/command/control_cmd\"\ntopic_name_target: \"/control/command/control_cmd_auto\"\n
Set the msg_type_target
parameter to the desired target message type from autoware_auto_msgs
.
Make sure that the msg_type_target
has the correspondence in either:
AutowareAutoMsgsAdapterNode::create_adapter_map()
method. (If this package is maintained correctly, they should match each other.)
Launch the adapter node by any of the following methods:
"},{"location":"system/autoware_auto_msgs_adapter/#ros2-launch","title":"ros2 launch
","text":"ros2 launch autoware_auto_msgs_adapter autoware_auto_msgs_adapter.launch.xml param_path:='full_path_to_param_file'\n
Make sure to set the param_path
argument to the full path of the parameter file.
Alternatively,
ros2 run
","text":"ros2 run autoware_auto_msgs_adapter autoware_auto_msgs_adapter_exe --ros-args --params-file 'full_path_to_param_file'\n
Make sure to set the param_path
argument to the full path of the parameter file.
The entry point for the adapter executable is created with RCLCPP_COMPONENTS_REGISTER_NODE
in the autoware_auto_msgs_adapter_core.cpp file.
This allows it to be launched as a component or as a standalone node.
In the AutowareAutoMsgsAdapterNode
constructor, the adapter is selected by the type string provided in the configuration file. The adapter is then initialized with the topic names provided.
The constructors of the adapters are responsible for creating the publisher and subscriber (which makes use of the conversion method).
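As an illustrative sketch (hypothetical names and simplified QoS, not the package's actual source), such an adapter could look like:

#include <functional>
#include <memory>
#include <string>
#include <utility>

#include <rclcpp/rclcpp.hpp>

// Subscribe to the source topic, convert each message, republish on the target.
template <class SourceT, class TargetT>
class Adapter
{
public:
  Adapter(
    rclcpp::Node & node, const std::string & topic_source, const std::string & topic_target,
    std::function<TargetT(const SourceT &)> convert)
  : convert_(std::move(convert))
  {
    pub_ = node.create_publisher<TargetT>(topic_target, rclcpp::QoS{1});
    sub_ = node.create_subscription<SourceT>(
      topic_source, rclcpp::QoS{1},
      [this](const typename SourceT::SharedPtr msg) { pub_->publish(convert_(*msg)); });
  }

private:
  std::function<TargetT(const SourceT &)> convert_;
  typename rclcpp::Publisher<TargetT>::SharedPtr pub_;
  typename rclcpp::Subscription<SourceT>::SharedPtr sub_;
};

With this shape, AutowareAutoMsgsAdapterNode::create_adapter_map() only has to map each msg_type_target string to a factory that instantiates the matching adapter.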
"},{"location":"system/autoware_auto_msgs_adapter/#adding-a-new-message-pair","title":"Adding a new message pair","text":"To add a new message pair,
AutowareAutoMsgsAdapterNode::create_adapter_map()
method of the adapter node, and add the new message type to the definitions:autoware_auto_msgs_adapter:properties:msg_type_target:enum
section of the parameter schema. There is no need to modify the CMakeLists.txt
file, as it will automatically detect the new test file. Also make sure to test the new adapter with:
colcon test --event-handlers console_cohesion+ --packages-select autoware_auto_msgs_adapter\n
"},{"location":"system/bluetooth_monitor/","title":"bluetooth_monitor","text":""},{"location":"system/bluetooth_monitor/#macro-rendering-error","title":"Macro Rendering Error","text":"File: system/bluetooth_monitor/README.md
"},{"location":"system/component_state_monitor/","title":"component_state_monitor","text":""},{"location":"system/component_state_monitor/#component_state_monitor","title":"component_state_monitor","text":"The component state monitor checks the state of each component using topic state monitor. This is an implementation for backward compatibility with the AD service state monitor. It will be replaced in the future using a diagnostics tree.
"},{"location":"system/default_ad_api/","title":"default_ad_api","text":""},{"location":"system/default_ad_api/#default_ad_api","title":"default_ad_api","text":""},{"location":"system/default_ad_api/#features","title":"Features","text":"This package is a default implementation AD API.
This is a sample of calling the API using HTTP.
"},{"location":"system/default_ad_api/#guide-message-script","title":"Guide message script","text":"This is a debug script to check the conditions for transition to autonomous mode.
$ ros2 run default_ad_api guide.py\n\nThe vehicle pose is not estimated. Please set an initial pose or check GNSS.\nThe route is not set. Please set a goal pose.\nThe topic rate error is detected. Please check [control,planning] components.\nThe vehicle is ready. Please change the operation mode to autonomous.\nThe vehicle is driving autonomously.\nThe vehicle has reached the goal of the route. Please reset a route.\n
"},{"location":"system/default_ad_api/document/autoware-state/","title":"Autoware state compatibility","text":""},{"location":"system/default_ad_api/document/autoware-state/#autoware-state-compatibility","title":"Autoware state compatibility","text":""},{"location":"system/default_ad_api/document/autoware-state/#overview","title":"Overview","text":"Since /autoware/state
was so widely used, default_ad_api creates it from the states of AD API for backwards compatibility. The diagnostic checks that ad_service_state_monitor used to perform have been replaced by component_state_monitor. The service /autoware/shutdown
to change autoware state to finalizing is also supported for compatibility.
This is the correspondence between AD API states and autoware states. The launch state is the data that default_ad_api node holds internally.
"},{"location":"system/default_ad_api/document/fail-safe/","title":"Fail-safe API","text":""},{"location":"system/default_ad_api/document/fail-safe/#fail-safe-api","title":"Fail-safe API","text":""},{"location":"system/default_ad_api/document/fail-safe/#overview","title":"Overview","text":"The fail-safe API simply relays the MRM state. See the autoware-documentation for AD API specifications.
"},{"location":"system/default_ad_api/document/interface/","title":"Interface API","text":""},{"location":"system/default_ad_api/document/interface/#interface-api","title":"Interface API","text":""},{"location":"system/default_ad_api/document/interface/#overview","title":"Overview","text":"The interface API simply returns a version number. See the autoware-documentation for AD API specifications.
"},{"location":"system/default_ad_api/document/localization/","title":"Localization API","text":""},{"location":"system/default_ad_api/document/localization/#localization-api","title":"Localization API","text":""},{"location":"system/default_ad_api/document/localization/#overview","title":"Overview","text":"Unify the location initialization method to the service. The topic /initialpose
from rviz is now only subscribed to by the adapter node and converted to an API call. This API call is forwarded to the pose initializer node so it can centralize the state of pose initialization. For other nodes that require the initial pose, the pose initializer node publishes it as /initialpose3d
. See the autoware-documentation for AD API specifications.
Provides a hook for when the vehicle starts. It is typically used for announcements that call attention to the surroundings. A pause function is added to the vehicle_cmd_gate, and the API controls it based on whether the vehicle is stopped and a start has been requested. See the autoware-documentation for AD API specifications.
"},{"location":"system/default_ad_api/document/motion/#states","title":"States","text":"The implementation has more detailed state transitions to manage pause state synchronization. The correspondence with the AD API state is as follows.
"},{"location":"system/default_ad_api/document/operation-mode/","title":"Operation mode API","text":""},{"location":"system/default_ad_api/document/operation-mode/#operation-mode-api","title":"Operation mode API","text":""},{"location":"system/default_ad_api/document/operation-mode/#overview","title":"Overview","text":"Introduce operation mode. It handles autoware engage, gate_mode, external_cmd_selector and control_mode abstractly. When the mode is changed, it will be in-transition state, and if the transition completion condition to that mode is not satisfied, it will be returned to the previous mode. Also, currently, the condition for mode change is only WaitingForEngage
in /autoware/state
, and the engage state is shared between modes. After introducing the operation mode, each mode will have a transition available flag. See the autoware-documentation for AD API specifications.
The operation mode has the following state transitions. Disabling autoware control, or changing the operation mode while autoware control is disabled, takes effect immediately. Otherwise, enabling autoware control, or changing the operation mode while autoware control is enabled, puts the system into a transition state. If the mode change completion condition is not satisfied within the timeout while in the transition state, the system returns to the previous mode.
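A hedged sketch of this timeout rule (illustrative names, not the actual implementation):

#include <chrono>

enum class OperationMode { STOP, AUTONOMOUS, LOCAL, REMOTE };

struct Transition
{
  OperationMode previous_mode;
  OperationMode requested_mode;
  std::chrono::steady_clock::time_point started_at;
};

// Resolve an in-progress mode change: complete it when the condition holds,
// fall back to the previous mode when the timeout expires, otherwise keep
// waiting in the transition state.
OperationMode resolve(const Transition & t, bool condition_satisfied, std::chrono::seconds timeout)
{
  if (condition_satisfied) {
    return t.requested_mode;
  }
  if (std::chrono::steady_clock::now() - t.started_at > timeout) {
    return t.previous_mode;
  }
  return t.requested_mode;  // still in transition
}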
"},{"location":"system/default_ad_api/document/operation-mode/#compatibility","title":"Compatibility","text":"Ideally, vehicle_cmd_gate and external_cmd_selector should be merged so that the operation mode can be handled directly. However, currently the operation mode transition manager performs the following conversions to match the implementation. The transition manager monitors each topic in the previous interface and synchronizes the operation mode when it changes. When the operation mode is changed with the new interface, the transition manager disables synchronization and changes the operation mode using the previous interface.
"},{"location":"system/default_ad_api/document/routing/","title":"Routing API","text":""},{"location":"system/default_ad_api/document/routing/#routing-api","title":"Routing API","text":""},{"location":"system/default_ad_api/document/routing/#overview","title":"Overview","text":"Unify the route setting method to the service. This API supports two waypoint formats, poses and lanelet segments. The goal and checkpoint topics from rviz is only subscribed to by adapter node and converted to API call. This API call is forwarded to the mission planner node so it can centralize the state of routing. For other nodes that require route, mission planner node publishes as /planning/mission_planning/route
. See the autoware-documentation for AD API specifications.
This node makes it easy to use the localization AD API from RViz. When an initial pose topic is received, it calls the localization initialize API. This node depends on the map height fitter library. See here for more details.
Interface Local Name Global Name Description Subscription initialpose /initialpose The pose for localization initialization. Client - /api/localization/initialize The localization initialize API."},{"location":"system/default_ad_api_helpers/ad_api_adaptors/#routing_adaptor","title":"routing_adaptor","text":"This node makes it easy to use the routing AD API from RViz. When a goal pose topic is received, reset the waypoints and call the API. When a waypoint pose topic is received, append it to the end of the waypoints to call the API. The clear API is called automatically before setting the route.
Interface Local Name Global Name Description Subscription - /api/routing/state The state of the routing API. Subscription ~/input/fixed_goal /planning/mission_planning/goal The goal pose of route. Disable goal modification. Subscription ~/input/rough_goal /rviz/routing/rough_goal The goal pose of route. Enable goal modification. Subscription ~/input/reroute /rviz/routing/reroute The goal pose of reroute. Subscription ~/input/waypoint /planning/mission_planning/checkpoint The waypoint pose of route. Client - /api/routing/clear_route The route clear API. Client - /api/routing/set_route_points The route points set API. Client - /api/routing/change_route_points The route points change API."},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/","title":"automatic_pose_initializer","text":""},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/#automatic_pose_initializer","title":"automatic_pose_initializer","text":""},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/#automatic_pose_initializer_1","title":"automatic_pose_initializer","text":"This node calls localization initialize API when the localization initialization state is uninitialized. Since the API uses GNSS pose when no pose is specified, initialization using GNSS can be performed automatically.
Interface Local Name Global Name Description Subscription - /api/localization/initialization_state The localization initialization state API. Client - /api/localization/initialize The localization initialize API."},{"location":"system/diagnostic_graph_aggregator/","title":"diagnostic_graph_aggregator","text":""},{"location":"system/diagnostic_graph_aggregator/#diagnostic_graph_aggregator","title":"diagnostic_graph_aggregator","text":""},{"location":"system/diagnostic_graph_aggregator/#overview","title":"Overview","text":"The diagnostic graph aggregator node subscribes to the diagnostic array and publishes an aggregated diagnostic graph. As shown in the diagram below, this node introduces extra diagnostic statuses for intermediate functional units. The diagnostic status dependencies form a directed acyclic graph (DAG).
"},{"location":"system/diagnostic_graph_aggregator/#diagnostics-graph-message","title":"Diagnostics graph message","text":"The diagnostics graph that this node outputs is a combination of diagnostic status and connections between them. This graph consists of an array of diagnostic nodes, and each node has a status and links. This link contains an index indicating the position of the node in the graph. Therefore, the graph can be reconstructed from the array of nodes using links. The following is an example of a message representing the graph in the overview section.
"},{"location":"system/diagnostic_graph_aggregator/#operation-mode-availability","title":"Operation mode availability","text":"For MRM, this node publishes the status of the top-level functional units in the dedicated message. Therefore, the diagnostic graph must contain functional units with the following names. This feature breaks the generality of the graph and may be changed to a plugin or another node in the future.
/diagnostics
diagnostic_msgs/msg/DiagnosticArray
Diagnostics input. publisher /diagnostics_graph
tier4_system_msgs/msg/DiagnosticGraph
Diagnostics graph. publisher /system/operation_mode/availability
tier4_system_msgs/msg/OperationModeAvailability
mode availability."},{"location":"system/diagnostic_graph_aggregator/#parameters","title":"Parameters","text":"Parameter Name Data Type Description graph_file
string
Path of the config file. rate
double
Rate of aggregation and topic publication. input_qos_depth
uint
QoS depth of input array topic. graph_qos_depth
uint
QoS depth of output graph topic. use_operation_mode_availability
bool
Use operation mode availability publisher. use_debug_mode
bool
Use debug output to stdout."},{"location":"system/diagnostic_graph_aggregator/#examples","title":"Examples","text":"ros2 launch diagnostic_graph_aggregator example.launch.xml\n
"},{"location":"system/diagnostic_graph_aggregator/#graph-file-format","title":"Graph file format","text":"And is a node that is evaluated as the AND of the input nodes.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/and/#format","title":"Format","text":"Name Type Required Description type string yes Specifyand
when using this object. name string yes Name of diagnostic status. list List<Diag|Unit> yes List of input node references."},{"location":"system/diagnostic_graph_aggregator/doc/format/diag/","title":"Diag","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/diag/#diag","title":"Diag","text":"Diag is a node that refers to a source diagnostic.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/diag/#format","title":"Format","text":"Name Type Required Description type string yes Specifydiag
when using this object. diag string yes Name of diagnostic status."},{"location":"system/diagnostic_graph_aggregator/doc/format/graph-file/","title":"GraphFile","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/graph-file/#graphfile","title":"GraphFile","text":"GraphFile is the top level object that makes up the configuration file.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/graph-file/#format","title":"Format","text":"Name Type Required Description files List<Path> no Paths of the files to include. nodes List<Node> no Nodes of the diagnostic graph."},{"location":"system/diagnostic_graph_aggregator/doc/format/node/","title":"Node","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/node/#node","title":"Node","text":"Node is a base object that makes up the diagnostic graph.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/node/#format","title":"Format","text":"Name Type Required Description type string yes Node type. See derived objects for details."},{"location":"system/diagnostic_graph_aggregator/doc/format/or/","title":"Unit","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/or/#unit","title":"Unit","text":"Or is a node that is evaluated as the OR of the input nodes.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/or/#format","title":"Format","text":"Name Type Required Description type string yes Specifyor
when using this object. name string yes Name of diagnostic status. list List<Diag|Unit> yes List of input node references."},{"location":"system/diagnostic_graph_aggregator/doc/format/path/","title":"Path","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/path/#path","title":"Path","text":"Path is an object that indicates the path of the file to include.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/path/#format","title":"Format","text":"Name Type Required Description package string yes Package name. path string yes Relative path in the package."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/","title":"Unit","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/#unit","title":"Unit","text":"Diag is a node that refers to a functional unit.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/#format","title":"Format","text":"Name Type Required Description type string yes Specifyunit
when using this object. name string yes Name of diagnostic status."},{"location":"system/dummy_diag_publisher/","title":"dummy_diag_publisher","text":""},{"location":"system/dummy_diag_publisher/#dummy_diag_publisher","title":"dummy_diag_publisher","text":""},{"location":"system/dummy_diag_publisher/#purpose","title":"Purpose","text":"This package outputs a dummy diagnostic data for debugging and developing.
"},{"location":"system/dummy_diag_publisher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/dummy_diag_publisher/#outputs","title":"Outputs","text":"Name Type Description/diagnostics
diagnostic_msgs::msg::DiagnosticArray
Diagnostics outputs"},{"location":"system/dummy_diag_publisher/#parameters","title":"Parameters","text":""},{"location":"system/dummy_diag_publisher/#node-parameters","title":"Node Parameters","text":"The parameter DIAGNOSTIC_NAME
must be a name that exists in the parameter YAML file. If the parameter status
is given from a command line, the parameter is_active
is automatically set to true
.
update_rate
int 10
Timer callback period [Hz] false DIAGNOSTIC_NAME.is_active
bool true
Force update or not true DIAGNOSTIC_NAME.status
string \"OK\"
diag status set by dummy diag publisher true"},{"location":"system/dummy_diag_publisher/#yaml-format-for-dummy_diag_publisher","title":"YAML format for dummy_diag_publisher","text":"If the value is default
, the default value will be set.
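For example, a parameter file using the keys listed below might look like this (a hedged sketch; velodyne_connection is the diagnostic name used in the reconfigure example later on this page):

/**:
  ros__parameters:
    required_diags:
      velodyne_connection:
        is_active: true
        status: "OK"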
required_diags.DIAGNOSTIC_NAME.is_active
bool true
Force update or not required_diags.DIAGNOSTIC_NAME.status
string \"OK\"
diag status set by dummy diag publisher"},{"location":"system/dummy_diag_publisher/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"system/dummy_diag_publisher/#usage","title":"Usage","text":""},{"location":"system/dummy_diag_publisher/#launch","title":"launch","text":"ros2 launch dummy_diag_publisher dummy_diag_publisher.launch.xml\n
"},{"location":"system/dummy_diag_publisher/#reconfigure","title":"reconfigure","text":"ros2 param set /dummy_diag_publisher velodyne_connection.status \"Warn\"\nros2 param set /dummy_diag_publisher velodyne_connection.is_active true\n
"},{"location":"system/dummy_infrastructure/","title":"dummy_infrastructure","text":""},{"location":"system/dummy_infrastructure/#dummy_infrastructure","title":"dummy_infrastructure","text":"This is a debug node for infrastructure communication.
"},{"location":"system/dummy_infrastructure/#usage","title":"Usage","text":"ros2 launch dummy_infrastructure dummy_infrastructure.launch.xml\nros2 run rqt_reconfigure rqt_reconfigure\n
"},{"location":"system/dummy_infrastructure/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/dummy_infrastructure/#inputs","title":"Inputs","text":"Name Type Description ~/input/command_array
tier4_v2x_msgs::msg::InfrastructureCommandArray
Infrastructure command"},{"location":"system/dummy_infrastructure/#outputs","title":"Outputs","text":"Name Type Description ~/output/state_array
tier4_v2x_msgs::msg::VirtualTrafficLightStateArray
Virtual traffic light array"},{"location":"system/dummy_infrastructure/#parameters","title":"Parameters","text":""},{"location":"system/dummy_infrastructure/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate
int 10
Timer callback period [Hz] use_first_command
bool true
Consider instrument id or not instrument_id
string `` Used as command id approval
bool false
set approval field to ros param is_finalized
bool false
Stop at stop_line if finalization isn't completed"},{"location":"system/dummy_infrastructure/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"system/duplicated_node_checker/","title":"Duplicated Node Checker","text":""},{"location":"system/duplicated_node_checker/#duplicated-node-checker","title":"Duplicated Node Checker","text":""},{"location":"system/duplicated_node_checker/#purpose","title":"Purpose","text":"This node monitors the ROS 2 environments and detect duplication of node names in the environment. The result is published as diagnostics.
"},{"location":"system/duplicated_node_checker/#standalone-startup","title":"Standalone Startup","text":"ros2 launch duplicated_node_checker duplicated_node_checker.launch.xml\n
"},{"location":"system/duplicated_node_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The types of topic status and corresponding diagnostic status are following.
Duplication status Diagnostic status Description OK
OK No duplication is detected Duplicated Detected
ERROR Duplication is detected"},{"location":"system/duplicated_node_checker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/duplicated_node_checker/#output","title":"Output","text":"Name Type Description /diagnostics
diagnostic_msgs/DiagnosticArray
Diagnostics outputs"},{"location":"system/duplicated_node_checker/#parameters","title":"Parameters","text":"Name Type Description Default Range update_rate float The scanning and update frequency of the checker. 10 >2"},{"location":"system/duplicated_node_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"system/emergency_handler/","title":"emergency_handler","text":""},{"location":"system/emergency_handler/#emergency_handler","title":"emergency_handler","text":""},{"location":"system/emergency_handler/#purpose","title":"Purpose","text":"Emergency Handler is a node to select proper MRM from from system failure state contained in HazardStatus.
"},{"location":"system/emergency_handler/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/emergency_handler/#state-transitions","title":"State Transitions","text":""},{"location":"system/emergency_handler/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/emergency_handler/#input","title":"Input","text":"Name Type Description/system/emergency/hazard_status
autoware_auto_system_msgs::msg::HazardStatusStamped
Used to select proper MRM from system failure state contained in HazardStatus /control/vehicle_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
Used as a reference when generating the Emergency Control Command /localization/kinematic_state
nav_msgs::msg::Odometry
Used to decide whether vehicle is stopped or not /vehicle/status/control_mode
autoware_auto_vehicle_msgs::msg::ControlModeReport
Used to check vehicle mode: autonomous or manual /system/api/mrm/comfortable_stop/status
tier4_system_msgs::msg::MrmBehaviorStatus
Used to check if MRM comfortable stop operation is available /system/api/mrm/emergency_stop/status
tier4_system_msgs::msg::MrmBehaviorStatus
Used to check if MRM emergency stop operation is available"},{"location":"system/emergency_handler/#output","title":"Output","text":"Name Type Description /system/emergency/shift_cmd
autoware_auto_vehicle_msgs::msg::GearCommand
Required to execute proper MRM (send gear cmd) /system/emergency/hazard_cmd
autoware_auto_vehicle_msgs::msg::HazardLightsCommand
Required to execute proper MRM (send turn signal cmd) /api/fail_safe/mrm_state
autoware_adapi_v1_msgs::msg::MrmState
Inform MRM execution state and selected MRM behavior /system/api/mrm/comfortable_stop/operate
tier4_system_msgs::srv::OperateMrm
Execution order for MRM comfortable stop /system/api/mrm/emergency_stop/operate
tier4_system_msgs::srv::OperateMrm
Execution order for MRM emergency stop"},{"location":"system/emergency_handler/#parameters","title":"Parameters","text":"Name Type Description Default Range update_rate integer Timer callback period. 10 N/A timeout_hazard_status float If the input hazard_status
topic cannot be received for more than timeout_hazard_status
, vehicle will make an emergency stop. 0.5 N/A timeout_takeover_request float Transition to MRM_OPERATING if the time from the last takeover request exceeds timeout_takeover_request
. 10.0 N/A use_takeover_request boolean If this parameter is true, the handler will record the time and make a takeover request to the driver when an emergency state occurs. false N/A use_parking_after_stopped boolean If this parameter is true, it will publish the PARKING shift command. false N/A use_comfortable_stop boolean If this parameter is true, operate the comfortable stop when latent faults occur. false N/A turning_hazard_on.emergency boolean If this parameter is true, hazard lamps will be turned on during the emergency state. true N/A"},{"location":"system/emergency_handler/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"system/mrm_comfortable_stop_operator/","title":"mrm_comfortable_stop_operator","text":""},{"location":"system/mrm_comfortable_stop_operator/#mrm_comfortable_stop_operator","title":"mrm_comfortable_stop_operator","text":""},{"location":"system/mrm_comfortable_stop_operator/#purpose","title":"Purpose","text":"MRM comfortable stop operator is a node that generates comfortable stop commands according to the comfortable stop MRM order.
"},{"location":"system/mrm_comfortable_stop_operator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_comfortable_stop_operator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_comfortable_stop_operator/#input","title":"Input","text":"Name Type Description~/input/mrm/comfortable_stop/operate
tier4_system_msgs::srv::OperateMrm
MRM execution order"},{"location":"system/mrm_comfortable_stop_operator/#output","title":"Output","text":"Name Type Description ~/output/mrm/comfortable_stop/status
tier4_system_msgs::msg::MrmBehaviorStatus
MRM execution status ~/output/velocity_limit
tier4_planning_msgs::msg::VelocityLimit
Velocity limit command ~/output/velocity_limit/clear
tier4_planning_msgs::msg::VelocityLimitClearCommand
Velocity limit clear command"},{"location":"system/mrm_comfortable_stop_operator/#parameters","title":"Parameters","text":""},{"location":"system/mrm_comfortable_stop_operator/#node-parameters","title":"Node Parameters","text":"Name Type Default value Explanation update_rate int 10
Timer callback frequency [Hz]"},{"location":"system/mrm_comfortable_stop_operator/#core-parameters","title":"Core Parameters","text":"Name Type Default value Explanation min_acceleration double -1.0
Minimum acceleration for comfortable stop [m/s^2] max_jerk double 0.3
Maximum jerk for comfortable stop [m/s^3] min_jerk double -0.3
Minimum jerk for comfortable stop [m/s^3]"},{"location":"system/mrm_comfortable_stop_operator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"system/mrm_emergency_stop_operator/","title":"mrm_emergency_stop_operator","text":""},{"location":"system/mrm_emergency_stop_operator/#mrm_emergency_stop_operator","title":"mrm_emergency_stop_operator","text":""},{"location":"system/mrm_emergency_stop_operator/#purpose","title":"Purpose","text":"MRM emergency stop operator is a node that generates emergency stop commands according to the emergency stop MRM order.
"},{"location":"system/mrm_emergency_stop_operator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_emergency_stop_operator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_emergency_stop_operator/#input","title":"Input","text":"Name Type Description~/input/mrm/emergency_stop/operate
tier4_system_msgs::srv::OperateMrm
MRM execution order ~/input/control/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
Control command output from the last node of the control component. Used for the initial value of the emergency stop command."},{"location":"system/mrm_emergency_stop_operator/#output","title":"Output","text":"Name Type Description ~/output/mrm/emergency_stop/status
tier4_system_msgs::msg::MrmBehaviorStatus
MRM execution status ~/output/mrm/emergency_stop/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
Emergency stop command"},{"location":"system/mrm_emergency_stop_operator/#parameters","title":"Parameters","text":""},{"location":"system/mrm_emergency_stop_operator/#node-parameters","title":"Node Parameters","text":"Name Type Default value Explanation update_rate int 30
Timer callback frequency [Hz]"},{"location":"system/mrm_emergency_stop_operator/#core-parameters","title":"Core Parameters","text":"Name Type Default value Explanation target_acceleration double -2.5
Target acceleration for emergency stop [m/s^2] target_jerk double -1.5
Target jerk for emergency stop [m/s^3]"},{"location":"system/mrm_emergency_stop_operator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"system/system_error_monitor/","title":"system_error_monitor","text":""},{"location":"system/system_error_monitor/#system_error_monitor","title":"system_error_monitor","text":""},{"location":"system/system_error_monitor/#purpose","title":"Purpose","text":"Autoware Error Monitor has two main functions.
/diagnostics_agg
diagnostic_msgs::msg::DiagnosticArray
Diagnostic information aggregated based on the diagnostic_aggregator settings; used to judge the system error state. /autoware/state
autoware_auto_system_msgs::msg::AutowareState
Required to ignore error during Route, Planning and Finalizing. /control/current_gate_mode
tier4_control_msgs::msg::GateMode
Required to select the appropriate module from autonomous_driving
or external_control
/vehicle/control_mode
autoware_auto_vehicle_msgs::msg::ControlModeReport
Required to not hold emergency during manual driving"},{"location":"system/system_error_monitor/#output","title":"Output","text":"Name Type Description /system/emergency/hazard_status
autoware_auto_system_msgs::msg::HazardStatusStamped
HazardStatus contains system hazard level, emergency hold status and failure details /diagnostics_err
diagnostic_msgs::msg::DiagnosticArray
This has the same contents as HazardStatus. This is used for visualization"},{"location":"system/system_error_monitor/#parameters","title":"Parameters","text":""},{"location":"system/system_error_monitor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation ignore_missing_diagnostics
bool false
If this parameter is true, required diagnostics that have not been received are ignored; if false, they are treated as errors. add_leaf_diagnostics
bool true
Required to use children diagnostics. diag_timeout_sec
double 1.0
(sec) If required diagnostic is not received for a diag_timeout_sec
, the diagnostic state becomes STALE. data_ready_timeout
double 30.0
If input topics required for system_error_monitor are not available for data_ready_timeout
seconds, autoware_state will transition to emergency state. data_heartbeat_timeout
double 1.0
If input topics required for system_error_monitor are no longer received for data_heartbeat_timeout
seconds, autoware_state will translate to emergency state."},{"location":"system/system_error_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation hazard_recovery_timeout
double 5.0
The vehicle can recover to normal driving if emergencies disappear during hazard_recovery_timeout
. use_emergency_hold
bool false
If it is false, the vehicle will return to normal as soon as emergencies disappear. use_emergency_hold_in_manual_driving
bool false
If this parameter is turned off, emergencies will be ignored during manual driving. emergency_hazard_level
int 2
If hazard_level is more than emergency_hazard_level, autoware state will transition to emergency state"},{"location":"system/system_error_monitor/#yaml-format-for-system_error_monitor","title":"YAML format for system_error_monitor","text":"The parameter key should be filled with the hierarchical diagnostics output by diagnostic_aggregator. Parameters prefixed with required_modules.autonomous_driving
are for autonomous driving. Parameters with the required_modules.remote_control
prefix are for remote control. If the value is default
, the default value will be set.
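A hedged example of this layout (the diagnostic name is illustrative), using the keys from the table below:

/**:
  ros__parameters:
    required_modules:
      autonomous_driving:
        some_diagnostic_name:
          sf_at: "none"
          lf_at: "warn"
          spf_at: "error"
          auto_recovery: "true"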
required_modules.autonomous_driving.DIAGNOSTIC_NAME.sf_at
string \"none\"
Diagnostic level where it becomes Safe Fault. Available options are \"none\"
, \"warn\"
, \"error\"
. required_modules.autonomous_driving.DIAGNOSTIC_NAME.lf_at
string \"warn\"
Diagnostic level where it becomes Latent Fault. Available options are \"none\"
, \"warn\"
, \"error\"
. required_modules.autonomous_driving.DIAGNOSTIC_NAME.spf_at
string \"error\"
Diagnostic level where it becomes Single Point Fault. Available options are \"none\"
, \"warn\"
, \"error\"
. required_modules.autonomous_driving.DIAGNOSTIC_NAME.auto_recovery
string \"true\"
Determines whether the system will automatically recover when it recovers from an error. required_modules.remote_control.DIAGNOSTIC_NAME.sf_at
string \"none\"
Diagnostic level where it becomes Safe Fault. Available options are \"none\"
, \"warn\"
, \"error\"
. required_modules.remote_control.DIAGNOSTIC_NAME.lf_at
string \"warn\"
Diagnostic level where it becomes Latent Fault. Available options are \"none\"
, \"warn\"
, \"error\"
. required_modules.remote_control.DIAGNOSTIC_NAME.spf_at
string \"error\"
Diagnostic level where it becomes Single Point Fault. Available options are \"none\"
, \"warn\"
, \"error\"
. required_modules.remote_control.DIAGNOSTIC_NAME.auto_recovery
string \"true\"
Determines whether the system will automatically recover when it recovers from an error."},{"location":"system/system_error_monitor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"system/system_monitor/","title":"System Monitor for Autoware","text":""},{"location":"system/system_monitor/#system-monitor-for-autoware","title":"System Monitor for Autoware","text":"Further improvement of system monitor functionality for Autoware.
"},{"location":"system/system_monitor/#description","title":"Description","text":"This package provides the following nodes for monitoring system:
Use colcon build and launch in the same way as other packages.
colcon build\nsource install/setup.bash\nros2 launch system_monitor system_monitor.launch.xml\n
CPU and GPU monitoring method differs depending on platform. CMake automatically chooses source to be built according to build environment. If you build this package on intel platform, CPU monitor and GPU monitor which run on intel platform are built.
"},{"location":"system/system_monitor/#ros-topics-published-by-system-monitor","title":"ROS topics published by system monitor","text":"Every topic is published in 1 minute interval.
[Usage] \u2713\uff1aSupported, -\uff1aNot supported
Node Message Intel arm64(tegra) arm64(raspi) Notes CPU Monitor CPU Temperature \u2713 \u2713 \u2713 CPU Usage \u2713 \u2713 \u2713 CPU Load Average \u2713 \u2713 \u2713 CPU Thermal Throttling \u2713 - \u2713 CPU Frequency \u2713 \u2713 \u2713 Notification of frequency only, normally error not generated. HDD Monitor HDD Temperature \u2713 \u2713 \u2713 HDD PowerOnHours \u2713 \u2713 \u2713 HDD TotalDataWritten \u2713 \u2713 \u2713 HDD RecoveredError \u2713 \u2713 \u2713 HDD Usage \u2713 \u2713 \u2713 HDD ReadDataRate \u2713 \u2713 \u2713 HDD WriteDataRate \u2713 \u2713 \u2713 HDD ReadIOPS \u2713 \u2713 \u2713 HDD WriteIOPS \u2713 \u2713 \u2713 HDD Connection \u2713 \u2713 \u2713 Memory Monitor Memory Usage \u2713 \u2713 \u2713 Net Monitor Network Connection \u2713 \u2713 \u2713 Network Usage \u2713 \u2713 \u2713 Notification of usage only, normally error not generated. Network CRC Error \u2713 \u2713 \u2713 Warning occurs when the number of CRC errors in the period reaches the threshold value. The number of CRC errors that occur is the same as the value that can be confirmed with the ip command. IP Packet Reassembles Failed \u2713 \u2713 \u2713 NTP Monitor NTP Offset \u2713 \u2713 \u2713 Process Monitor Tasks Summary \u2713 \u2713 \u2713 High-load Proc[0-9] \u2713 \u2713 \u2713 High-mem Proc[0-9] \u2713 \u2713 \u2713 GPU Monitor GPU Temperature \u2713 \u2713 - GPU Usage \u2713 \u2713 - GPU Memory Usage \u2713 - - GPU Thermal Throttling \u2713 - - GPU Frequency \u2713 \u2713 - For Intel platform, monitor whether current GPU clock is supported by the GPU. Voltage Monitor CMOS Battery Status \u2713 - - Battery Health for RTC and BIOS -"},{"location":"system/system_monitor/#ros-parameters","title":"ROS parameters","text":"See ROS parameters.
"},{"location":"system/system_monitor/#notes","title":"Notes","text":""},{"location":"system/system_monitor/#cpu-monitor-for-intel-platform","title":"CPU monitor for intel platform","text":"Thermal throttling event can be monitored by reading contents of MSR(Model Specific Register), and accessing MSR is only allowed for root by default, so this package provides the following approach to minimize security risks as much as possible:
Create a user to run 'msr_reader'.
sudo adduser <username>\n
Load kernel module 'msr' into your target system. The path '/dev/cpu/CPUNUM/msr' appears.
sudo modprobe msr\n
Allow user to access MSR with read-only privilege using the Access Control List (ACL).
sudo setfacl -m u:<username>:r /dev/cpu/*/msr\n
Assign capability to 'msr_reader' since msr kernel module requires rawio capability.
sudo setcap cap_sys_rawio=ep install/system_monitor/lib/system_monitor/msr_reader\n
Run 'msr_reader' as the user you created, and run system_monitor as a generic user.
su <username>\ninstall/system_monitor/lib/system_monitor/msr_reader\n
msr_reader
"},{"location":"system/system_monitor/#hdd-monitor","title":"HDD Monitor","text":"Generally, S.M.A.R.T. information is used to monitor HDD temperature and life of HDD, and normally accessing disk device node is allowed for root user or disk group. As with the CPU monitor, this package provides an approach to minimize security risks as much as possible:
Create a user to run 'hdd_reader'.
sudo adduser <username>\n
Add user to the disk group.
sudo usermod -a -G disk <username>\n
Assign capabilities to 'hdd_reader' since SCSI kernel module requires rawio capability to send ATA PASS-THROUGH (12) command and NVMe kernel module requires admin capability to send Admin Command.
sudo setcap 'cap_sys_rawio=ep cap_sys_admin=ep' install/system_monitor/lib/system_monitor/hdd_reader\n
Run 'hdd_reader' as the user you created, and run system_monitor as a generic user.
su <username>\ninstall/system_monitor/lib/system_monitor/hdd_reader\n
hdd_reader
"},{"location":"system/system_monitor/#gpu-monitor-for-intel-platform","title":"GPU Monitor for intel platform","text":"Currently GPU monitor for intel platform only supports NVIDIA GPU whose information can be accessed by NVML API.
Also you need to install CUDA libraries. For installation instructions for CUDA 10.0, see NVIDIA CUDA Installation Guide for Linux.
"},{"location":"system/system_monitor/#voltage-monitor-for-cmos-battery","title":"Voltage monitor for CMOS Battery","text":"Some platforms have built-in batteries for the RTC and CMOS. This node determines the battery status from the result of executing cat /proc/driver/rtc. Also, if lm-sensors is installed, it is possible to use the results. However, the return value of sensors varies depending on the chipset, so it is necessary to set a string to extract the corresponding voltage. It is also necessary to set the voltage for warning and error. For example, if you want a warning when the voltage is less than 2.9V and an error when it is less than 2.7V. The execution result of sensors on the chipset nct6106 is as follows, and \"in7:\" is the voltage of the CMOS battery.
$ sensors\npch_cannonlake-virtual-0\nAdapter: Virtual device\ntemp1: +42.0\u00b0C\n\nnct6106-isa-0a10\nAdapter: ISA adapter\nin0: 728.00 mV (min = +0.00 V, max = +1.74 V)\nin1: 1.01 V (min = +0.00 V, max = +2.04 V)\nin2: 3.34 V (min = +0.00 V, max = +4.08 V)\nin3: 3.34 V (min = +0.00 V, max = +4.08 V)\nin4: 1.07 V (min = +0.00 V, max = +2.04 V)\nin5: 1.05 V (min = +0.00 V, max = +2.04 V)\nin6: 1.67 V (min = +0.00 V, max = +2.04 V)\nin7: 3.06 V (min = +0.00 V, max = +4.08 V)\nin8: 2.10 V (min = +0.00 V, max = +4.08 V)\nfan1: 2789 RPM (min = 0 RPM)\nfan2: 0 RPM (min = 0 RPM)\n
The setting value of voltage_monitor.param.yaml is as follows.
/**:\nros__parameters:\ncmos_battery_warn: 2.90\ncmos_battery_error: 2.70\ncmos_battery_label: \"in7:\"\n
The above values of 2.7V and 2.90V are hypothetical. Depending on the motherboard and chipset, the value may vary. However, if the voltage of the lithium battery drops below 2.7V, it is recommended to replace it. In the above example, the message output to the topic /diagnostics is as follows. If the voltage < 2.9V then:
name: /autoware/system/resource_monitoring/voltage/cmos_battery\n message: Warning\n hardware_id: ''\n values:\n - key: 'voltage_monitor: CMOS Battery Status'\n value: Low Battery\n
If the voltage < 2.7V then:
name: /autoware/system/resource_monitoring/voltage/cmos_battery\n message: Warning\n hardware_id: ''\n values:\n - key: 'voltage_monitor: CMOS Battery Status'\n value: Battery Died\n
If neither, then:
name: /autoware/system/resource_monitoring/voltage/cmos_battery\n message: OK\n hardware_id: ''\n values:\n - key: 'voltage_monitor: CMOS Battery Status'\n value: OK\n
If the CMOS battery voltage drops less than voltage_error or voltage_warn,It will be a warning. If the battery runs out, the RTC will stop working when the power is turned off. However, since the vehicle can run, it is not an error. The vehicle will stop when an error occurs, but there is no need to stop immediately. It can be determined by the value of \"Low Battery\" or \"Battery Died\".
"},{"location":"system/system_monitor/#uml-diagrams","title":"UML diagrams","text":"See Class diagrams. See Sequence diagrams.
"},{"location":"system/system_monitor/docs/class_diagrams/","title":"Class diagrams","text":""},{"location":"system/system_monitor/docs/class_diagrams/#class-diagrams","title":"Class diagrams","text":""},{"location":"system/system_monitor/docs/class_diagrams/#cpu-monitor","title":"CPU Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#hdd-monitor","title":"HDD Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#memory-monitor","title":"Memory Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#net-monitor","title":"Net Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#ntp-monitor","title":"NTP Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#process-monitor","title":"Process Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#gpu-monitor","title":"GPU Monitor","text":""},{"location":"system/system_monitor/docs/hdd_reader/","title":"hdd_reader","text":""},{"location":"system/system_monitor/docs/hdd_reader/#hdd_reader","title":"hdd_reader","text":""},{"location":"system/system_monitor/docs/hdd_reader/#name","title":"Name","text":"hdd_reader - Read S.M.A.R.T. information for monitoring HDD temperature and life of HDD
"},{"location":"system/system_monitor/docs/hdd_reader/#synopsis","title":"Synopsis","text":"hdd_reader [OPTION]
"},{"location":"system/system_monitor/docs/hdd_reader/#description","title":"Description","text":"Read S.M.A.R.T. information for monitoring HDD temperature and life of HDD. This runs as a daemon process and listens to a TCP/IP port (7635 by default).
Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to
Exit status: Returns 0 if OK; non-zero otherwise.
"},{"location":"system/system_monitor/docs/hdd_reader/#notes","title":"Notes","text":"The 'hdd_reader' accesses minimal data enough to get Model number, Serial number, HDD temperature, and life of HDD. This is an approach to limit its functionality, however, the functionality can be expanded for further improvements and considerations in the future.
"},{"location":"system/system_monitor/docs/hdd_reader/#ata","title":"[ATA]","text":"Purpose Name Length Model number, Serial number IDENTIFY DEVICE data 256 words(512 bytes) HDD temperature, life of HDD SMART READ DATA 256 words(512 bytes)For details please see the documents below.
For details please see the documents below.
msr_reader - Read MSR register for monitoring thermal throttling event
"},{"location":"system/system_monitor/docs/msr_reader/#synopsis","title":"Synopsis","text":"msr_reader [OPTION]
"},{"location":"system/system_monitor/docs/msr_reader/#description","title":"Description","text":"Read MSR register for monitoring thermal throttling event. This runs as a daemon process and listens to a TCP/IP port (7634 by default).
Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to
Exit status: Returns 0 if OK; non-zero otherwise.
"},{"location":"system/system_monitor/docs/msr_reader/#notes","title":"Notes","text":"The 'msr_reader' accesses minimal data enough to get thermal throttling event. This is an approach to limit its functionality, however, the functionality can be expanded for further improvements and considerations in the future.
Register Address Name Length 1B1H IA32_PACKAGE_THERM_STATUS 64bitFor details please see the documents below.
cpu_monitor:
Name Type Unit Default Notes temp_warn float DegC 90.0 Generates warning when CPU temperature reaches a specified value or higher. temp_error float DegC 95.0 Generates error when CPU temperature reaches a specified value or higher. usage_warn float %(1e-2) 0.90 Generates warning when CPU usage reaches a specified value or higher and last for usage_warn_count counts. usage_error float %(1e-2) 1.00 Generates error when CPU usage reaches a specified value or higher and last for usage_error_count counts. usage_warn_count int n/a 2 Generates warning when CPU usage reaches usage_warn value or higher and last for a specified counts. usage_error_count int n/a 2 Generates error when CPU usage reaches usage_error value or higher and last for a specified counts. load1_warn float %(1e-2) 0.90 Generates warning when load average 1min reaches a specified value or higher. load5_warn float %(1e-2) 0.80 Generates warning when load average 5min reaches a specified value or higher. msr_reader_port int n/a 7634 Port number to connect to msr_reader."},{"location":"system/system_monitor/docs/ros_parameters/#hdd-monitor","title":"HDD Monitor","text":"hdd_monitor:
\u00a0\u00a0disks:
Name Type Unit Default Notes name string n/a none The disk name to monitor temperature. (e.g. /dev/sda) temp_attribute_id int n/a 0xC2 S.M.A.R.T attribute ID of temperature. temp_warn float DegC 55.0 Generates warning when HDD temperature reaches a specified value or higher. temp_error float DegC 70.0 Generates error when HDD temperature reaches a specified value or higher. power_on_hours_attribute_id int n/a 0x09 S.M.A.R.T attribute ID of power-on hours. power_on_hours_warn int Hour 3000000 Generates warning when HDD power-on hours reaches a specified value or higher. total_data_written_attribute_id int n/a 0xF1 S.M.A.R.T attribute ID of total data written. total_data_written_warn int depends on device 4915200 Generates warning when HDD total data written reaches a specified value or higher. total_data_written_safety_factor int %(1e-2) 0.05 Safety factor of HDD total data written. recovered_error_attribute_id int n/a 0xC3 S.M.A.R.T attribute ID of recovered error. recovered_error_warn int n/a 1 Generates warning when HDD recovered error reaches a specified value or higher. read_data_rate_warn float MB/s 360.0 Generates warning when HDD read data rate reaches a specified value or higher. write_data_rate_warn float MB/s 103.5 Generates warning when HDD write data rate reaches a specified value or higher. read_iops_warn float IOPS 63360.0 Generates warning when HDD read IOPS reaches a specified value or higher. write_iops_warn float IOPS 24120.0 Generates warning when HDD write IOPS reaches a specified value or higher.hdd_monitor:
Name Type Unit Default Notes hdd_reader_port int n/a 7635 Port number to connect to hdd_reader. usage_warn float %(1e-2) 0.95 Generates warning when disk usage reaches a specified value or higher. usage_error float %(1e-2) 0.99 Generates error when disk usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#memory-monitor","title":"Memory Monitor","text":"mem_monitor:
Name Type Unit Default Notes usage_warn float %(1e-2) 0.95 Generates warning when physical memory usage reaches a specified value or higher. usage_error float %(1e-2) 0.99 Generates error when physical memory usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#net-monitor","title":"Net Monitor","text":"net_monitor:
Name Type Unit Default Notes devices list[string] n/a none The name of network interface to monitor. (e.g. eth0, * for all network interfaces) monitor_program string n/a greengrass program name to be monitored by nethogs name. crc_error_check_duration int sec 1 CRC error check duration. crc_error_count_threshold int n/a 1 Generates warning when count of CRC errors during CRC error check duration reaches a specified value or higher. reassembles_failed_check_duration int sec 1 IP packet reassembles failed check duration. reassembles_failed_check_count int n/a 1 Generates warning when count of IP packet reassembles failed during IP packet reassembles failed check duration reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#ntp-monitor","title":"NTP Monitor","text":"ntp_monitor:
Name Type Unit Default Notes server string n/a ntp.ubuntu.com The name of NTP server to synchronize date and time. (e.g. ntp.nict.jp for Japan) offset_warn float sec 0.1 Generates warning when NTP offset reaches a specified value or higher. (default is 100ms) offset_error float sec 5.0 Generates warning when NTP offset reaches a specified value or higher. (default is 5sec)"},{"location":"system/system_monitor/docs/ros_parameters/#process-monitor","title":"Process Monitor","text":"process_monitor:
Name Type Unit Default Notes num_of_procs int n/a 5 The number of processes to generate High-load Proc[0-9] and High-mem Proc[0-9]."},{"location":"system/system_monitor/docs/ros_parameters/#gpu-monitor","title":"GPU Monitor","text":"gpu_monitor:
Name Type Unit Default Notes temp_warn float DegC 90.0 Generates warning when GPU temperature reaches a specified value or higher. temp_error float DegC 95.0 Generates error when GPU temperature reaches a specified value or higher. gpu_usage_warn float %(1e-2) 0.90 Generates warning when GPU usage reaches a specified value or higher. gpu_usage_error float %(1e-2) 1.00 Generates error when GPU usage reaches a specified value or higher. memory_usage_warn float %(1e-2) 0.90 Generates warning when GPU memory usage reaches a specified value or higher. memory_usage_error float %(1e-2) 1.00 Generates error when GPU memory usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#voltage-monitor","title":"Voltage Monitor","text":"voltage_monitor:
Name Type Unit Default Notes cmos_battery_warn float volt 2.9 Generates warning when the voltage of the CMOS battery is lower than this value. cmos_battery_error float volt 2.7 Generates error when the voltage of the CMOS battery is lower than this value. cmos_battery_label string n/a \"\" Voltage string in sensors command outputs. If empty, no voltage will be checked."},{"location":"system/system_monitor/docs/seq_diagrams/","title":"Sequence diagrams","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#sequence-diagrams","title":"Sequence diagrams","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#cpu-monitor","title":"CPU Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#hdd-monitor","title":"HDD Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#memory-monitor","title":"Memory Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#net-monitor","title":"Net Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#ntp-monitor","title":"NTP Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#process-monitor","title":"Process Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#gpu-monitor","title":"GPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/","title":"ROS topics: CPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/#ros-topics-cpu-monitor","title":"ROS topics: CPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-temperature","title":"CPU Temperature","text":"/diagnostics/cpu_monitor: CPU Temperature
[summary]
level message OK OK[values]
key (example) value (example) Package id 0, Core [0-9], thermal_zone[0-9] 50.0 DegC*key: thermal_zone[0-9] for ARM architecture.
"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-usage","title":"CPU Usage","text":"/diagnostics/cpu_monitor: CPU Usage
[summary]
level message OK OK WARN high load ERROR very high load[values]
key value (example) CPU [all,0-9]: status OK / high load / very high load CPU [all,0-9]: usr 2.00% CPU [all,0-9]: nice 0.00% CPU [all,0-9]: sys 1.00% CPU [all,0-9]: idle 97.00%"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-load-average","title":"CPU Load Average","text":"/diagnostics/cpu_monitor: CPU Load Average
[summary]
level message OK OK WARN high load[values]
key value (example) 1min 14.50% 5min 14.55% 15min 9.67%"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-thermal-throttling","title":"CPU Thermal Throttling","text":"Intel and raspi platform only. Tegra platform not supported.
/diagnostics/cpu_monitor: CPU Thermal Throttling
[summary]
level message OK OK ERROR throttling[values for intel platform]
key value (example) CPU [0-9]: Pkg Thermal Status OK / throttling[values for raspi platform]
key value (example) status All clear / Currently throttled / Soft temperature limit active"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-frequency","title":"CPU Frequency","text":"/diagnostics/cpu_monitor: CPU Frequency
[summary]
level message OK OK[values]
key value (example) CPU [0-9]: clock 2879MHz"},{"location":"system/system_monitor/docs/topics_gpu_monitor/","title":"ROS topics: GPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_gpu_monitor/#ros-topics-gpu-monitor","title":"ROS topics: GPU Monitor","text":"Intel and tegra platform only. Raspi platform not supported.
"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-temperature","title":"GPU Temperature","text":"/diagnostics/gpu_monitor: GPU Temperature
[summary]
level message OK OK WARN warm ERROR hot[values]
key (example) value (example) GeForce GTX 1650, thermal_zone[0-9] 46.0 DegC*key: thermal_zone[0-9] for ARM architecture.
"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-usage","title":"GPU Usage","text":"/diagnostics/gpu_monitor: GPU Usage
[summary]
level message OK OK WARN high load ERROR very high load[values]
key value (example) GPU [0-9]: status OK / high load / very high load GPU [0-9]: name GeForce GTX 1650, gpu.[0-9] GPU [0-9]: usage 19.0%*key: gpu.[0-9] for ARM architecture.
"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-memory-usage","title":"GPU Memory Usage","text":"Intel platform only. There is no separate gpu memory in tegra. Both cpu and gpu uses cpu memory.
/diagnostics/gpu_monitor: GPU Memory Usage
[summary]
level message OK OK WARN high load ERROR very high load[values]
key value (example) GPU [0-9]: status OK / high load / very high load GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: usage 13.0% GPU [0-9]: total 3G GPU [0-9]: used 1G GPU [0-9]: free 2G"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-thermal-throttling","title":"GPU Thermal Throttling","text":"Intel platform only. Tegra platform not supported.
/diagnostics/gpu_monitor: GPU Thermal Throttling
[summary]
level message OK OK ERROR throttling[values]
key value (example) GPU [0-9]: status OK / throttling GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: graphics clock 1020 MHz GPU [0-9]: reasons GpuIdle / SwThermalSlowdown etc."},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-frequency","title":"GPU Frequency","text":"/diagnostics/gpu_monitor: GPU Frequency
"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#intel-platform","title":"Intel platform","text":"[summary]
level message OK OK WARN unsupported clock[values]
key value (example) GPU [0-9]: status OK / unsupported clock GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: graphics clock 1020 MHz"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#tegra-platform","title":"Tegra platform","text":"[summary]
level message OK OK[values]
key (example) value (example) GPU 17000000.gv11b: clock 318 MHz"},{"location":"system/system_monitor/docs/topics_hdd_monitor/","title":"ROS topics: HDD Monitor","text":""},{"location":"system/system_monitor/docs/topics_hdd_monitor/#ros-topics-hdd-monitor","title":"ROS topics: HDD Monitor","text":""},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-temperature","title":"HDD Temperature","text":"/diagnostics/hdd_monitor: HDD Temperature
[summary]
level message OK OK WARN hot ERROR critical hot[values]
key value (example) HDD [0-9]: status OK / hot / critical hot HDD [0-9]: name /dev/nvme0 HDD [0-9]: model SAMSUNG MZVLB1T0HBLR-000L7 HDD [0-9]: serial S4EMNF0M820682 HDD [0-9]: temperature 37.0 DegC not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-poweronhours","title":"HDD PowerOnHours","text":"/diagnostics/hdd_monitor: HDD PowerOnHours
[summary]
level message OK OK WARN lifetime limit[values]
key value (example) HDD [0-9]: status OK / lifetime limit HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: power on hours 4834 Hours not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-totaldatawritten","title":"HDD TotalDataWritten","text":"/diagnostics/hdd_monitor: HDD TotalDataWritten
[summary]
level message OK OK WARN warranty period[values]
key value (example) HDD [0-9]: status OK / warranty period HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: total data written 146295330 not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-recoverederror","title":"HDD RecoveredError","text":"/diagnostics/hdd_monitor: HDD RecoveredError
[summary]
level message OK OK WARN high soft error rate[values]
key value (example) HDD [0-9]: status OK / high soft error rate HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: recovered error 0 not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-usage","title":"HDD Usage","text":"/diagnostics/hdd_monitor: HDD Usage
[summary]
level message OK OK WARN low disk space ERROR very low disk space[values]
key value (example) HDD [0-9]: status OK / low disk space / very low disk space HDD [0-9]: filesystem /dev/nvme0n1p4 HDD [0-9]: size 264G HDD [0-9]: used 172G HDD [0-9]: avail 749G HDD [0-9]: use 69% HDD [0-9]: mounted on /"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-readdatarate","title":"HDD ReadDataRate","text":"/diagnostics/hdd_monitor: HDD ReadDataRate
[summary]
level message OK OK WARN high data rate of read[values]
key value (example) HDD [0-9]: status OK / high data rate of read HDD [0-9]: name /dev/nvme0 HDD [0-9]: data rate of read 0.00 MB/s"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-writedatarate","title":"HDD WriteDataRate","text":"/diagnostics/hdd_monitor: HDD WriteDataRate
[summary]
level message OK OK WARN high data rate of write[values]
key value (example) HDD [0-9]: status OK / high data rate of write HDD [0-9]: name /dev/nvme0 HDD [0-9]: data rate of write 0.00 MB/s"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-readiops","title":"HDD ReadIOPS","text":"/diagnostics/hdd_monitor: HDD ReadIOPS
[summary]
level message OK OK WARN high IOPS of read[values]
key value (example) HDD [0-9]: status OK / high IOPS of read HDD [0-9]: name /dev/nvme0 HDD [0-9]: IOPS of read 0.00 IOPS"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-writeiops","title":"HDD WriteIOPS","text":"/diagnostics/hdd_monitor: HDD WriteIOPS
[summary]
level message OK OK WARN high IOPS of write[values]
key value (example) HDD [0-9]: status OK / high IOPS of write HDD [0-9]: name /dev/nvme0 HDD [0-9]: IOPS of write 0.00 IOPS"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-connection","title":"HDD Connection","text":"/diagnostics/hdd_monitor: HDD Connection
[summary]
level message OK OK WARN not connected[values]
key value (example) HDD [0-9]: status OK / not connected HDD [0-9]: name /dev/nvme0 HDD [0-9]: mount point /"},{"location":"system/system_monitor/docs/topics_mem_monitor/","title":"ROS topics: Memory Monitor","text":""},{"location":"system/system_monitor/docs/topics_mem_monitor/#ros-topics-memory-monitor","title":"ROS topics: Memory Monitor","text":""},{"location":"system/system_monitor/docs/topics_mem_monitor/#memory-usage","title":"Memory Usage","text":"/diagnostics/mem_monitor: Memory Usage
[summary]
level message OK OK WARN high load ERROR very high load[values]
key value (example) Mem: usage 29.72% Mem: total 31.2G Mem: used 6.0G Mem: free 20.7G Mem: shared 2.9G Mem: buff/cache 4.5G Mem: available 21.9G Swap: total 2.0G Swap: used 218M Swap: free 1.8G Total: total 33.2G Total: used 6.2G Total: free 22.5G Total: used+ 9.1G"},{"location":"system/system_monitor/docs/topics_net_monitor/","title":"ROS topics: Net Monitor","text":""},{"location":"system/system_monitor/docs/topics_net_monitor/#ros-topics-net-monitor","title":"ROS topics: Net Monitor","text":""},{"location":"system/system_monitor/docs/topics_net_monitor/#network-connection","title":"Network Connection","text":"/diagnostics/net_monitor: Network Connection
[summary]
level message OK OK WARN no such device[values]
key value (example) Network [0-9]: status OK / no such device Network [0-9]: name wlp82s0"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-usage","title":"Network Usage","text":"/diagnostics/net_monitor: Network Usage
[summary]
level message OK OK[values]
key value (example) Network [0-9]: status OK Network [0-9]: interface name wlp82s0 Network [0-9]: rx_usage 0.00% Network [0-9]: tx_usage 0.00% Network [0-9]: rx_traffic 0.00 MB/s Network [0-9]: tx_traffic 0.00 MB/s Network [0-9]: capacity 400.0 MB/s Network [0-9]: mtu 1500 Network [0-9]: rx_bytes 58455228 Network [0-9]: rx_errors 0 Network [0-9]: tx_bytes 11069136 Network [0-9]: tx_errors 0 Network [0-9]: collisions 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-traffic","title":"Network Traffic","text":"/diagnostics/net_monitor: Network Traffic
[summary]
level message OK OK[values when specified program is detected]
key value (example) nethogs [0-9]: program /lambda/greengrassSystemComponents/1384/999 nethogs [0-9]: sent (KB/Sec) 1.13574 nethogs [0-9]: received (KB/Sec) 0.261914[values when error is occurring]
key value (example) error execve failed: No such file or directory"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-crc-error","title":"Network CRC Error","text":"/diagnostics/net_monitor: Network CRC Error
[summary]
level message OK OK WARN CRC error[values]
key value (example) Network [0-9]: interface name wlp82s0 Network [0-9]: total rx_crc_errors 0 Network [0-9]: rx_crc_errors per unit time 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#ip-packet-reassembles-failed","title":"IP Packet Reassembles Failed","text":"/diagnostics/net_monitor: IP Packet Reassembles Failed
[summary]
level message OK OK WARN reassembles failed[values]
key value (example) total packet reassembles failed 0 packet reassembles failed per unit time 0"},{"location":"system/system_monitor/docs/topics_ntp_monitor/","title":"ROS topics: NTP Monitor","text":""},{"location":"system/system_monitor/docs/topics_ntp_monitor/#ros-topics-ntp-monitor","title":"ROS topics: NTP Monitor","text":""},{"location":"system/system_monitor/docs/topics_ntp_monitor/#ntp-offset","title":"NTP Offset","text":"/diagnostics/ntp_monitor: NTP Offset
[summary]
level message OK OK WARN high ERROR too high[values]
key value (example) NTP Offset -0.013181 sec NTP Delay 0.053880 sec"},{"location":"system/system_monitor/docs/topics_process_monitor/","title":"ROS topics: Process Monitor","text":""},{"location":"system/system_monitor/docs/topics_process_monitor/#ros-topics-process-monitor","title":"ROS topics: Process Monitor","text":""},{"location":"system/system_monitor/docs/topics_process_monitor/#tasks-summary","title":"Tasks Summary","text":"/diagnostics/process_monitor: Tasks Summary
[summary]
level message OK OK[values]
key value (example) total 409 running 2 sleeping 321 stopped 0 zombie 0"},{"location":"system/system_monitor/docs/topics_process_monitor/#high-load-proc0-9","title":"High-load Proc[0-9]","text":"/diagnostics/process_monitor: High-load Proc[0-9]
[summary]
level message OK OK[values]
key value (example) COMMAND /usr/lib/firefox/firefox %CPU 37.5 %MEM 2.1 PID 14062 USER autoware PR 20 NI 0 VIRT 3461152 RES 669052 SHR 481208 S S TIME+ 23:57.49"},{"location":"system/system_monitor/docs/topics_process_monitor/#high-mem-proc0-9","title":"High-mem Proc[0-9]","text":"/diagnostics/process_monitor: High-mem Proc[0-9]
[summary]
level message OK OK[values]
key value (example) COMMAND /snap/multipass/1784/usr/bin/qemu-system-x86_64 %CPU 0 %MEM 2.5 PID 1565 USER root PR 20 NI 0 VIRT 3722320 RES 812432 SHR 20340 S S TIME+ 0:22.84"},{"location":"system/system_monitor/docs/topics_voltage_monitor/","title":"ROS topics: Voltage Monitor","text":""},{"location":"system/system_monitor/docs/topics_voltage_monitor/#ros-topics-voltage-monitor","title":"ROS topics: Voltage Monitor","text":"\"CMOS Battery Status\" and \"CMOS battery voltage\" are exclusive. Only one or the other is generated. Which one is generated depends on the value of cmos_battery_label.
"},{"location":"system/system_monitor/docs/topics_voltage_monitor/#cmos-battery-status","title":"CMOS Battery Status","text":"/diagnostics/voltage_monitor: CMOS Battery Status
[summary]
level message OK OK WARN Battery Dead[values]
key (example) value (example) CMOS battery status OK / Battery Dead
"},{"location":"system/system_monitor/docs/topics_voltage_monitor/#cmos-battery-voltage","title":"CMOS Battery Voltage","text":"/diagnostics/voltage_monitor: CMOS battery voltage
[summary]
level message OK OK WARN Low Battery WARN Battery Died[values]
key value (example) CMOS battery voltage 3.06"},{"location":"system/system_monitor/docs/traffic_reader/","title":"traffic_reader","text":""},{"location":"system/system_monitor/docs/traffic_reader/#traffic_reader","title":"traffic_reader","text":""},{"location":"system/system_monitor/docs/traffic_reader/#name","title":"Name","text":"traffic_reader - monitoring network traffic by process
"},{"location":"system/system_monitor/docs/traffic_reader/#synopsis","title":"Synopsis","text":"traffic_reader [OPTION]
"},{"location":"system/system_monitor/docs/traffic_reader/#description","title":"Description","text":"Monitoring network traffic by process. This runs as a daemon process and listens to a TCP/IP port (7636 by default).
Options: -h, --help \u00a0\u00a0\u00a0\u00a0Display help -p, --port # \u00a0\u00a0\u00a0\u00a0Port number to listen to
Exit status: Returns 0 if OK; non-zero otherwise.
"},{"location":"system/system_monitor/docs/traffic_reader/#notes","title":"Notes","text":"The 'traffic_reader' requires nethogs command.
"},{"location":"system/system_monitor/docs/traffic_reader/#operation-confirmed-platform","title":"Operation confirmed platform","text":"This node monitors input topic for abnormalities such as timeout and low frequency. The result of topic status is published as diagnostics.
"},{"location":"system/topic_state_monitor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The types of topic status and corresponding diagnostic status are following.
Topic status Diagnostic status DescriptionOK
OK The topic has no abnormalities NotReceived
ERROR The topic has not been received yet WarnRate
WARN The frequency of the topic has dropped ErrorRate
ERROR The frequency of the topic has dropped significantly Timeout
ERROR The topic subscription is stopped for a certain time"},{"location":"system/topic_state_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/topic_state_monitor/#input","title":"Input","text":"Name Type Description any name any type Subscribe target topic to monitor"},{"location":"system/topic_state_monitor/#output","title":"Output","text":"Name Type Description /diagnostics
diagnostic_msgs/DiagnosticArray
Diagnostics outputs"},{"location":"system/topic_state_monitor/#parameters","title":"Parameters","text":""},{"location":"system/topic_state_monitor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description topic
string - Name of target topic topic_type
string - Type of target topic (used if the topic is not transform) frame_id
string - Frame ID of transform parent (used if the topic is transform) child_frame_id
string - Frame ID of transform child (used if the topic is transform) transient_local
bool false QoS policy of topic subscription (Transient Local/Volatile) best_effort
bool false QoS policy of topic subscription (Best Effort/Reliable) diag_name
string - Name used for the diagnostics to publish update_rate
double 10.0 Timer callback period [Hz]"},{"location":"system/topic_state_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description warn_rate
double 0.5 If the topic rate is lower than this value, the topic status becomes WarnRate
error_rate
double 0.1 If the topic rate is lower than this value, the topic status becomes ErrorRate
timeout
double 1.0 If the topic subscription is stopped for more than this time [s], the topic status becomes Timeout
window_size
int 10 Window size of target topic for calculating frequency"},{"location":"system/topic_state_monitor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
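As a hedged illustration of the status rules above (the real node is implemented in C++ and is not shown here), the following self-contained Python sketch estimates the topic rate from the receive times of the last window_size messages and maps it to a status; the threshold names mirror the warn_rate, error_rate, timeout and window_size parameters, and everything else is illustrative.
# Hedged sketch of the topic status classification described above; not the real implementation.
from collections import deque
import time

WARN_RATE = 0.5    # [Hz] rate below this -> WarnRate
ERROR_RATE = 0.1   # [Hz] rate below this -> ErrorRate
TIMEOUT = 1.0      # [s] silence longer than this -> Timeout
WINDOW_SIZE = 10   # number of receive times used to estimate the rate

receive_times = deque(maxlen=WINDOW_SIZE)

def on_message_received():
    # Call this from the subscription callback.
    receive_times.append(time.monotonic())

def topic_status(now):
    if not receive_times:
        return 'NotReceived'   # ERROR
    if now - receive_times[-1] > TIMEOUT:
        return 'Timeout'       # ERROR
    if len(receive_times) < 2:
        return 'OK'
    span = receive_times[-1] - receive_times[0]
    rate = (len(receive_times) - 1) / span if span > 0.0 else float('inf')
    if rate < ERROR_RATE:
        return 'ErrorRate'     # ERROR
    if rate < WARN_RATE:
        return 'WarnRate'      # WARN
    return 'OK'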
"},{"location":"system/velodyne_monitor/","title":"velodyne_monitor","text":""},{"location":"system/velodyne_monitor/#velodyne_monitor","title":"velodyne_monitor","text":""},{"location":"system/velodyne_monitor/#purpose","title":"Purpose","text":"This node monitors the status of Velodyne LiDARs. The result of the status is published as diagnostics. Take care not to use this diagnostics to decide the lidar error. Please read Assumptions / Known limits for the detail reason.
"},{"location":"system/velodyne_monitor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The status of Velodyne LiDAR can be retrieved from http://[ip_address]/cgi/{info, settings, status, diag}.json
.
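For illustration only, one of these pages can be fetched with a few lines of Python; the URL pattern is the one quoted above, while the choice of diag.json, the IP address and the 0.5 s timeout simply echo the default parameters listed below (this is not the node's actual C++ HTTP client).
# Hedged sketch: fetch one status page from the sensor.
import json
import urllib.request

ip_address = '192.168.1.201'  # default ip_address parameter
url = 'http://' + ip_address + '/cgi/diag.json'
try:
    with urllib.request.urlopen(url, timeout=0.5) as response:
        print(json.load(response))
except OSError as exc:
    # A failed request corresponds to the 'Connection error' ERROR status below.
    print('cannot get Velodyne LiDAR status:', exc)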
The types of abnormal status and the corresponding diagnostics status are as follows.
Abnormal status Diagnostic status No abnormality OK Top board temperature is too cold ERROR Top board temperature is cold WARN Top board temperature is too hot ERROR Top board temperature is hot WARN Bottom board temperature is too cold ERROR Bottom board temperature is cold WARN Bottom board temperature is too hot ERROR Bottom board temperature is hot WARN Rpm(Rotations per minute) of the motor is too low ERROR Rpm(Rotations per minute) of the motor is low WARN Connection error (cannot get Velodyne LiDAR status) ERROR"},{"location":"system/velodyne_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/velodyne_monitor/#input","title":"Input","text":"None
"},{"location":"system/velodyne_monitor/#output","title":"Output","text":"Name Type Description/diagnostics
diagnostic_msgs/DiagnosticArray
Diagnostics outputs"},{"location":"system/velodyne_monitor/#parameters","title":"Parameters","text":""},{"location":"system/velodyne_monitor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description timeout
double 0.5 Timeout for HTTP request to get Velodyne LiDAR status [s]"},{"location":"system/velodyne_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description ip_address
string \"192.168.1.201\" IP address of target Velodyne LiDAR temp_cold_warn
double -5.0 If the temperature of Velodyne LiDAR is lower than this value, the diagnostics status becomes WARN [\u00b0C] temp_cold_error
double -10.0 If the temperature of Velodyne LiDAR is lower than this value, the diagnostics status becomes ERROR [\u00b0C] temp_hot_warn
double 75.0 If the temperature of Velodyne LiDAR is higher than this value, the diagnostics status becomes WARN [\u00b0C] temp_hot_error
double 80.0 If the temperature of Velodyne LiDAR is higher than this value, the diagnostics status becomes ERROR [\u00b0C] rpm_ratio_warn
double 0.80 If the rpm rate of the motor (= current rpm / default rpm) is lower than this value, the diagnostics status becomes WARN rpm_ratio_error
double 0.70 If the rpm rate of the motor (= current rpm / default rpm) is lower than this value, the diagnostics status becomes ERROR"},{"location":"system/velodyne_monitor/#config-files","title":"Config files","text":"Config files for several velodyne models are prepared. The temp_***
parameters are set with reference to the operational temperature from each datasheet. Moreover, the temp_hot_***
of each model are set 20 DegC higher than the operational temperature. Currently, VLP-16.param.yaml
is used as the default argument because it is the lowest-spec model.
This node uses the http_client and requests the results via the GET method. It takes a few seconds to get the results, or generates a timeout exception if the GET request does not succeed. This occurs frequently, and the diagnostics aggregator then outputs STALE. Therefore, it is recommended not to use these results to decide lidar errors, but to monitor them only to confirm the lidar status.
"},{"location":"tools/simulator_test/simulator_compatibility_test/","title":"simulator_compatibility_test","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#simulator_compatibility_test","title":"simulator_compatibility_test","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#purpose","title":"Purpose","text":"Test procedures (e.g. test codes) to check whether a certain simulator is compatible with Autoware
"},{"location":"tools/simulator_test/simulator_compatibility_test/#overview-of-the-test-codes","title":"Overview of the test codes","text":"File structure
source install/setup.bash\ncolcon build --packages-select simulator_compatibility_test\ncd src/universe/autoware.universe/tools/simulator_test/simulator_compatibility_test/test_sim_common_manual_testing\n
To run each test case manually
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-1","title":"Test Case #1","text":"Run the test using the following command
python -m pytest test_01_control_mode_and_report.py\n
Check if expected behavior is created within the simulator
Run the test using the following command
python -m pytest test_02_change_gear_and_report.py\n
Check if expected behavior is created within the simulator
Run the test using the following command
python -m pytest test_03_longitudinal_command_and_report.py\n
Check if expected behavior is created within the simulator
Run the test using the following command
python -m pytest test_04_lateral_command_and_report.py\n
Check if expected behavior is created within the simulator
Run the test using the following command
python -m pytest test_05_turn_indicators_cmd_and_report.py\n
Check if expected behavior is created within the simulator
Run the test using the following command
python -m pytest test_06_hazard_lights_cmd_and_report.py\n
Check if expected behavior is created within the simulator
source install/setup.bash\ncolcon build --packages-select simulator_compatibility_test\ncd src/universe/autoware.universe/tools/simulator_test/simulator_compatibility_test/test_morai_sim\n
Detailed process
(WIP)
"},{"location":"tools/simulator_test/simulator_compatibility_test/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#input","title":"Input","text":"Name Type Description/vehicle/status/control_mode
autoware_auto_vehicle_msgs::msg::ControlModeReport
for [Test Case #1] /vehicle/status/gear_status
autoware_auto_vehicle_msgs::msg::GearReport
for [Test Case #2] /vehicle/status/velocity_status
autoware_auto_vehicle_msgs::msg::VelocityReport
for [Test Case #3] /vehicle/status/steering_status
autoware_auto_vehicle_msgs::msg::SteeringReport
for [Test Case #4] /vehicle/status/turn_indicators_status
autoware_auto_vehicle_msgs::msg::TurnIndicatorsReport
for [Test Case #5] /vehicle/status/hazard_lights_status
autoware_auto_vehicle_msgs::msg::HazardLightsReport
for [Test Case #6]"},{"location":"tools/simulator_test/simulator_compatibility_test/#output","title":"Output","text":"Name Type Description /control/command/control_mode_cmd
autoware_auto_vehicle_msgs/ControlModeCommand
for [Test Case #1] /control/command/gear_cmd
autoware_auto_vehicle_msgs/GearCommand
for [Test Case #2] /control/command/control_cmd
autoware_auto_vehicle_msgs/AckermannControlCommand
for [Test Case #3, #4] /vehicle/status/steering_status
autoware_auto_vehicle_msgs/TurnIndicatorsCommand
for [Test Case #5] /control/command/turn_indicators_cmd
autoware_auto_vehicle_msgs/HazardLightsCommand
for [Test Case #6]"},{"location":"tools/simulator_test/simulator_compatibility_test/#parameters","title":"Parameters","text":"None.
"},{"location":"tools/simulator_test/simulator_compatibility_test/#node-parameters","title":"Node Parameters","text":"None.
"},{"location":"tools/simulator_test/simulator_compatibility_test/#core-parameters","title":"Core Parameters","text":"None.
"},{"location":"tools/simulator_test/simulator_compatibility_test/#assumptions-known-limits","title":"Assumptions / Known limits","text":"None.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/","title":"accel_brake_map_calibrator","text":""},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#accel_brake_map_calibrator","title":"accel_brake_map_calibrator","text":"The role of this node is to automatically calibrate accel_map.csv
/ brake_map.csv
used in the raw_vehicle_cmd_converter
node.
The base map, which is lexus's one by default, is updated iteratively with the loaded driving data.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#how-to-calibrate","title":"How to calibrate","text":""},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#launch-calibrator","title":"Launch Calibrator","text":"After launching Autoware, run the accel_brake_map_calibrator
by the following command and then perform autonomous driving. Note: You can collect data with manual driving if it is possible to use the same vehicle interface as during autonomous driving (e.g. using a joystick).
ros2 launch accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true\n
Or if you want to use rosbag files, run the following commands.
ros2 launch accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true use_sim_time:=true\nros2 bag play <rosbag_file> --clock\n
During the calibration with setting the parameter progress_file_output
to true, the log file is output in [directory of accel_brake_map_calibrator]/config/ . You can also see accel and brake maps in [directory of accel_brake_map_calibrator]/config/accel_map.csv and [directory of accel_brake_map_calibrator]/config/brake_map.csv after calibration.
The rviz:=true
option displays the RViz with a calibration plugin as below.
The current status (velocity and pedal) is shown in the plugin. The color on the current cell varies green/red depending on the current data is valid/invalid. The data that doesn't satisfy the following conditions are considered invalid and will not be used for estimation since aggressive data (e.g. when the pedal is moving fast) causes bad calibration accuracy.
The detailed parameters are described in the parameter section.
Note: You don't need to worry about whether the current state is red or green during calibration. Just keep getting data until all the cells turn red.
The value of each cell in the map is gray at first, and it changes from blue to red as the number of valid data in the cell accumulates. It is preferable to continue the calibration until each cell of the map becomes close to red. In particular, the performance near the stop depends strongly on the velocity of 0 ~ 6m/s range and the pedal value of +0.2 ~ -0.4, range so it is desirable to focus on those areas.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#diagnostics","title":"Diagnostics","text":"The accel brake map_calibrator
publishes diagnostics message depending on the calibration status. Diagnostic type WARN
indicates that the current accel/brake map is estimated to be inaccurate. In this situation, it is strongly recommended to perform a re-calibration of the accel/brake map.
OK
\"OK\" Calibration Required WARN
\"Accel/brake map Calibration is required.\" The accuracy of current accel/brake map may be low. This diagnostics status can be also checked on the following ROS topic.
ros2 topic echo /accel_brake_map_calibrator/output/update_suggest\n
When the diagnostics type is WARN
, True
is published on this topic and the update of the accel/brake map is suggested.
The accuracy of map is evaluated by the Root Mean Squared Error (RMSE) between the observed acceleration and predicted acceleration.
TERMS:
Observed acceleration
: the current vehicle acceleration which is calculated as a derivative value of the wheel speed.Predicted acceleration
: the output of the original accel/brake map, which the Autoware is expecting. The value is calculated using the current pedal and velocity.You can check additional error information with the following topics.
/accel_brake_map_calibrator/output/current_map_error
: The error of the original map set in the csv_path_accel/brake_map
path. The original map is not accurate if this value is large./accel_brake_map_calibrator/output/updated_map_error
: The error of the map calibrated in this node. The calibration quality is low if this value is large./accel_brake_map_calibrator/output/map_error_ratio
: The error ratio between the original map and updated map (ratio = updated / current). If this value is less than 1, it is desirable to update the map.The process of calibration can be visualized as below. Since these scripts need the log output of the calibration, the pedal_accel_graph_output
parameter must be set to true while the calibration is running for the visualization.
The following command shows the plot of used data in the calibration. In each plot of velocity ranges, you can see the distribution of the relationship between pedal and acceleration, and raw data points with colors according to their pitch angles.
ros2 run accel_brake_map_calibrator view_plot.py\n
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#visualize-statistics-about-accelerationvelocitypedal-data","title":"Visualize statistics about acceleration/velocity/pedal data","text":"The following command shows the statistics of the calibration:
of all data in each map cell.
ros2 run accel_brake_map_calibrator view_statistics.py\n
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#how-to-save-the-calibrated-accel-brake-map-anytime-you-want","title":"How to save the calibrated accel / brake map anytime you want","text":"You can save accel and brake map anytime with the following command.
ros2 service call /accel_brake_map_calibrator/update_map_dir tier4_vehicle_msgs/srv/UpdateAccelBrakeMap \"path: '<accel/brake map directory>'\"\n
You can also save accel and brake map in the default directory where Autoware reads accel_map.csv/brake_map.csv using the RViz plugin (AccelBrakeMapCalibratorButtonPanel) as following.
Click Panels tab, and select AccelBrakeMapCalibratorButtonPanel.
Select the panel, and the button will appear at the bottom of RViz.
Press the button, and the accel / brake map will be saved. (The button cannot be pressed in certain situations, such as when the calibrator node is not running.)
These scripts are useful to test for accel brake map calibration. These generate an ActuationCmd
with a constant accel/brake value given interactively by a user through CLI.
The accel/brake_tester.py
receives a target accel/brake command from CLI. It sends a target value to actuation_cmd_publisher.py
which generates the ActuationCmd
. You can run these scripts by the following commands in the different terminals, and it will be as in the screenshot below.
ros2 run accel_brake_map_calibrator accel_tester.py\nros2 run accel_brake_map_calibrator brake_tester.py\nros2 run accel_brake_map_calibrator actuation_cmd_publisher.py\n
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#calibration-method","title":"Calibration Method","text":"Two algorithms are selectable for the acceleration map update, update_offset_four_cell_around and update_offset_each_cell. Please see the link for details.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#data-preprocessing","title":"Data Preprocessing","text":"Before calibration, missing or unusable data (e.g., too large handle angles) must first be eliminated. The following parameters are used to determine which data to remove.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#parameters_1","title":"Parameters","text":"Name Description Default Value velocity_min_threshold Exclude minimal velocity 0.1 max_steer_threshold Exclude large steering angle 0.2 max_pitch_threshold Exclude large pitch angle 0.02 max_jerk_threshold Exclude large jerk 0.7 pedal_velocity_thresh Exclude large pedaling speed 0.15"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#update_offset_each_cell","title":"update_offset_each_cell","text":"Update by Recursive Least Squares(RLS) method using data close enough to each grid.
Advantage : Only data close enough to each grid is used for calibration, allowing accurate updates at each point.
Disadvantage : Calibration is time-consuming due to a large amount of data to be excluded.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#parameters_2","title":"Parameters","text":"Data selection is determined by the following thresholds. | Name | Default Value | | ----------------------- | ------------- | | velocity_diff_threshold | 0.556 | | pedal_diff_threshold | 0.03 |
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#update-formula","title":"Update formula","text":"\\[ \\begin{align} \\theta[n]=& \\theta[n-1]+\\frac{p[n-1]x^{(n)}}{\\lambda+p[n-1]{(x^{(n)})}^2}(y^{(n)}-\\theta[n-1]x^{(n)})\\\\ p[n]=&\\frac{p[n-1]}{\\lambda+p[n-1]{(x^{(n)})}^2} \\end{align} \\]"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#variables","title":"Variables","text":"Variable name Symbol covariance \\(p[n-1]\\) map_offset \\(\\theta[n]\\) forgettingfactor \\(\\lambda\\) phi \\(x(=1)\\) measured_acc \\(y\\)"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#update_offset_four_cell_around-1","title":"update_offset_four_cell_around [1]","text":"Update the offsets by RLS in four grids around newly obtained data. By considering linear interpolation, the update takes into account appropriate weights. Therefore, there is no need to remove data by thresholding.
Advantage : No data is wasted because updates are performed on the 4 grids around the data with appropriate weighting. Disadvantage : Accuracy may be degraded due to extreme bias of the data. For example, if data \\(z(k)\\) is biased near \\(Z_{RR}\\) in Fig. 2, updating is performed at the four surrounding points ( \\(Z_{RR}\\), \\(Z_{RL}\\), \\(Z_{LR}\\), and \\(Z_{LL}\\)), but accuracy at \\(Z_{LL}\\) is not expected.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#implementation","title":"Implementation","text":"
See eq.(7)-(10) in [1] for the updated formula. In addition, eq.(17),(18) from [1] are used for Anti-Windup.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#references","title":"References","text":"[1] Gabrielle Lochrie, Michael Doljevic, Mario Nona, Yongsoon Yoon, Anti-Windup Recursive Least Squares Method for Adaptive Lookup Tables with Application to Automotive Powertrain Control Systems, IFAC-PapersOnLine, Volume 54, Issue 20, 2021, Pages 840-845
"},{"location":"vehicle/external_cmd_converter/","title":"external_cmd_converter","text":""},{"location":"vehicle/external_cmd_converter/#external_cmd_converter","title":"external_cmd_converter","text":"external_cmd_converter
is a node that converts desired mechanical input to acceleration and velocity by using accel/brake map.
~/in/external_control_cmd
tier4_external_api_msgs::msg::ControlCommand target throttle/brake/steering_angle/steering_angle_velocity
is necessary to calculate desired control command. ~/input/shift_cmd\"
autoware_auto_vehicle_msgs::GearCommand current command of gear. ~/input/emergency_stop
tier4_external_api_msgs::msg::Heartbeat emergency heart beat for external command. ~/input/current_gate_mode
tier4_control_msgs::msg::GateMode topic for gate mode. ~/input/odometry
navigation_msgs::Odometry twist topic in odometry is used."},{"location":"vehicle/external_cmd_converter/#output-topics","title":"Output topics","text":"Name Type Description ~/out/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand ackermann control command converted from selected external command"},{"location":"vehicle/external_cmd_converter/#parameters","title":"Parameters","text":"Parameter Type Description timer_rate
double timer's update rate wait_for_first_topic
double if time out check is done after receiving first topic control_command_timeout
double time out check for control command emergency_stop_timeout
double time out check for emergency stop command"},{"location":"vehicle/external_cmd_converter/#limitation","title":"Limitation","text":"tbd.
"},{"location":"vehicle/raw_vehicle_cmd_converter/","title":"raw_vehicle_cmd_converter","text":""},{"location":"vehicle/raw_vehicle_cmd_converter/#raw_vehicle_cmd_converter","title":"raw_vehicle_cmd_converter","text":""},{"location":"vehicle/raw_vehicle_cmd_converter/#overview","title":"Overview","text":"The raw_vehicle_command_converter is a crucial node in vehicle automation systems, responsible for translating desired steering and acceleration inputs into specific vehicle control commands. This process is achieved through a combination of a lookup table and an optional feedback control system.
"},{"location":"vehicle/raw_vehicle_cmd_converter/#lookup-table","title":"Lookup Table","text":"The core of the converter's functionality lies in its use of a CSV-formatted lookup table. This table encapsulates the relationship between the throttle/brake pedal (depending on your vehicle control interface) and the corresponding vehicle acceleration across various speeds. The converter utilizes this data to accurately translate target accelerations into appropriate throttle/brake values.
"},{"location":"vehicle/raw_vehicle_cmd_converter/#creation-of-reference-data","title":"Creation of Reference Data","text":"Reference data for the lookup table is generated through the following steps:
Once the acceleration map is crafted, it should be loaded when the RawVehicleCmdConverter node is launched, with the file path defined in the launch file.
"},{"location":"vehicle/raw_vehicle_cmd_converter/#auto-calibration-tool","title":"Auto-Calibration Tool","text":"For ease of calibration and adjustments to the lookup table, an auto-calibration tool is available. More information and instructions for this tool can be found here.
"},{"location":"vehicle/raw_vehicle_cmd_converter/#input-topics","title":"Input topics","text":"Name Type Description~/input/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand target velocity/acceleration/steering_angle/steering_angle_velocity
is necessary to calculate actuation command. ~/input/steering\"
autoware_auto_vehicle_msgs::SteeringReport current status of steering used for steering feed back control ~/input/twist
navigation_msgs::Odometry twist topic in odometry is used."},{"location":"vehicle/raw_vehicle_cmd_converter/#output-topics","title":"Output topics","text":"Name Type Description ~/output/actuation_cmd
tier4_vehicle_msgs::msg::ActuationCommandStamped actuation command for vehicle to apply mechanical input"},{"location":"vehicle/raw_vehicle_cmd_converter/#parameters","title":"Parameters","text":"Name Type Description Default Range convert_accel_cmd boolean use accel or not true N/A convert_brake_cmd boolean use brake or not true N/A convert_steer_cmd boolean use steer or not true N/A use_steer_ff boolean steering steer controller using steer feed forward or not true N/A use_steer_fb boolean steering steer controller using steer feed back or not true N/A is_debugging boolean debugging mode or not false N/A max_throttle float maximum value of throttle 0.4 \u22650.0 max_brake float maximum value of brake 0.8 \u22650.0 max_steer float maximum value of steer 10.0 N/A min_steer float minimum value of steer -10.0 N/A steer_pid.kp float proportional coefficient value in PID control 150.0 N/A steer_pid.ki float integral coefficient value in PID control 15.0 >0.0 steer_pid.kd float derivative coefficient value in PID control 0.0 N/A steer_pid.max float maximum value of PID 8.0 N/A steer_pid.min float minimum value of PID -8.0. N/A steer_pid.max_p float maximum value of Proportional in PID 8.0 N/A steer_pid.min_p float minimum value of Proportional in PID -8.0 N/A steer_pid.max_i float maximum value of Integral in PID 8.0 N/A steer_pid.min_i float minimum value of Integral in PID -8.0 N/A steer_pid.max_d float maximum value of Derivative in PID 0.0 N/A steer_pid.min_d float minimum value of Derivative in PID 0.0 N/A steer_pid.invalid_integration_decay float invalid integration decay value in PID control 0.97 >0.0"},{"location":"vehicle/raw_vehicle_cmd_converter/#limitation","title":"Limitation","text":"The current feed back implementation is only applied to steering control.
"},{"location":"vehicle/steer_offset_estimator/Readme/","title":"steer_offset_estimator","text":""},{"location":"vehicle/steer_offset_estimator/Readme/#steer_offset_estimator","title":"steer_offset_estimator","text":""},{"location":"vehicle/steer_offset_estimator/Readme/#purpose","title":"Purpose","text":"The role of this node is to automatically calibrate steer_offset
used in the vehicle_interface
node.
The base steer offset value is 0 by default, which is standard, is updated iteratively with the loaded driving data. This module is supposed to be used in below straight driving situation.
"},{"location":"vehicle/steer_offset_estimator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Estimates sequential steering offsets from kinematic model and state observations. Calculate yaw rate error and then calculate steering error recursively by least squared method, for more details see updateSteeringOffset()
function.
~/input/twist
geometry_msgs::msg::TwistStamped
vehicle twist ~/input/steer
autoware_auto_vehicle_msgs::msg::SteeringReport
steering"},{"location":"vehicle/steer_offset_estimator/Readme/#output","title":"Output","text":"Name Type Description ~/output/steering_offset
tier4_debug_msgs::msg::Float32Stamped
steering offset ~/output/steering_offset_covariance
tier4_debug_msgs::msg::Float32Stamped
covariance of steering offset"},{"location":"vehicle/steer_offset_estimator/Readme/#launch-calibrator","title":"Launch Calibrator","text":"After launching Autoware, run the steer_offset_estimator
by the following command and then perform autonomous driving. Note: You can collect data with manual driving if it is possible to use the same vehicle interface as during autonomous driving (e.g. using a joystick).
ros2 launch steer_offset_estimator steer_offset_estimator.launch.xml\n
Or if you want to use rosbag files, run the following commands.
ros2 param set /use_sim_time true\nros2 bag play <rosbag_file> --clock\n
"},{"location":"vehicle/steer_offset_estimator/Readme/#parameters","title":"Parameters","text":"Name Type Description Default Range initial_covariance float steer offset is larger than tolerance 1000 N/A steer_update_hz float update hz of steer data 10 \u22650.0 forgetting_factor float weight of using previous value 0.999 \u22650.0 valid_min_velocity float velocity below this value is not used 5 \u22650.0 valid_max_steer float steer above this value is not used 0.05 N/A warn_steer_offset_deg float Warn if offset is above this value. ex. if absolute estimated offset is larger than 2.5[deg] => warning 2.5 N/A"},{"location":"vehicle/steer_offset_estimator/Readme/#diagnostics","title":"Diagnostics","text":"The steer_offset_estimator
publishes diagnostics message depending on the calibration status. Diagnostic type WARN
indicates that the current steer_offset is estimated to be inaccurate. In this situation, it is strongly recommended to perform a re-calibration of the steer_offset.
OK
\"Preparation\" Calibration Required WARN
\"Steer offset is larger than tolerance\" This diagnostics status can be also checked on the following ROS topic.
ros2 topic echo /vehicle/status/steering_offset\n
"},{"location":"vehicle/vehicle_info_util/Readme/","title":"Vehicle Info Util","text":""},{"location":"vehicle/vehicle_info_util/Readme/#vehicle-info-util","title":"Vehicle Info Util","text":""},{"location":"vehicle/vehicle_info_util/Readme/#purpose","title":"Purpose","text":"This package is to get vehicle info parameters.
"},{"location":"vehicle/vehicle_info_util/Readme/#description","title":"Description","text":"In here, you can check the vehicle dimensions with more detail. The current format supports only the Ackermann model. This file defines the model assumed in autoware path planning, control, etc. and does not represent the exact physical model. If a model other than the Ackermann model is used, it is assumed that a vehicle interface will be designed to change the control output for the model.
"},{"location":"vehicle/vehicle_info_util/Readme/#versioning-policy","title":"Versioning Policy","text":"We have implemented a versioning system for the vehicle_info.param.yaml
file to ensure clarity and consistency in file format across different versions of Autoware and its external applications. Please see discussion for the details.
version:
field is commented out).0.1.0
. Follow the semantic versioning format (MAJOR.MINOR.PATCH)./**:\nros__parameters:\n# version: 0.1.0 # Uncomment and update this line for future format changes.\nwheel_radius: 0.383\n...\n
"},{"location":"vehicle/vehicle_info_util/Readme/#why-versioning","title":"Why Versioning?","text":"vehicle_info.param.yaml
need to reference the correct file version for optimal compatibility and functionality.vehicle_info.param.yaml
file simplifies management compared to maintaining separate versions for multiple customized Autoware branches. This approach streamlines version tracking and reduces complexity.$ ros2 run vehicle_info_util min_turning_radius_calculator.py\nyaml path is /home/autoware/pilot-auto/install/vehicle_info_util/share/vehicle_info_util/config/vehicle_info.param.yaml\nMinimum turning radius is 3.253042620027102 [m] for rear, 4.253220695862465 [m] for front.\n
You can designate yaml file with -y
option as follows.
ros2 run vehicle_info_util min_turning_radius_calculator.py -y <path-to-yaml>\n
"},{"location":"vehicle/vehicle_info_util/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"autoware.universe","text":""},{"location":"#autowareuniverse","title":"autoware.universe","text":"For Autoware's general documentation, see Autoware Documentation.
For detailed documents of Autoware Universe components, see Autoware Universe Documentation.
"},{"location":"CODE_OF_CONDUCT/","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#contributor-covenant-code-of-conduct","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#our-pledge","title":"Our Pledge","text":"We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
"},{"location":"CODE_OF_CONDUCT/#our-standards","title":"Our Standards","text":"Examples of behavior that contributes to a positive environment for our community include:
Examples of unacceptable behavior include:
Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
"},{"location":"CODE_OF_CONDUCT/#scope","title":"Scope","text":"This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
"},{"location":"CODE_OF_CONDUCT/#enforcement","title":"Enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at conduct@autoware.org. All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the reporter of any incident.
"},{"location":"CODE_OF_CONDUCT/#enforcement-guidelines","title":"Enforcement Guidelines","text":"Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
"},{"location":"CODE_OF_CONDUCT/#1-correction","title":"1. Correction","text":"Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
"},{"location":"CODE_OF_CONDUCT/#2-warning","title":"2. Warning","text":"Community Impact: A violation through a single incident or series of actions.
Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
"},{"location":"CODE_OF_CONDUCT/#3-temporary-ban","title":"3. Temporary Ban","text":"Community Impact: A serious violation of community standards, including sustained inappropriate behavior.
Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
"},{"location":"CODE_OF_CONDUCT/#4-permanent-ban","title":"4. Permanent Ban","text":"Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
Consequence: A permanent ban from any sort of public interaction within the community.
"},{"location":"CODE_OF_CONDUCT/#attribution","title":"Attribution","text":"This Code of Conduct is adapted from the Contributor Covenant, version 2.1, available at https://www.contributor-covenant.org/version/2/1/code_of_conduct.html.
Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder.
For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
"},{"location":"CONTRIBUTING/","title":"Contributing","text":""},{"location":"CONTRIBUTING/#contributing","title":"Contributing","text":"See https://autowarefoundation.github.io/autoware-documentation/main/contributing/.
"},{"location":"DISCLAIMER/","title":"DISCLAIMER","text":"DISCLAIMER
\u201cAutoware\u201d will be provided by The Autoware Foundation under the Apache License 2.0. This \u201cDISCLAIMER\u201d will be applied to all users of Autoware (a \u201cUser\u201d or \u201cUsers\u201d) with the Apache License 2.0 and Users shall hereby approve and acknowledge all the contents specified in this disclaimer below and will be deemed to consent to this disclaimer without any objection upon utilizing or downloading Autoware.
Disclaimer and Waiver of Warranties
AUTOWARE FOUNDATION MAKES NO REPRESENTATION OR WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, WITH RESPECT TO PROVIDING AUTOWARE (the \u201cService\u201d) including but not limited to any representation or warranty (i) of fitness or suitability for a particular purpose contemplated by the Users, (ii) of the expected functions, commercial value, accuracy, or usefulness of the Service, (iii) that the use by the Users of the Service complies with the laws and regulations applicable to the Users or any internal rules established by industrial organizations, (iv) that the Service will be free of interruption or defects, (v) of the non-infringement of any third party's right and (vi) the accuracy of the content of the Services and the software itself.
The Autoware Foundation shall not be liable for any damage incurred by the User that are attributable to the Autoware Foundation for any reasons whatsoever. UNDER NO CIRCUMSTANCES SHALL THE AUTOWARE FOUNDATION BE LIABLE FOR INCIDENTAL, INDIRECT, SPECIAL OR FUTURE DAMAGES OR LOSS OF PROFITS.
A User shall be entirely responsible for the content posted by the User and its use of any content of the Service or the Website. If the User is held responsible in a civil action such as a claim for damages or even in a criminal case, the Autoware Foundation and member companies, governments and academic & non-profit organizations and their directors, officers, employees and agents (collectively, the \u201cIndemnified Parties\u201d) shall be completely discharged from any rights or assertions the User may have against the Indemnified Parties, or from any legal action, litigation or similar procedures.
Indemnity
A User shall indemnify and hold the Indemnified Parties harmless from any of their damages, losses, liabilities, costs or expenses (including attorneys' fees or criminal compensation), or any claims or demands made against the Indemnified Parties by any third party, due to or arising out of, or in connection with utilizing Autoware (including the representations and warranties), the violation of applicable Product Liability Law of each country (including criminal case) or violation of any applicable laws by the Users, or the content posted by the User or its use of any content of the Service or the Website.
"},{"location":"common/autoware_ad_api_specs/","title":"autoware_adapi_specs","text":""},{"location":"common/autoware_ad_api_specs/#autoware_adapi_specs","title":"autoware_adapi_specs","text":"This package is a specification of Autoware AD API.
"},{"location":"common/autoware_auto_common/design/comparisons/","title":"Comparisons","text":""},{"location":"common/autoware_auto_common/design/comparisons/#comparisons","title":"Comparisons","text":"The float_comparisons.hpp
library is a simple set of functions for performing approximate numerical comparisons. There are separate functions for performing comparisons using absolute bounds and relative bounds. Absolute comparison checks are prefixed with abs_
and relative checks are prefixed with rel_
.
The bool_comparisons.hpp
library additionally contains an XOR operator.
The intent of the library is to improve readability of code and reduce likelihood of typographical errors when using numerical and boolean comparisons.
"},{"location":"common/autoware_auto_common/design/comparisons/#target-use-cases","title":"Target use cases","text":"The approximate comparisons are intended to be used to check whether two numbers lie within some absolute or relative interval. The exclusive_or
function will test whether two values cast to different boolean values.
epsilon
parameter. The value of this parameter must be >= 0.#include \"autoware_auto_common/common/bool_comparisons.hpp\"\n#include \"autoware_auto_common/common/float_comparisons.hpp\"\n\n#include <iostream>\n\n// using-directive is just for illustration; don't do this in practice\nusing namespace autoware::common::helper_functions::comparisons;\n\nstatic constexpr auto epsilon = 0.2;\nstatic constexpr auto relative_epsilon = 0.01;\n\nstd::cout << exclusive_or(true, false) << \"\\n\";\n// Prints: true\n\nstd::cout << rel_eq(1.0, 1.1, relative_epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << approx_eq(10000.0, 10010.0, epsilon, relative_epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_eq(4.0, 4.2, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_ne(4.0, 4.2, epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << abs_eq_zero(0.2, epsilon) << \"\\n\";\n// Prints: false\n\nstd::cout << abs_lt(4.0, 4.25, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_lte(1.0, 1.2, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_gt(1.25, 1.0, epsilon) << \"\\n\";\n// Prints: true\n\nstd::cout << abs_gte(0.75, 1.0, epsilon) << \"\\n\";\n// Prints: false\n
"},{"location":"common/autoware_auto_geometry/design/interval/","title":"Interval","text":""},{"location":"common/autoware_auto_geometry/design/interval/#interval","title":"Interval","text":"The interval is a standard 1D real-valued interval. The class implements a representation and operations on the interval type and guarantees interval validity on construction. Basic operations and accessors are implemented, as well as other common operations. See 'Example Usage' below.
"},{"location":"common/autoware_auto_geometry/design/interval/#target-use-cases","title":"Target use cases","text":"NaN
.#include \"autoware_auto_geometry/interval.hpp\"\n\n#include <iostream>\n\n// using-directive is just for illustration; don't do this in practice\nusing namespace autoware::common::geometry;\n\n// bounds for example interval\nconstexpr auto MIN = 0.0;\nconstexpr auto MAX = 1.0;\n\n//\n// Try to construct an invalid interval. This will give the following error:\n// 'Attempted to construct an invalid interval: {\"min\": 1.0, \"max\": 0.0}'\n//\n\ntry {\nconst auto i = Interval_d(MAX, MIN);\n} catch (const std::runtime_error& e) {\nstd::cerr << e.what();\n}\n\n//\n// Construct a double precision interval from 0 to 1\n//\n\nconst auto i = Interval_d(MIN, MAX);\n\n//\n// Test accessors and properties\n//\n\nstd::cout << Interval_d::min(i) << \" \" << Interval_d::max(i) << \"\\n\";\n// Prints: 0.0 1.0\n\nstd::cout << Interval_d::empty(i) << \" \" << Interval_d::length(i) << \"\\n\";\n// Prints: false 1.0\n\nstd::cout << Interval_d::contains(i, 0.3) << \"\\n\";\n// Prints: true\n\nstd::cout << Interval_d::is_subset_eq(Interval_d(0.2, 0.4), i) << \"\\n\";\n// Prints: true\n\n//\n// Test operations.\n//\n\nstd::cout << Interval_d::intersect(i, Interval(-1.0, 0.3)) << \"\\n\";\n// Prints: {\"min\": 0.0, \"max\": 0.3}\n\nstd::cout << Interval_d::project_to_interval(i, 0.5) << \" \"\n<< Interval_d::project_to_interval(i, -1.3) << \"\\n\";\n// Prints: 0.5 0.0\n\n//\n// Distinguish empty/zero measure\n//\n\nconst auto i_empty = Interval();\nconst auto i_zero_length = Interval(0.0, 0.0);\n\nstd::cout << Interval_d::empty(i_empty) << \" \"\n<< Interval_d::empty(i_zero_length) << \"\\n\";\n// Prints: true false\n\nstd::cout << Interval_d::zero_measure(i_empty) << \" \"\n<< Interval_d::zero_measure(i_zero_length) << \"\\n\";\n// Prints: false false\n
"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/","title":"2D Convex Polygon Intersection","text":""},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#2d-convex-polygon-intersection","title":"2D Convex Polygon Intersection","text":"Two convex polygon's intersection can be visualized on the image below as the blue area:
"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#purpose-use-cases","title":"Purpose / Use cases","text":"Computing the intersection between two polygons can be useful in many applications of scene understanding. It can be used to estimate collision detection, shape alignment, shape association and in any application that deals with the objects around the perceiving agent.
"},{"location":"common/autoware_auto_geometry/design/polygon_intersection_2d-design/#design","title":"Design","text":"\\(Livermore, Calif, 1977\\) mention the following observations about convex polygon intersection:
With the observation mentioned above, the current algorithm operates in the following way:
Inputs:
Outputs:
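As an illustration of the algorithm described above, below is a minimal sketch; the names (e.g., intersect_convex) are hypothetical and this is not the package API. It collects each polygon's vertices contained in the other together with all edge-edge intersection points, and then orders the candidates by angle around their centroid, which is valid because the intersection of two convex polygons is itself convex.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <optional>
#include <vector>

struct Pt { double x, y; };

// 2D cross product of (a - o) and (b - o).
double cross(const Pt & o, const Pt & a, const Pt & b)
{
  return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// A point is inside a counter-clockwise convex polygon iff it lies on the
// left of (or on) every edge.
bool inside(const Pt & p, const std::vector<Pt> & poly)
{
  for (std::size_t i = 0; i < poly.size(); ++i) {
    if (cross(poly[i], poly[(i + 1) % poly.size()], p) < 0.0) return false;
  }
  return true;
}

// Proper intersection of segments ab and cd (collinear overlaps are skipped
// in this sketch).
std::optional<Pt> segment_intersection(const Pt & a, const Pt & b, const Pt & c, const Pt & d)
{
  const double d1 = cross(c, d, a), d2 = cross(c, d, b);
  const double d3 = cross(a, b, c), d4 = cross(a, b, d);
  if (d1 * d2 < 0.0 && d3 * d4 < 0.0) {
    const double t = d1 / (d1 - d2);
    return Pt{a.x + t * (b.x - a.x), a.y + t * (b.y - a.y)};
  }
  return std::nullopt;
}

std::vector<Pt> intersect_convex(const std::vector<Pt> & p1, const std::vector<Pt> & p2)
{
  std::vector<Pt> out;
  for (const auto & v : p1) { if (inside(v, p2)) out.push_back(v); }
  for (const auto & v : p2) { if (inside(v, p1)) out.push_back(v); }
  for (std::size_t i = 0; i < p1.size(); ++i) {
    for (std::size_t j = 0; j < p2.size(); ++j) {
      const auto x = segment_intersection(
        p1[i], p1[(i + 1) % p1.size()], p2[j], p2[(j + 1) % p2.size()]);
      if (x) { out.push_back(*x); }
    }
  }
  if (out.empty()) { return out; }
  // Sort the candidate vertices by angle around the centroid to obtain a
  // valid boundary ordering of the (convex) intersection.
  Pt c{0.0, 0.0};
  for (const auto & v : out) { c.x += v.x / out.size(); c.y += v.y / out.size(); }
  std::sort(out.begin(), out.end(), [&c](const Pt & a, const Pt & b) {
    return std::atan2(a.y - c.y, a.x - c.x) < std::atan2(b.y - c.y, b.x - c.x);
  });
  return out;
}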
The spatial hash is a data structure designed for efficient fixed-radius near-neighbor queries in low dimensions.
The fixed-radius near-neighbors problem is defined as follows:
For point p, find all points p' s.t. d(p, p') < r
Where in this case d(p, p')
is euclidean distance, and r
is the fixed radius.
For n
points with an average of k
neighbors each, this data structure can perform m
near-neighbor queries (to generate lists of near-neighbors for m
different points) in O(mk)
time.
By contrast, using a k-d tree for successive nearest-neighbor queries results in a running time of O(m log n)
.
The spatial hash works as follows:
x_min/x_max
and y_min/y_max
x_min
and y_min
as index (0, 0)
Under the hood, an std::unordered_multimap
is used, where the key is a bin/voxel index. The bin size was computed to be the same as the lookup distance.
In addition, this data structure can support 2D or 3D queries. This is determined during configuration, and baked into the data structure via the configuration class. The purpose of this was to avoid if statements in tight loops. The configuration class specializations themselves use CRTP (the Curiously Recurring Template Pattern) to implement \"static polymorphism\" and avoid a dispatching call.
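To make the binning concrete, below is a minimal 2D sketch under the stated assumptions; the class name SpatialHash2d is hypothetical, and the real implementation is templated and CRTP-based. Because the bin side length equals the lookup radius, every point within the radius of a query lies in the 3x3 block of bins centered on the query's bin, and candidates are then filtered by exact euclidean distance.

#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

class SpatialHash2d
{
  struct Point { double x, y; };

public:
  explicit SpatialHash2d(const double radius) : radius_(radius) {}

  // Insert a point into the bin that contains it.
  void insert(const double x, const double y)
  {
    bins_.emplace(key(index(x), index(y)), Point{x, y});
  }

  // Fixed-radius near-neighbor query: scan the 3x3 neighborhood of bins and
  // keep the candidates closer than the radius.
  std::vector<Point> near(const double x, const double y) const
  {
    std::vector<Point> result;
    const auto ix = index(x);
    const auto iy = index(y);
    for (std::int32_t dx = -1; dx <= 1; ++dx) {
      for (std::int32_t dy = -1; dy <= 1; ++dy) {
        const auto range = bins_.equal_range(key(ix + dx, iy + dy));
        for (auto it = range.first; it != range.second; ++it) {
          const auto & p = it->second;
          if (std::hypot(p.x - x, p.y - y) < radius_) { result.push_back(p); }
        }
      }
    }
    return result;
  }

private:
  // Bin index along one axis; the bin size equals the lookup radius.
  std::int32_t index(const double v) const
  {
    return static_cast<std::int32_t>(std::floor(v / radius_));
  }
  // Pack the two bin indexes into one multimap key.
  static std::uint64_t key(const std::int32_t ix, const std::int32_t iy)
  {
    return (static_cast<std::uint64_t>(static_cast<std::uint32_t>(ix)) << 32) |
           static_cast<std::uint32_t>(iy);
  }

  double radius_;
  std::unordered_multimap<std::uint64_t, Point> bins_;
};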
"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#performance-characterization","title":"Performance characterization","text":""},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#time","title":"Time","text":"Insertion is O(n)
because the worst-case lookup time for the underlying hashmap is O(n). In practice, hashmap lookup and thus insertion time should be O(1).
Removing a point is O(1)
because the current API only supports removal via direct reference to a node.
Finding k
near-neighbors is worst case O(n)
in the case of an adversarial example, but in practice O(k)
.
The module consists of the following components:
O(n + n + A * n)
, where A
is an arbitrary constant (load factor)O(n + n)
This results in O(n)
space complexity.
The spatial hash's state is dictated by the status of the underlying unordered_multimap.
The data structure is wholly configured by a config class. The constructor of the config class determines whether the data structure accepts strictly 2D or strictly 3D queries.
"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#inputs","title":"Inputs","text":"The primary method of introducing data into the data structure is via the insert method.
"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#outputs","title":"Outputs","text":"The primary method of retrieving data from the data structure is via the near\\(2D configuration\\) or near \\(3D configuration\\) method.
The whole data structure can also be traversed using standard constant iterators.
"},{"location":"common/autoware_auto_geometry/design/spatial-hash-design/#future-work","title":"Future Work","text":"It is an rviz plugin for visualizing the result from perception module. This package is based on the implementation of the rviz plugin developed by Autoware.Auto.
See Autoware.Auto design documentation for the original design philosophy. [1]
"},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types-visualization-results","title":"Input Types / Visualization Results","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#detectedobjects","title":"DetectedObjects","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types","title":"Input Types","text":"Name Type Descriptionautoware_auto_perception_msgs::msg::DetectedObjects
detection result array"},{"location":"common/autoware_auto_perception_rviz_plugin/#visualization-result","title":"Visualization Result","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#trackedobjects","title":"TrackedObjects","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types_1","title":"Input Types","text":"Name Type Description autoware_auto_perception_msgs::msg::TrackedObjects
tracking result array"},{"location":"common/autoware_auto_perception_rviz_plugin/#visualization-result_1","title":"Visualization Result","text":"Overwrite tracking results with detection results.
"},{"location":"common/autoware_auto_perception_rviz_plugin/#predictedobjects","title":"PredictedObjects","text":""},{"location":"common/autoware_auto_perception_rviz_plugin/#input-types_2","title":"Input Types","text":"Name Type Descriptionautoware_auto_perception_msgs::msg::PredictedObjects
prediction result array"},{"location":"common/autoware_auto_perception_rviz_plugin/#visualization-result_2","title":"Visualization Result","text":"Overwrite prediction results with tracking results.
"},{"location":"common/autoware_auto_perception_rviz_plugin/#referencesexternal-links","title":"References/External links","text":"[1] https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/tree/master/src/tools/visualization/autoware_rviz_plugins
"},{"location":"common/autoware_auto_perception_rviz_plugin/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/","title":"autoware_auto_tf2","text":""},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#autoware_auto_tf2","title":"autoware_auto_tf2","text":"This is the design document for the autoware_auto_tf2
package.
In general, users of ROS rely on tf (and its successor, tf2) for publishing and utilizing coordinate frame transforms. This is true even to the extent that the tf2 contains the packages tf2_geometry_msgs
and tf2_sensor_msgs
which allow for easy conversion to and from the message types defined in geometry_msgs
and sensor_msgs
, respectively. However, AutowareAuto contains some specialized message types which are not transformable between frames using the ROS 2 library. The autoware_auto_tf2
package aims to provide developers with tools to transform applicable autoware_auto_msgs
types. In addition to this, this package also provides transform tools for messages types in geometry_msgs
missing in tf2_geometry_msgs
.
While writing tf2_some_msgs
or contributing to tf2_geometry_msgs
, compatibility and design intent was ensured with the following files in the existing tf2 framework:
tf2/convert.h
tf2_ros/buffer_interface.h
For example:
void tf2::convert( const A & a,B & b)\n
The method tf2::convert
is dependent on the following:
template<typename A, typename B>\nB tf2::toMsg(const A& a);\ntemplate<typename A, typename B>\nvoid tf2::fromMsg(const A&, B& b);\n\n// New way to transform instead of using tf2::doTransform() directly\ntf2_ros::BufferInterface::transform(...)\n
Which, in turn, is dependent on the following:
void tf2::convert( const A & a,B & b)\nconst std::string& tf2::getFrameId(const T& t)\nconst ros::Time& tf2::getTimestamp(const T& t);\n
"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#current-implementation-of-tf2_geometry_msgs","title":"Current Implementation of tf2_geometry_msgs","text":"In both ROS 1 and ROS 2 stamped msgs like Vector3Stamped
, QuaternionStamped
have associated functions like:
getTimestamp
getFrameId
doTransform
toMsg
fromMsg
In ROS 1, to support tf2::convert
and need in doTransform
of the stamped data, non-stamped underlying data like Vector3
, Point
, have implementations of the following functions:
toMsg
fromMsg
In ROS 2, much of the doTransform
method is not using toMsg
and fromMsg
as data types from tf2 are not used. Instead doTransform
is done using KDL
, thus functions relating to underlying data were not added; such as Vector3
, Point
, or ported in this commit ros/geometry2/commit/6f2a82. The non-stamped data with toMsg
and fromMsg
are Quaternion
, Transform
. Pose
has the modified toMsg
and not used by PoseStamped
.
The initial rough plan was to implement some of the common tf2 functions like toMsg
, fromMsg
, and doTransform
, as needed for all the underlying data types in BoundingBoxArray
. Examples of the data types include: BoundingBox
, Quaternion32
, and Point32
. In addition, the implementation should be done such that upstream contributions could also be made to geometry_msgs
.
Due to conflicts in function signatures, the predefined template of convert.h
/ transform_functions.h
is not followed and compatibility with tf2::convert(..)
is broken and toMsg
is written differently.
// Old style\ngeometry_msgs::Vector3 toMsg(const tf2::Vector3& in)\ngeometry_msgs::Point& toMsg(const tf2::Vector3& in)\n\n// New style\ngeometry_msgs::Point& toMsg(const tf2::Vector3& in, geometry_msgs::Point& out)\n
"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"The library provides API doTransform
for the following data-types that are either not available in tf2_geometry_msgs
or the messages types are part of autoware_auto_msgs
and are therefore custom and not inherently supported by any of the tf2 libraries. The following APIs are provided for the following data types:
Point32
inline void doTransform(\nconst geometry_msgs::msg::Point32 & t_in,\ngeometry_msgs::msg::Point32 & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
Quaternion32
(autoware_auto_msgs
)inline void doTransform(\nconst autoware_auto_geometry_msgs::msg::Quaternion32 & t_in,\nautoware_auto_geometry_msgs::msg::Quaternion32 & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
BoundingBox
(autoware_auto_msgs
)inline void doTransform(\nconst BoundingBox & t_in, BoundingBox & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
BoundingBoxArray
inline void doTransform(\nconst BoundingBoxArray & t_in,\nBoundingBoxArray & t_out,\nconst geometry_msgs::msg::TransformStamped & transform)\n
In addition, the following helper methods are also added:
BoundingBoxArray
inline tf2::TimePoint getTimestamp(const BoundingBoxArray & t)\n\ninline std::string getFrameId(const BoundingBoxArray & t)\n
"},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"common/autoware_auto_tf2/design/autoware-auto-tf2-design/#challenges","title":"Challenges","text":"tf2_geometry_msgs
does not implement doTransform
for any non-stamped data types, but it is possible with the same function template. It is needed when transforming sub-data, with main data that does have a stamp and can call doTransform on the sub-data with the same transform. Is this a useful upstream contribution?tf2_geometry_msgs
does not have Point
, Point32
, does not seem it needs one, also the implementation of non-standard toMsg
would not help the convert.BoundingBox
uses 32-bit float like Quaternion32
and Point32
to save space, as they are used repeatedly in BoundingBoxArray
. While transforming is it better to convert to 64-bit Quaternion
, Point
, or PoseStamped
, to re-use existing implementation of doTransform
, or does it need to be implemented? It may not be simple to template.This is the design document for the autoware_testing
package.
The package aims to provide a unified way to add standard testing functionality to the package, currently supporting:
add_smoke_test
): launch a node with default configuration and ensure that it starts up and does not crash.Uses ros_testing
(which is an extension of launch_testing
) and provides some parametrized, reusable standard tests to run.
Parametrization is limited to package, executable names, parameters filename and executable arguments. Test namespace is set as 'test'. Parameters file for the package is expected to be in param
directory inside package.
To add a smoke test to your package tests, add test dependency on autoware_testing
to package.xml
<test_depend>autoware_testing</test_depend>\n
and add the following two lines to CMakeLists.txt
in the IF (BUILD_TESTING)
section:
find_package(autoware_testing REQUIRED)\nadd_smoke_test(<package_name> <executable_name> [PARAM_FILENAME <param_filename>] [EXECUTABLE_ARGUMENTS <arguments>])\n
Where
<package_name>
- [required] tested node package name.
<executable_name>
- [required] tested node executable name.
<param_filename>
- [optional] param filename. Default value is test.param.yaml
. Required mostly in situation where there are multiple smoke tests in a package and each requires different parameters set
<arguments>
- [optional] arguments passed to executable. By default no arguments are passed.
which adds <executable_name>_smoke_test
test to suite.
Example test result:
build/<package_name>/test_results/<package_name>/<executable_name>_smoke_test.xunit.xml: 1 test, 0 errors, 0 failures, 0 skipped\n
"},{"location":"common/autoware_testing/design/autoware_testing-design/#references-external-links","title":"References / External links","text":"autoware_testing
Plugin for displaying 2D overlays over the RViz2 3D scene.
Based on the jsk_visualization package, under the 3-Clause BSD license.
"},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#purpose","title":"Purpose","text":"This plugin provides a visual and easy-to-understand display of vehicle speed, turn signal, steering status and gears.
"},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#input","title":"Input","text":"Name Type Description/vehicle/status/velocity_status
autoware_auto_vehicle_msgs::msg::VelocityReport
The topic is vehicle twist /vehicle/status/turn_indicators_status
autoware_auto_vehicle_msgs::msg::TurnIndicatorsReport
The topic is status of turn signal /vehicle/status/hazard_status
autoware_auto_vehicle_msgs::msg::HazardReport
The topic is status of hazard /vehicle/status/steering_status
autoware_auto_vehicle_msgs::msg::SteeringReport
The topic is status of steering /vehicle/status/gear_status
autoware_auto_vehicle_msgs::msg::GearReport
The topic is status of gear"},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#parameter","title":"Parameter","text":""},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#core-parameters","title":"Core Parameters","text":""},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#signaldisplay","title":"SignalDisplay","text":"Name Type Default Value Description property_width_
int 128 Width of the plotter window [px] property_height_
int 128 Height of the plotter window [px] property_left_
int 128 Left of the plotter window [px] property_top_
int 128 Top of the plotter window [px] property_signal_color_
QColor QColor(25, 255, 240) Turn Signal color"},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/awf_vehicle_rviz_plugin/awf_2d_overlay_vehicle/#usage","title":"Usage","text":"Start rviz and select Add under the Displays panel.
Select any one of the tier4_vehicle_rviz_plugin and press OK.
Enter the name of the topic where you want to view the status.
This plugin allows publishing and controlling the ros bag time.
"},{"location":"common/bag_time_manager_rviz_plugin/#output","title":"Output","text":"tbd.
"},{"location":"common/bag_time_manager_rviz_plugin/#howtouse","title":"HowToUse","text":"Start rviz and select panels/Add new panel.
Select BagTimeManagerPanel and press OK.
See bag_time_manager_rviz_plugin/BagTimeManagerPanel is added.
This package is a specification of component interfaces.
"},{"location":"common/component_interface_tools/","title":"component_interface_tools","text":""},{"location":"common/component_interface_tools/#component_interface_tools","title":"component_interface_tools","text":"This package provides the following tools for component interface.
"},{"location":"common/component_interface_tools/#service_log_checker","title":"service_log_checker","text":"Monitor the service log of component_interface_utils and display if the response status is an error.
"},{"location":"common/component_interface_utils/","title":"component_interface_utils","text":""},{"location":"common/component_interface_utils/#component_interface_utils","title":"component_interface_utils","text":""},{"location":"common/component_interface_utils/#features","title":"Features","text":"This is a utility package that provides the following features:
This package provides the wrappers for the interface classes of rclcpp. The wrappers limit the usage of the original class to enforce the processing recommended by the component interface. Do not inherit the class of rclcpp, and forward or wrap the member function that is allowed to be used.
"},{"location":"common/component_interface_utils/#instantiation-of-the-wrapper-class","title":"Instantiation of the wrapper class","text":"The wrapper class requires interface information in this format.
struct SampleService\n{\nusing Service = sample_msgs::srv::ServiceType;\nstatic constexpr char name[] = \"/sample/service\";\n};\n\nstruct SampleMessage\n{\nusing Message = sample_msgs::msg::MessageType;\nstatic constexpr char name[] = \"/sample/message\";\nstatic constexpr size_t depth = 1;\nstatic constexpr auto reliability = RMW_QOS_POLICY_RELIABILITY_RELIABLE;\nstatic constexpr auto durability = RMW_QOS_POLICY_DURABILITY_TRANSIENT_LOCAL;\n};\n
Create the wrapper using the above definition as follows.
// header file\ncomponent_interface_utils::Service<SampleService>::SharedPtr srv_;\ncomponent_interface_utils::Client<SampleService>::SharedPtr cli_;\ncomponent_interface_utils::Publisher<SampleMessage>::SharedPtr pub_;\ncomponent_interface_utils::Subscription<SampleMessage>::SharedPtr sub_;\n\n// source file\nconst auto node = component_interface_utils::NodeAdaptor(this);\nnode.init_srv(srv_, callback);\nnode.init_cli(cli_);\nnode.init_pub(pub_);\nnode.init_sub(sub_, callback);\n
"},{"location":"common/component_interface_utils/#logging-for-service-and-client","title":"Logging for service and client","text":"If the wrapper class is used, logging is automatically enabled. The log level is RCLCPP_INFO
.
If the wrapper class is used and the service response has status, throwing ServiceException
will automatically catch and set it to status. This is useful when returning an error from a function called from the service callback.
void service_callback(Request req, Response res)\n{\nfunction();\nres->status.success = true;\n}\n\nvoid function()\n{\nthrow ServiceException(ERROR_CODE, \"message\");\n}\n
If the wrapper class is not used or the service response has no status, manually catch the ServiceException
as follows.
void service_callback(Request req, Response res)\n{\ntry {\nfunction();\nres->status.success = true;\n} catch (const ServiceException & error) {\nres->status = error.status();\n}\n}\n
"},{"location":"common/component_interface_utils/#relays-for-topic-and-service","title":"Relays for topic and service","text":"There are utilities for relaying services and messages of the same type.
const auto node = component_interface_utils::NodeAdaptor(this);\nservice_callback_group_ = create_callback_group(rclcpp::CallbackGroupType::MutuallyExclusive);\nnode.relay_message(pub_, sub_);\nnode.relay_service(cli_, srv_, service_callback_group_); // group is for avoiding deadlocks\n
"},{"location":"common/cuda_utils/","title":"cuda_utils","text":""},{"location":"common/cuda_utils/#cuda_utils","title":"cuda_utils","text":""},{"location":"common/cuda_utils/#purpose","title":"Purpose","text":"This package contains a library of common functions related to CUDA.
"},{"location":"common/fake_test_node/design/fake_test_node-design/","title":"Fake Test Node","text":""},{"location":"common/fake_test_node/design/fake_test_node-design/#fake-test-node","title":"Fake Test Node","text":""},{"location":"common/fake_test_node/design/fake_test_node-design/#what-this-package-provides","title":"What this package provides","text":"When writing an integration test for a node in C++ using GTest, there is quite some boilerplate code that needs to be written to set up a fake node that would publish expected messages on an expected topic and subscribes to messages on some other topic. This is usually implemented as a custom GTest fixture.
This package contains a library that introduces two utility classes that can be used in place of custom fixtures described above to write integration tests for a node:
autoware::tools::testing::FakeTestNode
- to use as a custom test fixture with TEST_F
testsautoware::tools::testing::FakeTestNodeParametrized
- to use a custom test fixture with the parametrized TEST_P
tests (accepts a template parameter that gets forwarded to testing::TestWithParam<T>
)These fixtures take care of initializing and re-initializing rclcpp as well as of checking that all subscribers and publishers have a match, thus reducing the amount of boilerplate code that the user needs to write.
"},{"location":"common/fake_test_node/design/fake_test_node-design/#how-to-use-this-library","title":"How to use this library","text":"After including the relevant header the user can use a typedef to use a custom fixture name and use the provided classes as fixtures in TEST_F
and TEST_P
tests directly.
Let's say there is a node NodeUnderTest
that requires testing. It just subscribes to std_msgs::msg::Int32
messages and publishes a std_msgs::msg::Bool
to indicate that the input is positive. To test such a node the following code can be used utilizing the autoware::tools::testing::FakeTestNode
:
using FakeNodeFixture = autoware::tools::testing::FakeTestNode;\n\n/// @test Test that we can use a non-parametrized test.\nTEST_F(FakeNodeFixture, Test) {\nInt32 msg{};\nmsg.data = 15;\nconst auto node = std::make_shared<NodeUnderTest>();\n\nBool::SharedPtr last_received_msg{};\nauto fake_odom_publisher = create_publisher<Int32>(\"/input_topic\");\nauto result_odom_subscription = create_subscription<Bool>(\"/output_topic\", *node,\n[&last_received_msg](const Bool::SharedPtr msg) {last_received_msg = msg;});\n\nconst auto dt{std::chrono::milliseconds{100LL}};\nconst auto max_wait_time{std::chrono::seconds{10LL}};\nauto time_passed{std::chrono::milliseconds{0LL}};\nwhile (!last_received_msg) {\nfake_odom_publisher->publish(msg);\nrclcpp::spin_some(node);\nrclcpp::spin_some(get_fake_node());\nstd::this_thread::sleep_for(dt);\ntime_passed += dt;\nif (time_passed > max_wait_time) {\nFAIL() << \"Did not receive a message soon enough.\";\n}\n}\nEXPECT_TRUE(last_received_msg->data);\nSUCCEED();\n}\n
Here only the TEST_F
example is shown but a TEST_P
usage is very similar with a little bit more boilerplate to set up all the parameter values, see test_fake_test_node.cpp
for an example usage.
This package contains geography-related functions used by other packages, so please refer to them as needed.
"},{"location":"common/global_parameter_loader/Readme/","title":"Autoware Global Parameter Loader","text":""},{"location":"common/global_parameter_loader/Readme/#autoware-global-parameter-loader","title":"Autoware Global Parameter Loader","text":"This package is to set common ROS parameters to each node.
"},{"location":"common/global_parameter_loader/Readme/#usage","title":"Usage","text":"Add the following lines to the launch file of the node in which you want to get global parameters.
<!-- Global parameters -->\n<include file=\"$(find-pkg-share global_parameter_loader)/launch/global_params.launch.py\">\n<arg name=\"vehicle_model\" value=\"$(var vehicle_model)\"/>\n</include>\n
The vehicle model parameter is read from config/vehicle_info.param.yaml
in vehicle_model
_description package.
Currently only vehicle_info is loaded by this launcher.
"},{"location":"common/glog_component/","title":"glog_component","text":""},{"location":"common/glog_component/#glog_component","title":"glog_component","text":"This package provides the glog (google logging library) feature as a ros2 component library. This is used to dynamically load the glog feature with container.
See the glog github for the details of its features.
"},{"location":"common/glog_component/#example","title":"Example","text":"When you load the glog_component
in container, the launch file can be like below:
glog_component = ComposableNode(\n package=\"glog_component\",\n plugin=\"GlogComponent\",\n name=\"glog_component\",\n)\n\ncontainer = ComposableNodeContainer(\n name=\"my_container\",\n namespace=\"\",\n package=\"rclcpp_components\",\n executable=LaunchConfiguration(\"container_executable\"),\n composable_node_descriptions=[\n component1,\n component2,\n glog_component,\n ],\n)\n
"},{"location":"common/goal_distance_calculator/Readme/","title":"goal_distance_calculator","text":""},{"location":"common/goal_distance_calculator/Readme/#goal_distance_calculator","title":"goal_distance_calculator","text":""},{"location":"common/goal_distance_calculator/Readme/#purpose","title":"Purpose","text":"This node publishes deviation of self-pose from goal pose.
"},{"location":"common/goal_distance_calculator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/goal_distance_calculator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/goal_distance_calculator/Readme/#input","title":"Input","text":"Name Type Description/planning/mission_planning/route
autoware_auto_planning_msgs::msg::Route
Used to get goal pose /tf
tf2_msgs/TFMessage
TF (self-pose)"},{"location":"common/goal_distance_calculator/Readme/#output","title":"Output","text":"Name Type Description deviation/lateral
tier4_debug_msgs::msg::Float64Stamped
publish lateral deviation of self-pose from goal pose[m] deviation/longitudinal
tier4_debug_msgs::msg::Float64Stamped
publish longitudinal deviation of self-pose from goal pose[m] deviation/yaw
tier4_debug_msgs::msg::Float64Stamped
publish yaw deviation of self-pose from goal pose[rad] deviation/yaw_deg
tier4_debug_msgs::msg::Float64Stamped
publish yaw deviation of self-pose from goal pose[deg]"},{"location":"common/goal_distance_calculator/Readme/#parameters","title":"Parameters","text":""},{"location":"common/goal_distance_calculator/Readme/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate
double 10.0 Timer callback period. [Hz]"},{"location":"common/goal_distance_calculator/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation oneshot
bool true publish deviations just once or repeatedly"},{"location":"common/goal_distance_calculator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/grid_map_utils/","title":"Grid Map Utils","text":""},{"location":"common/grid_map_utils/#grid-map-utils","title":"Grid Map Utils","text":""},{"location":"common/grid_map_utils/#overview","title":"Overview","text":"This packages contains a re-implementation of the grid_map::PolygonIterator
used to iterate over all cells of a grid map contained inside some polygon.
This implementation uses the scan line algorithm, a common algorithm used to draw polygons on a rasterized image. The main idea of the algorithm adapted to a grid map is as follow:
(row, column)
indexes are inside of the polygon.More details on the scan line algorithm can be found in the References.
"},{"location":"common/grid_map_utils/#api","title":"API","text":"The grid_map_utils::PolygonIterator
follows the same API as the original grid_map::PolygonIterator
.
The behavior of the grid_map_utils::PolygonIterator
is only guaranteed to match the grid_map::PolygonIterator
if edges of the polygon do not exactly cross any cell center. In such a case, whether the crossed cell is considered inside or outside of the polygon can vary due to floating precision error.
Benchmarking code is implemented in test/benchmarking.cpp
and is also used to validate that the grid_map_utils::PolygonIterator
behaves exactly like the grid_map::PolygonIterator
.
The following figure shows a comparison of the runtime between the implementation of this package (grid_map_utils
) and the original implementation (grid_map
). The time measured includes the construction of the iterator and the iteration over all indexes and is shown using a logarithmic scale. Results were obtained varying the side size of a square grid map with 100 <= n <= 1000
(size=n
means a grid of n x n
cells), random polygons with a number of vertices 3 <= m <= 100
and with each parameter (n,m)
repeated 10 times.
There exists variations of the scan line algorithm for multiple polygons. These can be implemented if we want to iterate over the cells contained in at least one of multiple polygons.
The current implementation imitate the behavior of the original grid_map::PolygonIterator
where a cell is selected if its center position is inside the polygon. This behavior could be changed for example to only return all cells overlapped by the polygon.
This package supplies linear and spline interpolation functions.
"},{"location":"common/interpolation/#linear-interpolation","title":"Linear Interpolation","text":"lerp(src_val, dst_val, ratio)
(for scalar interpolation) interpolates src_val
and dst_val
with ratio
. This will be replaced with std::lerp(src_val, dst_val, ratio)
in C++20
.
lerp(base_keys, base_values, query_keys)
(for vector interpolation) applies linear regression to each two continuous points whose x values arebase_keys
and whose y values are base_values
. Then it calculates interpolated values on y-axis for query_keys
on x-axis.
spline(base_keys, base_values, query_keys)
(for vector interpolation) applies spline regression to each two continuous points whose x values arebase_keys
and whose y values are base_values
. Then it calculates interpolated values on y-axis for query_keys
on x-axis.
We evaluated calculation cost of spline interpolation for 100 points, and adopted the best one which is tridiagonal matrix algorithm. Methods except for tridiagonal matrix algorithm exists in spline_interpolation
package, which has been removed from Autoware.
Assuming that the size of base_keys
(\\(x_i\\)) and base_values
(\\(y_i\\)) are \\(N + 1\\), we aim to calculate spline interpolation with the following equation to interpolate between \\(y_i\\) and \\(y_{i+1}\\).
Constraints on spline interpolation are as follows. The number of constraints is \\(4N\\), which is equal to the number of variables of spline interpolation.
\\[ \\begin{align} Y_i (x_i) & = y_i \\ \\ \\ (i = 0, \\dots, N-1) \\\\ Y_i (x_{i+1}) & = y_{i+1} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ Y'_i (x_{i+1}) & = Y'_{i+1} (x_{i+1}) \\ \\ \\ (i = 0, \\dots, N-2) \\\\ Y''_i (x_{i+1}) & = Y''_{i+1} (x_{i+1}) \\ \\ \\ (i = 0, \\dots, N-2) \\\\ Y''_0 (x_0) & = 0 \\\\ Y''_{N-1} (x_N) & = 0 \\end{align} \\]According to this article, spline interpolation is formulated as the following linear equation.
\\[ \\begin{align} \\begin{pmatrix} 2(h_0 + h_1) & h_1 \\\\ h_0 & 2 (h_1 + h_2) & h_2 & & O \\\\ & & & \\ddots \\\\ O & & & & h_{N-2} & 2 (h_{N-2} + h_{N-1}) \\end{pmatrix} \\begin{pmatrix} v_1 \\\\ v_2 \\\\ v_3 \\\\ \\vdots \\\\ v_{N-1} \\end{pmatrix}= \\begin{pmatrix} w_1 \\\\ w_2 \\\\ w_3 \\\\ \\vdots \\\\ w_{N-1} \\end{pmatrix} \\end{align} \\]where
\\[ \\begin{align} h_i & = x_{i+1} - x_i \\ \\ \\ (i = 0, \\dots, N-1) \\\\ w_i & = 6 \\left(\\frac{y_{i+1} - y_{i+1}}{h_i} - \\frac{y_i - y_{i-1}}{h_{i-1}}\\right) \\ \\ \\ (i = 1, \\dots, N-1) \\end{align} \\]The coefficient matrix of this linear equation is tridiagonal matrix. Therefore, it can be solve with tridiagonal matrix algorithm, which can solve linear equations without gradient descent methods.
Solving this linear equation with tridiagonal matrix algorithm, we can calculate coefficients of spline interpolation as follows.
\\[ \\begin{align} a_i & = \\frac{v_{i+1} - v_i}{6 (x_{i+1} - x_i)} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ b_i & = \\frac{v_i}{2} \\ \\ \\ (i = 0, \\dots, N-1) \\\\ c_i & = \\frac{y_{i+1} - y_i}{x_{i+1} - x_i} - \\frac{1}{6}(x_{i+1} - x_i)(2 v_i + v_{i+1}) \\ \\ \\ (i = 0, \\dots, N-1) \\\\ d_i & = y_i \\ \\ \\ (i = 0, \\dots, N-1) \\end{align} \\]"},{"location":"common/interpolation/#tridiagonal-matrix-algorithm","title":"Tridiagonal Matrix Algorithm","text":"We solve tridiagonal linear equation according to this article where variables of linear equation are expressed as follows in the implementation.
\\[ \\begin{align} \\begin{pmatrix} b_0 & c_0 & & \\\\ a_0 & b_1 & c_2 & O \\\\ & & \\ddots \\\\ O & & a_{N-2} & b_{N-1} \\end{pmatrix} x = \\begin{pmatrix} d_0 \\\\ d_2 \\\\ d_3 \\\\ \\vdots \\\\ d_{N-1} \\end{pmatrix} \\end{align} \\]"},{"location":"common/kalman_filter/","title":"kalman_filter","text":""},{"location":"common/kalman_filter/#kalman_filter","title":"kalman_filter","text":""},{"location":"common/kalman_filter/#purpose","title":"Purpose","text":"This common package contains the kalman filter with time delay and the calculation of the kalman filter.
"},{"location":"common/kalman_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/motion_utils/","title":"Motion Utils package","text":""},{"location":"common/motion_utils/#motion-utils-package","title":"Motion Utils package","text":""},{"location":"common/motion_utils/#definition-of-terms","title":"Definition of terms","text":""},{"location":"common/motion_utils/#segment","title":"Segment","text":"Segment
in Autoware is the line segment between two successive points as follows.
The nearest segment index and nearest point index to a certain position is not always th same. Therefore, we prepare two different utility functions to calculate a nearest index for points and segments.
"},{"location":"common/motion_utils/#nearest-index-search","title":"Nearest index search","text":"In this section, the nearest index and nearest segment index search is explained.
We have the same functions for the nearest index search and nearest segment index search. Taking for the example the nearest index search, we have two types of functions.
The first function finds the nearest index with distance and yaw thresholds.
template <class T>\nsize_t findFirstNearestIndexWithSoftConstraints(\nconst T & points, const geometry_msgs::msg::Pose & pose,\nconst double dist_threshold = std::numeric_limits<double>::max(),\nconst double yaw_threshold = std::numeric_limits<double>::max());\n
This function finds the first local solution within thresholds. The reason to find the first local one is to deal with some edge cases explained in the next subsection.
There are default parameters for thresholds arguments so that you can decide which thresholds to pass to the function.
The second function finds the nearest index in the lane whose id is lane_id
.
size_t findNearestIndexFromLaneId(\nconst autoware_auto_planning_msgs::msg::PathWithLaneId & path,\nconst geometry_msgs::msg::Point & pos, const int64_t lane_id);\n
"},{"location":"common/motion_utils/#application-to-various-object","title":"Application to various object","text":"Many node packages often calculate the nearest index of objects. We will explain the recommended method to calculate it.
"},{"location":"common/motion_utils/#nearest-index-for-the-ego","title":"Nearest index for the ego","text":"Assuming that the path length before the ego is short enough, we expect to find the correct nearest index in the following edge cases by findFirstNearestIndexWithSoftConstraints
with both distance and yaw thresholds. Blue circles describes the distance threshold from the base link position and two blue lines describe the yaw threshold against the base link orientation. Among points in these cases, the correct nearest point which is red can be found.
Therefore, the implementation is as follows.
const size_t ego_nearest_idx = findFirstNearestIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\nconst size_t ego_nearest_seg_idx = findFirstNearestIndexWithSoftConstraints(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\n
"},{"location":"common/motion_utils/#nearest-index-for-dynamic-objects","title":"Nearest index for dynamic objects","text":"For the ego nearest index, the orientation is considered in addition to the position since the ego is supposed to follow the points. However, for the dynamic objects (e.g., predicted object), sometimes its orientation may be different from the points order, e.g. the dynamic object driving backward although the ego is driving forward.
Therefore, the yaw threshold should not be considered for the dynamic object. The implementation is as follows.
const size_t dynamic_obj_nearest_idx = findFirstNearestIndexWithSoftConstraints(points, dynamic_obj_pose, dynamic_obj_nearest_dist_threshold);\nconst size_t dynamic_obj_nearest_seg_idx = findFirstNearestIndexWithSoftConstraints(points, dynamic_obj_pose, dynamic_obj_nearest_dist_threshold);\n
"},{"location":"common/motion_utils/#nearest-index-for-traffic-objects","title":"Nearest index for traffic objects","text":"In lanelet maps, traffic objects belong to the specific lane. With this specific lane's id, the correct nearest index can be found.
The implementation is as follows.
// first extract `lane_id` which the traffic object belong to.\nconst size_t traffic_obj_nearest_idx = findNearestIndexFromLaneId(path_with_lane_id, traffic_obj_pos, lane_id);\nconst size_t traffic_obj_nearest_seg_idx = findNearestSegmentIndexFromLaneId(path_with_lane_id, traffic_obj_pos, lane_id);\n
"},{"location":"common/motion_utils/#pathtrajectory-length-calculation-between-designated-points","title":"Path/Trajectory length calculation between designated points","text":"Based on the discussion so far, the nearest index search algorithm is different depending on the object type. Therefore, we recommended using the wrapper utility functions which require the nearest index search (e.g., calculating the path length) with each nearest index search.
For example, when we want to calculate the path length between the ego and the dynamic object, the implementation is as follows.
const size_t ego_nearest_seg_idx = findFirstNearestSegmentIndex(points, ego_pose, ego_nearest_dist_threshold, ego_nearest_yaw_threshold);\nconst size_t dyn_obj_nearest_seg_idx = findFirstNearestSegmentIndex(points, dyn_obj_pose, dyn_obj_nearest_dist_threshold);\nconst double length_from_ego_to_obj = calcSignedArcLength(points, ego_pose, ego_nearest_seg_idx, dyn_obj_pose, dyn_obj_nearest_seg_idx);\n
"},{"location":"common/motion_utils/#for-developers","title":"For developers","text":"Some of the template functions in trajectory.hpp
are mostly used for specific types (autoware_auto_planning_msgs::msg::PathPoint
, autoware_auto_planning_msgs::msg::PathPoint
, autoware_auto_planning_msgs::msg::TrajectoryPoint
), so they are exported as extern template
functions to speed-up compilation time.
motion_utils.hpp
header file was removed because the source files that directly/indirectly include this file took a long time for preprocessing.
Vehicle utils provides a convenient library used to check vehicle status.
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#feature","title":"Feature","text":"The library contains following classes.
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#vehicle_stop_checker","title":"vehicle_stop_checker","text":"This class check whether the vehicle is stopped or not based on localization result.
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Description/localization/kinematic_state
nav_msgs::msg::Odometry
vehicle odometry"},{"location":"common/motion_utils/docs/vehicle/vehicle/#parameters","title":"Parameters","text":"Name Type Default Value Explanation velocity_buffer_time_sec
double 10.0 odometry buffering time [s]"},{"location":"common/motion_utils/docs/vehicle/vehicle/#member-functions","title":"Member functions","text":"bool isVehicleStopped(const double stop_duration)\n
true
if the vehicle is stopped, even if system outputs a non-zero target velocity.Necessary includes:
#include <tier4_autoware_utils/vehicle/vehicle_state_checker.hpp>\n
1.Create a checker instance.
class SampleNode : public rclcpp::Node\n{\npublic:\nSampleNode() : Node(\"sample_node\")\n{\nvehicle_stop_checker_ = std::make_unique<VehicleStopChecker>(this);\n}\n\nstd::unique_ptr<VehicleStopChecker> vehicle_stop_checker_;\n\nbool sampleFunc();\n\n...\n}\n
2.Check the vehicle state.
bool SampleNode::sampleFunc()\n{\n...\n\nconst auto result_1 = vehicle_stop_checker_->isVehicleStopped();\n\n...\n\nconst auto result_2 = vehicle_stop_checker_->isVehicleStopped(3.0);\n\n...\n}\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#vehicle_arrival_checker","title":"vehicle_arrival_checker","text":"This class check whether the vehicle arrive at stop point based on localization and planning result.
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#subscribed-topics_1","title":"Subscribed Topics","text":"Name Type Description/localization/kinematic_state
nav_msgs::msg::Odometry
vehicle odometry /planning/scenario_planning/trajectory
autoware_auto_planning_msgs::msg::Trajectory
trajectory"},{"location":"common/motion_utils/docs/vehicle/vehicle/#parameters_1","title":"Parameters","text":"Name Type Default Value Explanation velocity_buffer_time_sec
double 10.0 odometry buffering time [s] th_arrived_distance_m
double 1.0 threshold distance to check if vehicle has arrived at target point [m]"},{"location":"common/motion_utils/docs/vehicle/vehicle/#member-functions_1","title":"Member functions","text":"bool isVehicleStopped(const double stop_duration)\n
true
if the vehicle is stopped, even if system outputs a non-zero target velocity.bool isVehicleStoppedAtStopPoint(const double stop_duration)\n
true
if the vehicle is not only stopped but also arrived at stop point.Necessary includes:
#include <tier4_autoware_utils/vehicle/vehicle_state_checker.hpp>\n
1.Create a checker instance.
class SampleNode : public rclcpp::Node\n{\npublic:\nSampleNode() : Node(\"sample_node\")\n{\nvehicle_arrival_checker_ = std::make_unique<VehicleArrivalChecker>(this);\n}\n\nstd::unique_ptr<VehicleArrivalChecker> vehicle_arrival_checker_;\n\nbool sampleFunc();\n\n...\n}\n
2.Check the vehicle state.
bool SampleNode::sampleFunc()\n{\n...\n\nconst auto result_1 = vehicle_arrival_checker_->isVehicleStopped();\n\n...\n\nconst auto result_2 = vehicle_arrival_checker_->isVehicleStopped(3.0);\n\n...\n\nconst auto result_3 = vehicle_arrival_checker_->isVehicleStoppedAtStopPoint();\n\n...\n\nconst auto result_4 = vehicle_arrival_checker_->isVehicleStoppedAtStopPoint(3.0);\n\n...\n}\n
"},{"location":"common/motion_utils/docs/vehicle/vehicle/#assumptions-known-limits","title":"Assumptions / Known limits","text":"vehicle_stop_checker
and vehicle_arrival_checker
cannot check whether the vehicle is stopped more than velocity_buffer_time_sec
second.
This package contains a library of common functions that are useful across the object recognition module. This package may include functions for converting between different data types, msg types, and performing common operations on them.
"},{"location":"common/osqp_interface/design/osqp_interface-design/","title":"Interface for the OSQP library","text":""},{"location":"common/osqp_interface/design/osqp_interface-design/#interface-for-the-osqp-library","title":"Interface for the OSQP library","text":"This is the design document for the osqp_interface
package.
This packages provides a C++ interface for the OSQP library.
"},{"location":"common/osqp_interface/design/osqp_interface-design/#design","title":"Design","text":"The class OSQPInterface
takes a problem formulation as Eigen matrices and vectors, converts these objects into C-style Compressed-Column-Sparse matrices and dynamic arrays, loads the data into the OSQP workspace dataholder, and runs the optimizer.
The interface can be used in several ways:
Initialize the interface WITHOUT data. Load the problem formulation at the optimization call.
osqp_interface = OSQPInterface();\nosqp_interface.optimize(P, A, q, l, u);\n
Initialize the interface WITH data.
osqp_interface = OSQPInterface(P, A, q, l, u);\nosqp_interface.optimize();\n
WARM START OPTIMIZATION by modifying the problem formulation between optimization runs.
osqp_interface = OSQPInterface(P, A, q, l, u);\nosqp_interface.optimize();\nosqp.initializeProblem(P_new, A_new, q_new, l_new, u_new);\nosqp_interface.optimize();\n
The optimization results are returned as a vector by the optimization function.
std::tuple<std::vector<double>, std::vector<double>> result = osqp_interface.optimize();\nstd::vector<double> param = std::get<0>(result);\ndouble x_0 = param[0];\ndouble x_1 = param[1];\n
This node publishes a distance from the closest path point from the self-position to the end point of the path. Note that the distance means the arc-length along the path, not the Euclidean distance between the two points.
"},{"location":"common/path_distance_calculator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/path_distance_calculator/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/path_distance_calculator/Readme/#input","title":"Input","text":"Name Type Description/planning/scenario_planning/lane_driving/behavior_planning/path
autoware_auto_planning_msgs::msg::Path
Reference path /tf
tf2_msgs/TFMessage
TF (self-pose)"},{"location":"common/path_distance_calculator/Readme/#output","title":"Output","text":"Name Type Description ~/distance
tier4_debug_msgs::msg::Float64Stamped
Publish a distance from the closest path point from the self-position to the end point of the path[m]"},{"location":"common/path_distance_calculator/Readme/#parameters","title":"Parameters","text":""},{"location":"common/path_distance_calculator/Readme/#node-parameters","title":"Node Parameters","text":"None.
"},{"location":"common/path_distance_calculator/Readme/#core-parameters","title":"Core Parameters","text":"None.
"},{"location":"common/path_distance_calculator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/perception_utils/","title":"perception_utils","text":""},{"location":"common/perception_utils/#perception_utils","title":"perception_utils","text":""},{"location":"common/perception_utils/#purpose","title":"Purpose","text":"This package contains a library of common functions that are useful across the perception module.
"},{"location":"common/polar_grid/Readme/","title":"Polar Grid","text":""},{"location":"common/polar_grid/Readme/#polar-grid","title":"Polar Grid","text":""},{"location":"common/polar_grid/Readme/#purpose","title":"Purpose","text":"This plugin displays polar grid around ego vehicle in Rviz.
"},{"location":"common/polar_grid/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value ExplanationMax Range
float 200.0f max range for polar grid. [m] Wave Velocity
float 100.0f wave ring velocity. [m/s] Delta Range
float 10.0f wave ring distance for polar grid. [m]"},{"location":"common/polar_grid/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/qp_interface/design/qp_interface-design/","title":"Interface for QP solvers","text":""},{"location":"common/qp_interface/design/qp_interface-design/#interface-for-qp-solvers","title":"Interface for QP solvers","text":"This is the design document for the qp_interface
package.
This packages provides a C++ interface for QP solvers. Currently, supported QP solvers are
The class QPInterface
takes a problem formulation as Eigen matrices and vectors, converts these objects into C-style Compressed-Column-Sparse matrices and dynamic arrays, loads the data into the QP workspace dataholder, and runs the optimizer.
The interface can be used in several ways:
Initialize the interface, and load the problem formulation at the optimization call.
QPInterface qp_interface;\nqp_interface.optimize(P, A, q, l, u);\n
WARM START OPTIMIZATION by modifying the problem formulation between optimization runs.
QPInterface qp_interface(true);\nqp_interface.optimize(P, A, q, l, u);\nqp_interface.optimize(P_new, A_new, q_new, l_new, u_new);\n
The optimization results are returned as a vector by the optimization function.
const auto solution = qp_interface.optimize();\ndouble x_0 = solution[0];\ndouble x_1 = solution[1];\n
The purpose of this Rviz plugin is
To display each content of RTC status.
To switch each module of RTC auto mode.
To change RTC cooperate commands by button.
/api/external/get/rtc_status
tier4_rtc_msgs::msg::CooperateStatusArray
The statuses of each Cooperate Commands"},{"location":"common/rtc_manager_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/external/set/rtc_commands
tier4_rtc_msgs::src::CooperateCommands
The Cooperate Commands for each planning /planning/enable_auto_mode/*
tier4_rtc_msgs::src::AutoMode
The Cooperate Commands mode for each planning module"},{"location":"common/rtc_manager_rviz_plugin/#howtouse","title":"HowToUse","text":"Start rviz and select panels/Add new panel.
tier4_state_rviz_plugin/RTCManagerPanel and press OK.
In this package, we present signal processing related methods for the Autoware applications. The following functionalities are available in the current version.
The low-pass filter currently supports only 1-D low-pass filtering.
"},{"location":"common/signal_processing/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/signal_processing/documentation/ButterworthFilter/","title":"ButterworthFilter","text":""},{"location":"common/signal_processing/documentation/ButterworthFilter/#butterworth-low-pass-filter-design-tool-class","title":"Butterworth Low-pass Filter Design Tool Class","text":"This Butterworth low-pass filter design tool can be used to design a Butterworth filter in continuous and discrete-time from the given specifications of the filter performance. The Butterworth filter is a class implementation. A default constructor creates the object without any argument.
The filter can be prepared in three ways. If the filter specifications are known, such as the pass-band, and stop-band frequencies (Wp and Ws) together with the pass-band and stop-band ripple magnitudes (Ap and As), one can call the filter's buttord method with these arguments to obtain the recommended filter order (N) and cut-off frequency (Wc_rad_sec [rad/s]).
Figure 1. Butterworth Low-pass filter specification from [1].
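For reference, the textbook Butterworth design relations that such a buttord computation is based on are (standard formulas from [1]; the implementation may differ in details): \[ N \geq \frac{\log_{10}\left[ (10^{A_s/10}-1)/(10^{A_p/10}-1) \right]}{2\log_{10}(W_s/W_p)}, \qquad W_c = \frac{W_p}{(10^{A_p/10}-1)^{1/(2N)}} \]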
An example call is demonstrated below;
ButterworthFilter bf;\n\ndouble Wp = 2.0; // pass-band frequency [rad/sec]\ndouble Ws = 3.0; // stop-band frequency [rad/sec]\ndouble Ap = 6.0; // pass-band ripple mag or loss [dB]\ndouble As = 20.0; // stop-band ripple attenuation [dB]\n\n// Computing filter coefficients from the specs\nbf.Buttord(Wp, Ws, Ap, As);\n\n// Get the computed order and cut-off frequency\nsOrderCutOff NWc = bf.getOrderCutOff();\n\ncout << \"The computed order is: \" << NWc.N << endl;\ncout << \"The computed cut-off frequency is: \" << NWc.Wc_rad_sec << endl;\n
The filter order and cut-off frequency can be obtained in a struct using the bf.getOrderCutOff() method. These specs can be printed on the screen by calling the PrintFilterSpecs() method. If the user would like to define the order and cut-off frequency manually, the setter methods for these variables can be called to set the filter order (N) and the desired cut-off frequency (Wc_rad_sec [rad/sec]) for the filter.
"},{"location":"common/signal_processing/documentation/ButterworthFilter/#obtaining-filter-transfer-functions","title":"Obtaining Filter Transfer Functions","text":"The discrete transfer function of the filter requires the roots and gain of the continuous-time transfer function. Therefore, it is a must to call the first computeContinuousTimeTF() to create the continuous-time transfer function of the filter using;
bf.computeContinuousTimeTF();\n
The computed continuous-time transfer function roots can be printed on the screen using the methods;
bf.PrintFilter_ContinuousTimeRoots();\nbf.PrintContinuousTimeTF();\n
The resulting screen output for a 5th order filter is demonstrated below.
Roots of Continuous Time Filter Transfer Function Denominator are :\n-0.585518 + j 1.80204\n-1.53291 + j 1.11372\n-1.89478 + j 2.32043e-16\n-1.53291 + j -1.11372\n-0.585518 + j -1.80204\n\n\nThe Continuous-Time Transfer Function of the Filter is ;\n\n 24.422\n-------------------------------------------------------------------------------\n1.000 *s[5] + 6.132 *s[4] + 18.798 *s[3] + 35.619 *s[2] + 41.711 *s[1] + 24.422\n
"},{"location":"common/signal_processing/documentation/ButterworthFilter/#discrete-time-transfer-function-difference-equations","title":"Discrete Time Transfer Function (Difference Equations)","text":"The digital filter equivalent of the continuous-time definitions is produced by using the bi-linear transformation. When creating the discrete-time function of the ButterworthFilter object, its Numerator (Bn) and Denominator (An ) coefficients are stored in a vector of filter order size N.
The discrete transfer function method is called using ;
bf.computeDiscreteTimeTF();\nbf.PrintDiscreteTimeTF();\n
The results are printed on the screen as; The Discrete-Time Transfer Function of the Filter is;
0.191 *z[-5] + 0.956 *z[-4] + 1.913 *z[-3] + 1.913 *z[-2] + 0.956 *z[-1] + 0.191\n--------------------------------------------------------------------------------\n1.000 *z[-5] + 1.885 *z[-4] + 1.888 *z[-3] + 1.014 *z[-2] + 0.298 *z[-1] + 0.037\n
and the associated difference coefficients An and Bn can be obtained within a struct;
sDifferenceAnBn AnBn = bf.getAnBn();\n
The difference coefficients appear in the filtering equation in the following form;
An * Y_filtered = Bn * Y_unfiltered\n
To filter a signal given in vector form, the difference equation above can be applied sample by sample, as sketched below;
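A minimal sketch of such a filtering step follows, assuming the coefficient vectors are ordered with the zeroth-order coefficients first; this Direct-Form I helper is illustrative and not part of the class API.
#include <vector>\n\n// y[n] = (Bn[0]*x[n] + ... + Bn[k]*x[n-k] - An[1]*y[n-1] - ... - An[k]*y[n-k]) / An[0]\nstd::vector<double> filterSignal(\n  const std::vector<double> & x, const std::vector<double> & Bn, const std::vector<double> & An)\n{\n  std::vector<double> y(x.size(), 0.0);\n  for (size_t n = 0; n < x.size(); ++n) {\n    double acc = 0.0;\n    for (size_t k = 0; k < Bn.size(); ++k) {\n      if (n >= k) acc += Bn[k] * x[n - k];\n    }\n    for (size_t k = 1; k < An.size(); ++k) {\n      if (n >= k) acc -= An[k] * y[n - k];\n    }\n    y[n] = acc / An[0];\n  }\n  return y;\n}\n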
"},{"location":"common/signal_processing/documentation/ButterworthFilter/#calling-filter-by-a-specified-cut-off-and-sampling-frequencies-in-hz","title":"Calling Filter by a specified cut-off and sampling frequencies [in Hz]","text":"The Butterworth filter can also be created by defining the desired order (N), a cut-off frequency (fc in [Hz]), and a sampling frequency (fs in [Hz]). In this method, the cut-off frequency is pre-warped with respect to the sampling frequency [1, 2] to match the continuous and digital filter frequencies.
The filter is prepared by the following calling options;
// 3rd METHOD defining a sampling frequency together with the cut-off fc, fs\n bf.setOrder(2);\n bf.setCutOffFrequency(10, 100);\n
At this step, we define a boolean variable that specifies whether to use the pre-warping option or not.
// Compute Continuous Time TF\nbool use_sampling_frequency = true;\nbf.computeContinuousTimeTF(use_sampling_frequency);\nbf.PrintFilter_ContinuousTimeRoots();\nbf.PrintContinuousTimeTF();\n\n// Compute Discrete Time TF\nbf.computeDiscreteTimeTF(use_sampling_frequency);\nbf.PrintDiscreteTimeTF();\n
References:
Manolakis, Dimitris G., and Vinay K. Ingle. Applied digital signal processing: theory and practice. Cambridge University Press, 2011.
https://en.wikibooks.org/wiki/Digital_Signal_Processing/Bilinear_Transform
This package contains a library of common functions related to TensorRT. This package may include functions for handling TensorRT engines and the calibration algorithm used for quantization.
"},{"location":"common/tier4_adapi_rviz_plugin/","title":"tier4_adapi_rviz_plugin","text":""},{"location":"common/tier4_adapi_rviz_plugin/#tier4_adapi_rviz_plugin","title":"tier4_adapi_rviz_plugin","text":""},{"location":"common/tier4_adapi_rviz_plugin/#routepanel","title":"RoutePanel","text":"To use the panel, set the topic name from 2D Goal Pose Tool to /rviz/routing/pose
. By default, when a tool publishes a pose, the panel immediately sets a route with that pose as the goal. The allow_goal_modification option can be enabled or disabled with the check box.
Push the mode button in the waypoint section to enter waypoint mode. In this mode, each pose is added to the waypoints. Press the apply button to set the route using the saved waypoints (the last one is the goal). Reset the saved waypoints with the reset button.
"},{"location":"common/tier4_api_utils/","title":"tier4_api_utils","text":""},{"location":"common/tier4_api_utils/#tier4_api_utils","title":"tier4_api_utils","text":"This is an old implementation of a class that logs when calling a service. Please use component_interface_utils instead.
"},{"location":"common/tier4_automatic_goal_rviz_plugin/","title":"tier4_automatic_goal_rviz_plugin","text":""},{"location":"common/tier4_automatic_goal_rviz_plugin/#tier4_automatic_goal_rviz_plugin","title":"tier4_automatic_goal_rviz_plugin","text":""},{"location":"common/tier4_automatic_goal_rviz_plugin/#purpose","title":"Purpose","text":"Defining a GoalsList
by adding goals using RvizTool
(Pose on the map).
Automatic execution of the created GoalsList
from the selected goal - it can be stopped and restarted.
Looping the current GoalsList
.
Saving achieved goals to a file.
Plan the route to one (single) selected goal and start that route - it can be stopped and restarted.
Remove any goal from the list or clear the current route.
Save the current GoalsList
to a file and load the list from the file.
The application enables/disables access to options depending on the current state.
The saved GoalsList
can be executed without using a plugin - using a node automatic_goal_sender
.
/api/operation_mode/state
autoware_adapi_v1_msgs::msg::OperationModeState
The topic represents the state of operation mode /api/routing/state
autoware_adapi_v1_msgs::msg::RouteState
The topic represents the state of route /rviz2/automatic_goal/goal
geometry_msgs::msg::PoseStamped
The topic for adding goals to GoalsList"},{"location":"common/tier4_automatic_goal_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/operation_mode/change_to_autonomous
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to change operation mode to autonomous /api/operation_mode/change_to_stop
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to change operation mode to stop /api/routing/set_route_points
autoware_adapi_v1_msgs::srv::SetRoutePoints
The service to set route /api/routing/clear_route
autoware_adapi_v1_msgs::srv::ClearRoute
The service to clear route state /rviz2/automatic_goal/markers
visualization_msgs::msg::MarkerArray
The topic to visualize goals as rviz markers"},{"location":"common/tier4_automatic_goal_rviz_plugin/#howtouse","title":"HowToUse","text":"Start rviz and select panels/Add new panel.
Select tier4_automatic_goal_rviz_plugin/AutowareAutomaticGoalPanel
and press OK.
Select Add a new tool.
Select tier4_automatic_goal_rviz_plugin/AutowareAutomaticGoalTool
and press OK.
Add goals visualization as markers to Displays
.
Append goals to the GoalsList
to be achieved using 2D Append Goal
- in such a way that routes can be planned.
Start sequential planning and goal achievement by clicking Send goals automatically.
You can save GoalsList
by clicking Save to file
.
After saving, you can run the GoalsList
without using a plugin also:
ros2 launch tier4_automatic_goal_rviz_plugin automatic_goal_sender.launch.xml goals_list_file_path:=\"/tmp/goals_list.yaml\" goals_achieved_dir_path:=\"/tmp/\"
goals_list_file_path
- is the path to the saved GoalsList
file to be loadedgoals_achieved_dir_path
- is the path to the directory where the file goals_achieved.log
will be created and the achieved goals will be written to itIf the application (Engagement) goes into ERROR
mode (usually returns to EDITING
later), it means that one of the services returned a calling error (code!=0
). In this situation, check the terminal output for more information.
This package contains many common functions used by other packages, so please refer to them as needed.
"},{"location":"common/tier4_autoware_utils/#for-developers","title":"For developers","text":"tier4_autoware_utils.hpp
header file was removed because source files that directly or indirectly included it took a long time to preprocess.
Add the tier4_camera_view_rviz_plugin/ThirdPersonViewTool
tool to RViz. Push the button and the camera will focus on the vehicle, setting the target frame to base_link. Shortcut key: 'o'.
Add the tier4_camera_view_rviz_plugin/BirdEyeViewTool
tool to RViz. Push the button and the camera will switch to the BEV view; the target frame is consistent with the latest frame. Shortcut key: 'r'.
This package mimics external control for simulation.
"},{"location":"common/tier4_control_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_control_rviz_plugin/#input","title":"Input","text":"Name Type Description/control/current_gate_mode
tier4_control_msgs::msg::GateMode
Current GATE mode /vehicle/status/velocity_status
autoware_auto_vehicle_msgs::msg::VelocityReport
Current velocity status /api/autoware/get/engage
tier4_external_api_msgs::srv::Engage
Getting Engage /vehicle/status/gear_status
autoware_auto_vehicle_msgs::msg::GearReport
The state of GEAR"},{"location":"common/tier4_control_rviz_plugin/#output","title":"Output","text":"Name Type Description /control/gate_mode_cmd
tier4_control_msgs::msg::GateMode
GATE mode /external/selected/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
AckermannControlCommand /external/selected/gear_cmd
autoware_auto_vehicle_msgs::msg::GearCommand
GEAR"},{"location":"common/tier4_control_rviz_plugin/#usage","title":"Usage","text":"Start rviz and select Panels.
Select tier4_control_rviz_plugin/ManualController and press OK.
Enter a velocity in \"Set Cruise Velocity\" and press the button to confirm. You will notice that GEAR shows D (DRIVE).
Press \"Enable Manual Control\" and you can notice that \"GATE\" and \"Engage\" turn \"Ready\" and the vehicle starts!
This plugin displays the ROS Time and Wall Time in rviz.
"},{"location":"common/tier4_datetime_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/tier4_datetime_rviz_plugin/#usage","title":"Usage","text":"This package is including jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are BSD license.
"},{"location":"common/tier4_debug_rviz_plugin/#plugins","title":"Plugins","text":""},{"location":"common/tier4_debug_rviz_plugin/#float32multiarraystampedpiechart","title":"Float32MultiArrayStampedPieChart","text":"Pie chart from tier4_debug_msgs::msg::Float32MultiArrayStamped
.
This package provides useful features for debugging Autoware.
"},{"location":"common/tier4_debug_tools/#usage","title":"Usage","text":""},{"location":"common/tier4_debug_tools/#tf2pose","title":"tf2pose","text":"This tool converts any tf
to pose
topic. With this tool, for example, you can plot x
values of tf
in rqt_multiplot
.
ros2 run tier4_debug_tools tf2pose {tf_from} {tf_to} {hz}\n
Example:
$ ros2 run tier4_debug_tools tf2pose base_link ndt_base_link 100\n\n$ ros2 topic echo /tf2pose/pose -n1\nheader:\n seq: 13\nstamp:\n secs: 1605168366\nnsecs: 549174070\nframe_id: \"base_link\"\npose:\n position:\n x: 0.0387684271191\n y: -0.00320360406477\n z: 0.000276674520819\n orientation:\n x: 0.000335221893885\n y: 0.000122020672186\n z: -0.00539673212896\n w: 0.999985368502\n---\n
"},{"location":"common/tier4_debug_tools/#pose2tf","title":"pose2tf","text":"This tool converts any pose
topic to tf
.
ros2 run tier4_debug_tools pose2tf {pose_topic_name} {tf_name}\n
Example:
$ ros2 run tier4_debug_tools pose2tf /localization/pose_estimator/pose ndt_pose\n\n$ ros2 run tf tf_echo ndt_pose ndt_base_link 100\nAt time 1605168365.449\n- Translation: [0.000, 0.000, 0.000]\n- Rotation: in Quaternion [0.000, 0.000, 0.000, 1.000]\nin RPY (radian) [0.000, -0.000, 0.000]\nin RPY (degree) [0.000, -0.000, 0.000]\n
"},{"location":"common/tier4_debug_tools/#stop_reason2pose","title":"stop_reason2pose","text":"This tool extracts pose
from stop_reasons
. Topics without numbers such as /stop_reason2pose/pose/detection_area
are the nearest stop_reasons, and topics with numbers are individual stop_reasons that are roughly matched with previous ones.
ros2 run tier4_debug_tools stop_reason2pose {stop_reason_topic_name}\n
Example:
$ ros2 run tier4_debug_tools stop_reason2pose /planning/scenario_planning/status/stop_reasons\n\n$ ros2 topic list | ag stop_reason2pose\n/stop_reason2pose/pose/detection_area\n/stop_reason2pose/pose/detection_area_1\n/stop_reason2pose/pose/obstacle_stop\n/stop_reason2pose/pose/obstacle_stop_1\n\n$ ros2 topic echo /stop_reason2pose/pose/detection_area -n1\nheader:\n seq: 1\nstamp:\n secs: 1605168355\nnsecs: 821713\nframe_id: \"map\"\npose:\n position:\n x: 60608.8433457\n y: 43886.2410876\n z: 44.9078212441\n orientation:\n x: 0.0\n y: 0.0\n z: -0.190261378408\n w: 0.981733470901\n---\n
"},{"location":"common/tier4_debug_tools/#stop_reason2tf","title":"stop_reason2tf","text":"This is an all-in-one script that uses tf2pose
, pose2tf
, and stop_reason2pose
. With this tool, you can view the relative position from base_link to the nearest stop_reason.
ros2 run tier4_debug_tools stop_reason2tf {stop_reason_name}\n
Example:
$ ros2 run tier4_debug_tools stop_reason2tf obstacle_stop\nAt time 1605168359.501\n- Translation: [0.291, -0.095, 0.266]\n- Rotation: in Quaternion [0.007, 0.011, -0.005, 1.000]\nin RPY (radian) [0.014, 0.023, -0.010]\nin RPY (degree) [0.825, 1.305, -0.573]\n
"},{"location":"common/tier4_debug_tools/#lateral_error_publisher","title":"lateral_error_publisher","text":"This node calculate the control error and localization error in the trajectory normal direction as shown in the figure below.
Set the reference trajectory, vehicle pose and ground truth pose in the launch file.
ros2 launch tier4_debug_tools lateral_error_publisher.launch.xml\n
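As a sketch of the underlying computation (an assumption for illustration, not the node's exact code), the error in the trajectory normal direction is the pose error projected onto the unit normal of the closest trajectory point:
#include <cmath>\n\n// Signed lateral error of position (x, y) w.r.t. the closest trajectory\n// point (xr, yr) whose yaw is yaw_r; left of the trajectory is positive.\ndouble calcLateralError(double x, double y, double xr, double yr, double yaw_r)\n{\n  const double dx = x - xr;\n  const double dy = y - yr;\n  // Cross product of the unit tangent [cos(yaw_r), sin(yaw_r)] and the error vector\n  return std::cos(yaw_r) * dy - std::sin(yaw_r) * dx;\n}\n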
"},{"location":"common/tier4_localization_rviz_plugin/","title":"tier4_localization_rviz_plugin","text":""},{"location":"common/tier4_localization_rviz_plugin/#tier4_localization_rviz_plugin","title":"tier4_localization_rviz_plugin","text":""},{"location":"common/tier4_localization_rviz_plugin/#purpose","title":"Purpose","text":"This plugin can display the history of the localization obtained by ekf_localizer or ndt_scan_matching.
"},{"location":"common/tier4_localization_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_localization_rviz_plugin/#input","title":"Input","text":"Name Type Descriptioninput/pose
geometry_msgs::msg::PoseStamped
In input/pose, put the result of localization calculated by ekf_localizer or ndt_scan_matching"},{"location":"common/tier4_localization_rviz_plugin/#parameters","title":"Parameters","text":""},{"location":"common/tier4_localization_rviz_plugin/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description property_buffer_size_
int 100 Buffer size of topic property_line_view_
bool true Use Line property or not property_line_width_
float 0.1 Width of Line property [m] property_line_alpha_
float 1.0 Alpha of Line property property_line_color_
QColor Qt::white Color of Line property"},{"location":"common/tier4_localization_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/tier4_localization_rviz_plugin/#usage","title":"Usage","text":"This package provides an rviz_plugin that can easily change the logger level of each node
This plugin dispatches services to the \"logger name\" associated with \"nodes\" specified in YAML, adjusting the logger level.
As of November 2023, in ROS 2 Humble, users are required to initiate a service server in the node to use this feature. (This might be integrated into ROS standards in the future.) For easy service server generation, you can use the LoggerLevelConfigure utility.
"},{"location":"common/tier4_perception_rviz_plugin/","title":"tier4_perception_rviz_plugin","text":""},{"location":"common/tier4_perception_rviz_plugin/#tier4_perception_rviz_plugin","title":"tier4_perception_rviz_plugin","text":""},{"location":"common/tier4_perception_rviz_plugin/#purpose","title":"Purpose","text":"This plugin is used to generate dummy pedestrians, cars, and obstacles in planning simulator.
"},{"location":"common/tier4_perception_rviz_plugin/#overview","title":"Overview","text":"The CarInitialPoseTool sends a topic for generating a dummy car. The PedestrianInitialPoseTool sends a topic for generating a dummy pedestrian. The UnknownInitialPoseTool sends a topic for generating a dummy obstacle. The DeleteAllObjectsTool deletes the dummy cars, pedestrians, and obstacles displayed by the above three tools.
"},{"location":"common/tier4_perception_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_perception_rviz_plugin/#output","title":"Output","text":"Name Type Description/simulation/dummy_perception_publisher/object_info
dummy_perception_publisher::msg::Object
The topic on which to publish dummy object info"},{"location":"common/tier4_perception_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"common/tier4_perception_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"common/tier4_perception_rviz_plugin/#carpose","title":"CarPose","text":"Name Type Default Value Description topic_property_
string /simulation/dummy_perception_publisher/object_info
The topic on which to publish dummy object info std_dev_x_
float 0.03 X standard deviation for initial pose [m] std_dev_y_
float 0.03 Y standard deviation for initial pose [m] std_dev_z_
float 0.03 Z standard deviation for initial pose [m] std_dev_theta_
float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] length_
float 4.0 Length of the dummy car [m] width_
float 1.8 Width of the dummy car [m] height_
float 2.0 Height of the dummy car [m] position_z_
float 0.0 Z position for initial pose [m] velocity_
float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#buspose","title":"BusPose","text":"Name Type Default Value Description topic_property_
string /simulation/dummy_perception_publisher/object_info
The topic on which to publish dummy object info std_dev_x_
float 0.03 X standard deviation for initial pose [m] std_dev_y_
float 0.03 Y standard deviation for initial pose [m] std_dev_z_
float 0.03 Z standard deviation for initial pose [m] std_dev_theta_
float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] length_
float 10.5 Length of the dummy bus [m] width_
float 2.5 Width of the dummy bus [m] height_
float 3.5 Height of the dummy bus [m] position_z_
float 0.0 Z position for initial pose [m] velocity_
float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#pedestrianpose","title":"PedestrianPose","text":"Name Type Default Value Description topic_property_
string /simulation/dummy_perception_publisher/object_info
The topic on which to publish dummy object info std_dev_x_
float 0.03 X standard deviation for initial pose [m] std_dev_y_
float 0.03 Y standard deviation for initial pose [m] std_dev_z_
float 0.03 Z standard deviation for initial pose [m] std_dev_theta_
float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] position_z_
float 0.0 Z position for initial pose [m] velocity_
float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#unknownpose","title":"UnknownPose","text":"Name Type Default Value Description topic_property_
string /simulation/dummy_perception_publisher/object_info
The topic on which to publish dummy object info std_dev_x_
float 0.03 X standard deviation for initial pose [m] std_dev_y_
float 0.03 Y standard deviation for initial pose [m] std_dev_z_
float 0.03 Z standard deviation for initial pose [m] std_dev_theta_
float 5.0 * M_PI / 180.0 Theta standard deviation for initial pose [rad] position_z_
float 0.0 Z position for initial pose [m] velocity_
float 0.0 Velocity [m/s]"},{"location":"common/tier4_perception_rviz_plugin/#deleteallobjects","title":"DeleteAllObjects","text":"Name Type Default Value Description topic_property_
string /simulation/dummy_perception_publisher/object_info
The topic on which to publish dummy object info"},{"location":"common/tier4_perception_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Using a planning simulator
"},{"location":"common/tier4_perception_rviz_plugin/#usage","title":"Usage","text":"You can interactively manipulate the object.
This package is including jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are BSD license.
"},{"location":"common/tier4_planning_rviz_plugin/#purpose","title":"Purpose","text":"This plugin displays the path, trajectory, and maximum speed.
"},{"location":"common/tier4_planning_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_planning_rviz_plugin/#input","title":"Input","text":"Name Type Description/input/path
autoware_auto_planning_msgs::msg::Path
The topic on which to subscribe path /input/trajectory
autoware_auto_planning_msgs::msg::Trajectory
The topic on which to subscribe trajectory /planning/scenario_planning/current_max_velocity
tier4_planning_msgs/msg/VelocityLimit
The topic on which to publish max velocity"},{"location":"common/tier4_planning_rviz_plugin/#output","title":"Output","text":"Name Type Description /planning/mission_planning/checkpoint
geometry_msgs/msg/PoseStamped
The topic on which to publish checkpoint"},{"location":"common/tier4_planning_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"common/tier4_planning_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"common/tier4_planning_rviz_plugin/#missioncheckpoint","title":"MissionCheckpoint","text":"Name Type Default Value Description pose_topic_property_
string mission_checkpoint
The topic on which to publish checkpoint std_dev_x_
float 0.5 X standard deviation for checkpoint pose [m] std_dev_y_
float 0.5 Y standard deviation for checkpoint pose [m] std_dev_theta_
float M_PI / 12.0 Theta standard deviation for checkpoint pose [rad] position_z_
float 0.0 Z position for checkpoint pose [m]"},{"location":"common/tier4_planning_rviz_plugin/#path","title":"Path","text":"Name Type Default Value Description property_path_view_
bool true Use Path property or not property_path_width_view_
bool false Use Constant Width or not property_path_width_
float 2.0 Width of Path property [m] property_path_alpha_
float 1.0 Alpha of Path property property_path_color_view_
bool false Use Constant Color or not property_path_color_
QColor Qt::black Color of Path property property_velocity_view_
bool true Use Velocity property or not property_velocity_alpha_
float 1.0 Alpha of Velocity property property_velocity_scale_
float 0.3 Scale of Velocity property property_velocity_color_view_
bool false Use Constant Color or not property_velocity_color_
QColor Qt::black Color of Velocity property property_vel_max_
float 3.0 Max velocity [m/s]"},{"location":"common/tier4_planning_rviz_plugin/#drivablearea","title":"DrivableArea","text":"Name Type Default Value Description color_scheme_property_
int 0 Color scheme of DrivableArea property alpha_property_
float 0.2 Alpha of DrivableArea property draw_under_property_
bool false Draw as background or not"},{"location":"common/tier4_planning_rviz_plugin/#pathfootprint","title":"PathFootprint","text":"Name Type Default Value Description property_path_footprint_view_
bool true Use Path Footprint property or not property_path_footprint_alpha_
float 1.0 Alpha of Path Footprint property property_path_footprint_color_
QColor Qt::black Color of Path Footprint property property_vehicle_length_
float 4.77 Vehicle length [m] property_vehicle_width_
float 1.83 Vehicle width [m] property_rear_overhang_
float 1.03 Rear overhang [m]"},{"location":"common/tier4_planning_rviz_plugin/#trajectory","title":"Trajectory","text":"Name Type Default Value Description property_path_view_
bool true Use Path property or not property_path_width_
float 2.0 Width of Path property [m] property_path_alpha_
float 1.0 Alpha of Path property property_path_color_view_
bool false Use Constant Color or not property_path_color_
QColor Qt::black Color of Path property property_velocity_view_
bool true Use Velocity property or not property_velocity_alpha_
float 1.0 Alpha of Velocity property property_velocity_scale_
float 0.3 Scale of Velocity property property_velocity_color_view_
bool false Use Constant Color or not property_velocity_color_
QColor Qt::black Color of Velocity property property_velocity_text_view_
bool false Show velocity text or not property_velocity_text_scale_
float 0.3 Scale of Velocity property property_vel_max_
float 3.0 Max velocity [m/s]"},{"location":"common/tier4_planning_rviz_plugin/#trajectoryfootprint","title":"TrajectoryFootprint","text":"Name Type Default Value Description property_trajectory_footprint_view_
bool true Use Trajectory Footprint property or not property_trajectory_footprint_alpha_
float 1.0 Alpha of Trajectory Footprint property property_trajectory_footprint_color_
QColor QColor(230, 230, 50) Color of Trajectory Footprint property property_vehicle_length_
float 4.77 Vehicle length [m] property_vehicle_width_
float 1.83 Vehicle width [m] property_rear_overhang_
float 1.03 Rear overhang [m] property_trajectory_point_view_
bool false Use Trajectory Point property or not property_trajectory_point_alpha_
float 1.0 Alpha of Trajectory Point property property_trajectory_point_color_
QColor QColor(0, 60, 255) Color of Trajectory Point property property_trajectory_point_radius_
float 0.1 Radius of Trajectory Point property"},{"location":"common/tier4_planning_rviz_plugin/#maxvelocity","title":"MaxVelocity","text":"Name Type Default Value Description property_topic_name_
string /planning/scenario_planning/current_max_velocity
The topic on which to subscribe max velocity property_text_color_
QColor QColor(255, 255, 255) Text color property_left_
int 128 Left of the plotter window [px] property_top_
int 128 Top of the plotter window [px] property_length_
int 96 Length of the plotter window [px] property_value_scale_
float 1.0 / 4.0 Value scale"},{"location":"common/tier4_planning_rviz_plugin/#usage","title":"Usage","text":"This plugin captures the screen of rviz.
"},{"location":"common/tier4_screen_capture_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"This is only for debug or analyze. The capture screen
button is still beta version which can slow frame rate. set lower frame rate according to PC spec.
This plugin allows publishing and controlling the simulated ROS time.
"},{"location":"common/tier4_simulated_clock_rviz_plugin/#output","title":"Output","text":"Name Type Description/clock
rosgraph_msgs::msg::Clock
the current simulated time"},{"location":"common/tier4_simulated_clock_rviz_plugin/#howtouse","title":"HowToUse","text":"Use the added panel to control how the simulated clock is published.
This plugin displays the current status of autoware. This plugin also can engage from the panel.
"},{"location":"common/tier4_state_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_state_rviz_plugin/#input","title":"Input","text":"Name Type Description/api/operation_mode/state
autoware_adapi_v1_msgs::msg::OperationModeState
The topic represents the state of operation mode /api/routing/state
autoware_adapi_v1_msgs::msg::RouteState
The topic represents the state of route /api/localization/initialization_state
autoware_adapi_v1_msgs::msg::LocalizationInitializationState
The topic represents the state of localization initialization /api/motion/state
autoware_adapi_v1_msgs::msg::MotionState
The topic represents the state of motion /api/autoware/get/emergency
tier4_external_api_msgs::msg::Emergency
The topic represents the state of external emergency /vehicle/status/gear_status
autoware_auto_vehicle_msgs::msg::GearReport
The topic represents the state of gear"},{"location":"common/tier4_state_rviz_plugin/#output","title":"Output","text":"Name Type Description /api/operation_mode/change_to_autonomous
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to change operation mode to autonomous /api/operation_mode/change_to_stop
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to change operation mode to stop /api/operation_mode/change_to_local
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to change operation mode to local /api/operation_mode/change_to_remote
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to change operation mode to remote /api/operation_mode/enable_autoware_control
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to enable vehicle control by Autoware /api/operation_mode/disable_autoware_control
autoware_adapi_v1_msgs::srv::ChangeOperationMode
The service to disable vehicle control by Autoware /api/routing/clear_route
autoware_adapi_v1_msgs::srv::ClearRoute
The service to clear route state /api/motion/accept_start
autoware_adapi_v1_msgs::srv::AcceptStart
The service to accept the vehicle to start /api/autoware/set/emergency
tier4_external_api_msgs::srv::SetEmergency
The service to set external emergency /planning/scenario_planning/max_velocity_default
tier4_planning_msgs::msg::VelocityLimit
The topic to set maximum speed of the vehicle"},{"location":"common/tier4_state_rviz_plugin/#howtouse","title":"HowToUse","text":"Start rviz and select panels/Add new panel.
Select tier4_state_rviz_plugin/AutowareStatePanel and press OK.
If the auto button is activated, can engage by clicking it.
This plugin display the Hazard information from Autoware; and output notices when emergencies are from initial localization and route setting.
"},{"location":"common/tier4_system_rviz_plugin/#input","title":"Input","text":"Name Type Description/system/emergency/hazard_status
autoware_auto_system_msgs::msg::HazardStatusStamped
The topic represents the emergency information from Autoware"},{"location":"common/tier4_target_object_type_rviz_plugin/","title":"tier4_target_object_type_rviz_plugin","text":""},{"location":"common/tier4_target_object_type_rviz_plugin/#tier4_target_object_type_rviz_plugin","title":"tier4_target_object_type_rviz_plugin","text":"This plugin allows you to check which types of the dynamic object is being used by each planner.
"},{"location":"common/tier4_target_object_type_rviz_plugin/#limitations","title":"Limitations","text":"Currently, which parameters of which module to check are hardcoded. In the future, this will be parameterized using YAML.
"},{"location":"common/tier4_traffic_light_rviz_plugin/","title":"tier4_traffic_light_rviz_plugin","text":""},{"location":"common/tier4_traffic_light_rviz_plugin/#tier4_traffic_light_rviz_plugin","title":"tier4_traffic_light_rviz_plugin","text":""},{"location":"common/tier4_traffic_light_rviz_plugin/#purpose","title":"Purpose","text":"This plugin panel publishes dummy traffic light signals.
"},{"location":"common/tier4_traffic_light_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_traffic_light_rviz_plugin/#output","title":"Output","text":"Name Type Description/perception/traffic_light_recognition/traffic_signals
autoware_perception_msgs::msg::TrafficSignalArray
Publish traffic light signals"},{"location":"common/tier4_traffic_light_rviz_plugin/#howtouse","title":"HowToUse","text":"Traffic Light ID
& Traffic Light Status
and press SET
button.PUBLISH
button is pushed.This package is including jsk code. Note that jsk_overlay_utils.cpp and jsk_overlay_utils.hpp are BSD license.
"},{"location":"common/tier4_vehicle_rviz_plugin/#purpose","title":"Purpose","text":"This plugin provides a visual and easy-to-understand display of vehicle speed, turn signal, steering status and acceleration.
"},{"location":"common/tier4_vehicle_rviz_plugin/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/tier4_vehicle_rviz_plugin/#input","title":"Input","text":"Name Type Description/vehicle/status/velocity_status
autoware_auto_vehicle_msgs::msg::VelocityReport
The topic is vehicle twist /control/turn_signal_cmd
autoware_auto_vehicle_msgs::msg::TurnIndicatorsReport
The topic is status of turn signal /vehicle/status/steering_status
autoware_auto_vehicle_msgs::msg::SteeringReport
The topic is status of steering /localization/acceleration
geometry_msgs::msg::AccelWithCovarianceStamped
The topic is the acceleration"},{"location":"common/tier4_vehicle_rviz_plugin/#parameter","title":"Parameter","text":""},{"location":"common/tier4_vehicle_rviz_plugin/#core-parameters","title":"Core Parameters","text":""},{"location":"common/tier4_vehicle_rviz_plugin/#consolemeter","title":"ConsoleMeter","text":"Name Type Default Value Description property_text_color_
QColor QColor(25, 255, 240) Text color property_left_
int 128 Left of the plotter window [px] property_top_
int 128 Top of the plotter window [px] property_length_
int 256 Height of the plotter window [px] property_value_height_offset_
int 0 Height offset of the plotter window [px] property_value_scale_
float 1.0 / 6.667 Value scale"},{"location":"common/tier4_vehicle_rviz_plugin/#steeringangle","title":"SteeringAngle","text":"Name Type Default Value Description property_text_color_
QColor QColor(25, 255, 240) Text color property_left_
int 128 Left of the plotter window [px] property_top_
int 128 Top of the plotter window [px] property_length_
int 256 Height of the plotter window [px] property_value_height_offset_
int 0 Height offset of the plotter window [px] property_value_scale_
float 1.0 / 6.667 Value scale property_handle_angle_scale_
float 3.0 Scale from steering angle to handle angle"},{"location":"common/tier4_vehicle_rviz_plugin/#turnsignal","title":"TurnSignal","text":"Name Type Default Value Description property_left_
int 128 Left of the plotter window [px] property_top_
int 128 Top of the plotter window [px] property_width_
int 256 Width of the plotter window [px] property_height_
int 256 Height of the plotter window [px]"},{"location":"common/tier4_vehicle_rviz_plugin/#velocityhistory","title":"VelocityHistory","text":"Name Type Default Value Description property_velocity_timeout_
float 10.0 Timeout of velocity [s] property_velocity_alpha_
float 1.0 Alpha of velocity property_velocity_scale_
float 0.3 Scale of velocity property_velocity_color_view_
bool false Use Constant Color or not property_velocity_color_
QColor Qt::black Color of velocity history property_vel_max_
float 3.0 Color Border Vel Max [m/s]"},{"location":"common/tier4_vehicle_rviz_plugin/#accelerationmeter","title":"AccelerationMeter","text":"Name Type Default Value Description property_normal_text_color_
QColor QColor(25, 255, 240) Normal text color property_emergency_text_color_
QColor QColor(255, 80, 80) Emergency acceleration color property_left_
int 896 Left of the plotter window [px] property_top_
int 128 Top of the plotter window [px] property_length_
int 256 Height of the plotter window [px] property_value_height_offset_
int 0 Height offset of the plotter window [px] property_value_scale_
float 1 / 6.667 Value text scale property_emergency_threshold_max_
float 1.0 Max acceleration threshold for emergency [m/s^2] property_emergency_threshold_min_
float -2.5 Min acceleration threshold for emergency [m/s^2]"},{"location":"common/tier4_vehicle_rviz_plugin/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/tier4_vehicle_rviz_plugin/#usage","title":"Usage","text":"This node publishes a marker array for visualizing traffic signal recognition results on Rviz.
"},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#input","title":"Input","text":"Name Type Description/map/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
Vector map for getting traffic signal information /perception/traffic_light_recognition/traffic_signals
autoware_auto_perception_msgs::msg::TrafficSignalArray
The result of traffic signal recognition"},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#output","title":"Output","text":"Name Type Description /perception/traffic_light_recognition/traffic_signals_marker
visualization_msgs::msg::MarkerArray
Publish a marker array for visualization of traffic signal recognition results"},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#parameters","title":"Parameters","text":"None.
"},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#node-parameters","title":"Node Parameters","text":"None.
"},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#core-parameters","title":"Core Parameters","text":"None.
"},{"location":"common/traffic_light_recognition_marker_publisher/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"common/traffic_light_utils/","title":"traffic_light_utils","text":""},{"location":"common/traffic_light_utils/#traffic_light_utils","title":"traffic_light_utils","text":""},{"location":"common/traffic_light_utils/#purpose","title":"Purpose","text":"This package contains a library of common functions that are useful across the traffic light recognition module. This package may include functions for handling ROI types, converting between different data types and message types, as well as common functions related to them.
"},{"location":"common/tvm_utility/","title":"TVM Utility","text":""},{"location":"common/tvm_utility/#tvm-utility","title":"TVM Utility","text":"This is the design document for the tvm_utility
package. For instructions on how to build the tests for YOLOv2 Tiny, see the YOLOv2 Tiny Example Pipeline. For information about where to store test artifacts see the TVM Utility Artifacts.
A set of C++ utilities to help build a TVM-based machine learning inference pipeline. The library contains a pipeline class which helps build the pipeline and a number of utility functions that are common in machine learning.
"},{"location":"common/tvm_utility/#design","title":"Design","text":"The Pipeline Class is a standardized way to write an inference pipeline. The pipeline class contains 3 different stages: the pre-processor, the inference engine and the post-processor. The TVM implementation of an inference engine stage is provided.
"},{"location":"common/tvm_utility/#api","title":"API","text":"The pre-processor and post-processor need to be implemented by the user before instantiating the pipeline. You can see example usage in the example pipeline at test/yolo_v2_tiny
.
Each stage in the pipeline has a schedule
function which takes input data as a parameter and return the output data. Once the pipeline object is created, pipeline.schedule
is called to run the pipeline.
int main() {\n create_subscription<sensor_msgs::msg::PointCloud2>(\"points_raw\",\n rclcpp::QoS{1}, [this](const sensor_msgs::msg::PointCloud2::SharedPtr msg)\n {pipeline.schedule(msg);});\n}\n
"},{"location":"common/tvm_utility/#version-checking","title":"Version checking","text":"The InferenceEngineTVM::version_check
function can be used to check the version of the neural network in use against the range of earliest to latest supported versions.
The InferenceEngineTVM
class holds the latest supported version, which needs to be updated when the targeted version changes; after having tested the effect of the version change on the packages dependent on this one.
The earliest supported version depends on each package making use of the inference, and so should be defined (and maintained) in those packages.
"},{"location":"common/tvm_utility/#models","title":"Models","text":"Dependent packages are expected to use the get_neural_network
cmake function from this package in order to build proper external dependency.
std::runtime_error
should be thrown whenever an error is encountered. It should be populated with an appropriate text error description.
The neural networks are compiled as part of the Model Zoo CI pipeline and saved to an S3 bucket.
The get_neural_network
function creates an abstraction for the artifact management. Users should check if model configuration header file is under \"data/user/${MODEL_NAME}/\". Otherwise, nothing happens and compilation of the package will be skipped.
The structure inside of the source directory of the package making use of the function is as follow:
.\n\u251c\u2500\u2500 data\n\u2502 \u2514\u2500\u2500 models\n\u2502 \u251c\u2500\u2500 ${MODEL 1}\n\u2502 \u2502 \u2514\u2500\u2500 inference_engine_tvm_config.hpp\n\u2502 \u251c\u2500\u2500 ...\n\u2502 \u2514\u2500\u2500 ${MODEL ...}\n\u2502 \u2514\u2500\u2500 ...\n
The inference_engine_tvm_config.hpp
file needed for compilation by dependent packages should be available under \"data/models/${MODEL_NAME}/inference_engine_tvm_config.hpp\". Dependent packages can use the cmake add_dependencies
function with the name provided in the DEPENDENCY
output parameter of get_neural_network
to ensure this file is created before it gets used.
The other deploy_*
files are installed to \"models/${MODEL_NAME}/\" under the share
directory of the package.
The other model files should be stored in autoware_data folder under package folder with the structure:
$HOME/autoware_data\n| \u2514\u2500\u2500${package}\n| \u2514\u2500\u2500models\n| \u251c\u2500\u2500 ${MODEL 1}\n| | \u251c\u2500\u2500 deploy_graph.json\n| | \u251c\u2500\u2500 deploy_lib.so\n| | \u2514\u2500\u2500 deploy_param.params\n| \u251c\u2500\u2500 ...\n| \u2514\u2500\u2500 ${MODEL ...}\n| \u2514\u2500\u2500 ...\n
"},{"location":"common/tvm_utility/#inputs-outputs","title":"Inputs / Outputs","text":"Outputs:
get_neural_network
cmake function; create proper external dependency for a package with use of the model provided by the user.In/Out:
DEPENDENCY
argument of get_neural_network
can be checked for the outcome of the function. It is an empty string when the neural network wasn't provided by the user.Both the input and output are controlled by the same actor, so the following security concerns are out-of-scope:
Leaking data to another actor would require a flaw in TVM or the host operating system that allows arbitrary memory to be read, a significant security flaw in itself. This is also true for an external actor operating the pipeline early: only the object that initiated the pipeline can run the methods to receive its output.
A Denial-of-Service attack could make the target hardware unusable for other pipelines but would require being able to run code on the CPU, which would already allow a more severe Denial-of-Service attack.
No elevation of privilege is required for this package.
"},{"location":"common/tvm_utility/#network-provider","title":"Network provider","text":"The pre-compiled networks are downloaded from an S3 bucket and are under threat of spoofing, tampering and denial of service. Spoofing is mitigated by using an https connection. Mitigations for tampering and denial of service are left to AWS.
The user-provided networks are installed as they are on the host system. The user is in charge of securing the files they provide with regard to information disclosure.
"},{"location":"common/tvm_utility/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"Future packages will use tvm_utility as part of the perception stack to run machine learning workloads.
"},{"location":"common/tvm_utility/#related-issues","title":"Related issues","text":"https://github.com/autowarefoundation/autoware/discussions/2557
"},{"location":"common/tvm_utility/tvm-utility-yolo-v2-tiny-tests/","title":"YOLOv2 Tiny Example Pipeline","text":""},{"location":"common/tvm_utility/tvm-utility-yolo-v2-tiny-tests/#yolov2-tiny-example-pipeline","title":"YOLOv2 Tiny Example Pipeline","text":"This is an example implementation of an inference pipeline using the pipeline framework. This example pipeline executes the YOLO V2 Tiny model and decodes its output.
"},{"location":"common/tvm_utility/tvm-utility-yolo-v2-tiny-tests/#compiling-the-example","title":"Compiling the Example","text":"Check if model was downloaded during the env preparation step by ansible and models files exist in the folder $HOME/autoware_data/tvm_utility/models/yolo_v2_tiny.
If not you can download them manually, see Manual Artifacts Downloading.
Download an example image to be used as test input. This image needs to be saved in the artifacts/yolo_v2_tiny/
folder.
curl https://raw.githubusercontent.com/pjreddie/darknet/master/data/dog.jpg \\\n> artifacts/yolo_v2_tiny/test_image_0.jpg\n
Build.
colcon build --packages-up-to tvm_utility --cmake-args -DBUILD_EXAMPLE=ON\n
Run.
ros2 launch tvm_utility yolo_v2_tiny_example.launch.xml\n
image_filename
string $(find-pkg-share tvm_utility)/artifacts/yolo_v2_tiny/test_image_0.jpg
Filename of the image on which to run the inference. label_filename
string $(find-pkg-share tvm_utility)/artifacts/yolo_v2_tiny/labels.txt
Name of file containing the human readable names of the classes. One class on each line. anchor_filename
string $(find-pkg-share tvm_utility)/artifacts/yolo_v2_tiny/anchors.csv
Name of file containing the anchor values for the network. Each line is one anchor. each anchor has 2 comma separated floating point values. data_path
string $(env HOME)/autoware_data
Packages data and artifacts directory path."},{"location":"common/tvm_utility/tvm-utility-yolo-v2-tiny-tests/#gpu-backend","title":"GPU backend","text":"Vulkan is supported by default by the tvm_vendor package. It can be selected by setting the tvm_utility_BACKEND
variable:
colcon build --packages-up-to tvm_utility -Dtvm_utility_BACKEND=vulkan\n
"},{"location":"common/tvm_utility/artifacts/","title":"TVM Utility Artifacts","text":""},{"location":"common/tvm_utility/artifacts/#tvm-utility-artifacts","title":"TVM Utility Artifacts","text":"Place any test artifacts in subdirectories within this directory.
e.g.: ./artifacts/yolo_v2_tiny
"},{"location":"control/autonomous_emergency_braking/","title":"Autonomous Emergency Braking (AEB)","text":""},{"location":"control/autonomous_emergency_braking/#autonomous-emergency-braking-aeb","title":"Autonomous Emergency Braking (AEB)","text":""},{"location":"control/autonomous_emergency_braking/#purpose-role","title":"Purpose / Role","text":"autonomous_emergency_braking
is a module that prevents collisions with obstacles on the predicted path created by a control module or sensor values estimated from the control module.
This module has following assumptions.
AEB has the following steps before it outputs the emergency stop signal.
Activate AEB if necessary.
Generate a predicted path of the ego vehicle.
Get target obstacles from the input point cloud.
Collision check with target obstacles.
Send emergency stop signals to /diagnostics
.
We give more details of each section below.
"},{"location":"control/autonomous_emergency_braking/#1-activate-aeb-if-necessary","title":"1. Activate AEB if necessary","text":"We do not activate AEB module if it satisfies the following conditions.
AEB generates a predicted path based on current velocity and current angular velocity obtained from attached sensors. Note that if use_imu_path
is false
, it skips this step. This predicted path is generated as:
where \\(v\\) and \\(\\omega\\) are current longitudinal velocity and angular velocity respectively. \\(dt\\) is time interval that users can define in advance.
"},{"location":"control/autonomous_emergency_braking/#3-get-target-obstacles-from-the-input-point-cloud","title":"3. Get target obstacles from the input point cloud","text":"After generating the ego predicted path, we select target obstacles from the input point cloud. This obstacle filtering has two major steps, which are rough filtering and rigorous filtering.
"},{"location":"control/autonomous_emergency_braking/#rough-filtering","title":"Rough filtering","text":"In rough filtering step, we select target obstacle with simple filter. Create a search area up to a certain distance (default 5[m]) away from the predicted path of the ego vehicle and ignore the point cloud (obstacles) that are not within it. The image of the rough filtering is illustrated below.
"},{"location":"control/autonomous_emergency_braking/#rigorous-filtering","title":"Rigorous filtering","text":"After rough filtering, it performs a geometric collision check to determine whether the filtered obstacles actually have possibility to collide with the ego vehicle. In this check, the ego vehicle is represented as a rectangle, and the point cloud obstacles are represented as points.
"},{"location":"control/autonomous_emergency_braking/#4-collision-check-with-target-obstacles","title":"4. Collision check with target obstacles","text":"In the fourth step, it checks the collision with filtered obstacles using RSS distance. RSS is formulated as:
\\[ d = v_{ego}*t_{response} + v_{ego}^2/(2*a_{min}) - v_{obj}^2/(2*a_{obj_{min}}) + offset \\]where \\(v_{ego}\\) and \\(v_{obj}\\) is current ego and obstacle velocity, \\(a_{min}\\) and \\(a_{obj_{min}}\\) is ego and object minimum acceleration (maximum deceleration), \\(t_{response}\\) is response time of the ego vehicle to start deceleration. Therefore the distance from the ego vehicle to the obstacle is smaller than this RSS distance \\(d\\), the ego vehicle send emergency stop signals. This is illustrated in the following picture.
"},{"location":"control/autonomous_emergency_braking/#5-send-emergency-stop-signals-to-diagnostics","title":"5. Send emergency stop signals to/diagnostics
","text":"If AEB detects collision with point cloud obstacles in the previous step, it sends emergency signal to /diagnostics
in this step. Note that in order to enable emergency stop, it has to send ERROR level emergency. Moreover, AEB user should modify the setting file to keep the emergency level, otherwise Autoware does not hold the emergency state.
control_performance_analysis
is the package to analyze the tracking performance of a control module and monitor the driving status of the vehicle.
This package is used as a tool to quantify the results of the control module. That's why it doesn't interfere with the core logic of autonomous driving.
Based on the various input from planning, control, and vehicle, it publishes the result of analysis as control_performance_analysis::msg::ErrorStamped
defined in this package.
All results in ErrorStamped
message are calculated in Frenet Frame of curve. Errors and velocity errors are calculated by using paper below.
Werling, Moritz & Groell, Lutz & Bretthauer, Georg. (2010). Invariant Trajectory Tracking With a Full-Size Autonomous Road Vehicle. IEEE Transactions on Robotics. 26. 758 - 765. 10.1109/TRO.2010.2052325.
If you are interested in calculations, you can see the error and error velocity calculations in section C. Asymptotical Trajectory Tracking With Orientation Control
.
Error acceleration calculations are made based on the velocity calculations above. You can see below the calculation of error acceleration.
"},{"location":"control/control_performance_analysis/#input-output","title":"Input / Output","text":""},{"location":"control/control_performance_analysis/#input-topics","title":"Input topics","text":"Name Type Description/planning/scenario_planning/trajectory
autoware_auto_planning_msgs::msg::Trajectory Output trajectory from planning module. /control/command/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand Output control command from control module. /vehicle/status/steering_status
autoware_auto_vehicle_msgs::msg::SteeringReport Steering information from vehicle. /localization/kinematic_state
nav_msgs::msg::Odometry Use twist from odometry. /tf
tf2_msgs::msg::TFMessage Extract ego pose from tf."},{"location":"control/control_performance_analysis/#output-topics","title":"Output topics","text":"Name Type Description /control_performance/performance_vars
control_performance_analysis::msg::ErrorStamped The result of the performance analysis. /control_performance/driving_status
control_performance_analysis::msg::DrivingMonitorStamped Driving status (acceleration, jerk etc.) monitoring"},{"location":"control/control_performance_analysis/#outputs","title":"Outputs","text":""},{"location":"control/control_performance_analysis/#control_performance_analysismsgdrivingmonitorstamped","title":"control_performance_analysis::msg::DrivingMonitorStamped","text":"Name Type Description longitudinal_acceleration
float [m / s^2] longitudinal_jerk
float [m / s^3] lateral_acceleration
float [m / s^2] lateral_jerk
float [m / s^3] desired_steering_angle
float [rad] controller_processing_time
float Timestamp between last two control command messages [ms]"},{"location":"control/control_performance_analysis/#control_performance_analysismsgerrorstamped","title":"control_performance_analysis::msg::ErrorStamped","text":"Name Type Description lateral_error
float [m] lateral_error_velocity
float [m / s] lateral_error_acceleration
float [m / s^2] longitudinal_error
float [m] longitudinal_error_velocity
float [m / s] longitudinal_error_acceleration
float [m / s^2] heading_error
float [rad] heading_error_velocity
float [rad / s] control_effort_energy
float [u * R * u^T] error_energy
float lateral_error^2 + heading_error^2 value_approximation
float V = xPx' ; Value function from DARE Lyap matrix P curvature_estimate
float [1 / m] curvature_estimate_pp
float [1 / m] vehicle_velocity_error
float [m / s] tracking_curvature_discontinuity_ability
float Measures the ability to track curvature changes [abs(delta(curvature)) / (1 + abs(delta(lateral_error))
]"},{"location":"control/control_performance_analysis/#parameters","title":"Parameters","text":"Name Type Description curvature_interval_length
double Used for estimating current curvature prevent_zero_division_value
double Value to avoid zero division. Default is 0.001
odom_interval
unsigned integer Interval between odom messages; increase it for a smoother curve. acceptable_max_distance_to_waypoint
double Maximum distance between trajectory point and vehicle [m] acceptable_max_yaw_difference_rad
double Maximum yaw difference between trajectory point and vehicle [rad] low_pass_filter_gain
double Low pass filter gain"},{"location":"control/control_performance_analysis/#usage","title":"Usage","text":"control_performance_analysis.launch.xml
.Plotjuggler
and use config/controller_monitor.xml
as layout.Plotjuggler
you can export the statistics (max, min, average) as a csv file. Use those statistics to compare the control modules.The control_validator
is a module that checks the validity of the output of the control component. The status of the validation can be viewed in the /diagnostics
topic.
The following features are supported for the validation and can have thresholds set by parameters:
Other features are to be implemented.
"},{"location":"control/control_validator/#inputsoutputs","title":"Inputs/Outputs","text":""},{"location":"control/control_validator/#inputs","title":"Inputs","text":"The control_validator
takes in the following inputs:
~/input/kinematics
nav_msgs/Odometry ego pose and twist ~/input/reference_trajectory
autoware_auto_planning_msgs/Trajectory reference trajectory which is output from the planning module to be followed ~/input/predicted_trajectory
autoware_auto_planning_msgs/Trajectory predicted trajectory which is output from the control module"},{"location":"control/control_validator/#outputs","title":"Outputs","text":"It outputs the following:
Name Type Description~/output/validation_status
control_validator/ControlValidatorStatus validator status to inform the reason why the trajectory is valid/invalid /diagnostics
diagnostic_msgs/DiagnosticStatus diagnostics to report errors"},{"location":"control/control_validator/#parameters","title":"Parameters","text":"The following parameters can be set for the control_validator
:
publish_diag
bool if true, diagnostics msg is published. true diag_error_count_threshold
int the Diag will be set to ERROR when the number of consecutive invalid trajectories exceeds this threshold. (For example, threshold = 1 means that even if the trajectory is invalid, the Diag will not be ERROR if the next trajectory is valid.) true display_on_terminal
bool show error msg on terminal true"},{"location":"control/control_validator/#algorithm-parameters","title":"Algorithm parameters","text":""},{"location":"control/control_validator/#thresholds","title":"Thresholds","text":"The input trajectory is detected as invalid if any of the following thresholds is exceeded.
Name Type Description Default valuethresholds.max_distance_deviation
double invalid threshold of the max distance deviation between the predicted path and the reference trajectory [m] 1.0"},{"location":"control/external_cmd_selector/","title":"external_cmd_selector","text":""},{"location":"control/external_cmd_selector/#external_cmd_selector","title":"external_cmd_selector","text":""},{"location":"control/external_cmd_selector/#purpose","title":"Purpose","text":"external_cmd_selector
is the package to publish external_control_cmd
, gear_cmd
, hazard_lights_cmd
, heartbeat
and turn_indicators_cmd
, according to the current mode, which is remote
or local
.
The current mode is set via service; remote
means the vehicle is remotely operated, and local
means the values calculated by Autoware are used.
/api/external/set/command/local/control
TBD Local. Calculated control value. /api/external/set/command/local/heartbeat
TBD Local. Heartbeat. /api/external/set/command/local/shift
TBD Local. Gear shift command such as drive, reverse, etc. /api/external/set/command/local/turn_signal
TBD Local. Turn signal command such as left turn, right turn, etc. /api/external/set/command/remote/control
TBD Remote. Calculated control value. /api/external/set/command/remote/heartbeat
TBD Remote. Heartbeat. /api/external/set/command/remote/shift
TBD Remote. Gear shift command such as drive, reverse, etc. /api/external/set/command/remote/turn_signal
TBD Remote. Turn signal command such as left turn, right turn, etc."},{"location":"control/external_cmd_selector/#output-topics","title":"Output topics","text":"Name Type Description /control/external_cmd_selector/current_selector_mode
TBD Current selected mode, remote or local. /diagnostics
diagnostic_msgs::msg::DiagnosticArray Check if node is active or not. /external/selected/external_control_cmd
TBD Pass through control command with current mode. /external/selected/gear_cmd
autoware_auto_vehicle_msgs::msg::GearCommand Pass through gear command with current mode. /external/selected/hazard_lights_cmd
autoware_auto_vehicle_msgs::msg::HazardLightsCommand Pass through hazard light with current mode. /external/selected/heartbeat
TBD Pass through heartbeat with current mode. /external/selected/turn_indicators_cmd
autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand Pass through turn indicator with current mode."},{"location":"control/joy_controller/","title":"joy_controller","text":""},{"location":"control/joy_controller/#joy_controller","title":"joy_controller","text":""},{"location":"control/joy_controller/#role","title":"Role","text":"joy_controller
is the package to convert a joy msg to autoware commands (e.g. steering wheel, shift, turn signal, engage) for a vehicle.
~/input/joy
sensor_msgs::msg::Joy joy controller command ~/input/odometry
nav_msgs::msg::Odometry ego vehicle odometry to get twist"},{"location":"control/joy_controller/#output-topics","title":"Output topics","text":"Name Type Description ~/output/control_command
autoware_auto_control_msgs::msg::AckermannControlCommand lateral and longitudinal control command ~/output/external_control_command
tier4_external_api_msgs::msg::ControlCommandStamped lateral and longitudinal control command ~/output/shift
tier4_external_api_msgs::msg::GearShiftStamped gear command ~/output/turn_signal
tier4_external_api_msgs::msg::TurnSignalStamped turn signal command ~/output/gate_mode
tier4_control_msgs::msg::GateMode gate mode (Auto or External) ~/output/heartbeat
tier4_external_api_msgs::msg::Heartbeat heartbeat ~/output/vehicle_engage
autoware_auto_vehicle_msgs::msg::Engage vehicle engage"},{"location":"control/joy_controller/#parameters","title":"Parameters","text":"Parameter Type Description joy_type
string joy controller type (default: DS4) update_rate
double update rate to publish control commands accel_ratio
double ratio to calculate acceleration (commanded acceleration is ratio * operation) brake_ratio
double ratio to calculate deceleration (commanded acceleration is -ratio * operation) steer_ratio
double ratio to calculate steering angle (commanded steer is ratio * operation) steering_angle_velocity
double steering angle velocity for operation accel_sensitivity
double sensitivity to calculate acceleration for external API (commanded acceleration is pow(operation, 1 / sensitivity)) brake_sensitivity
double sensitivity to calculate deceleration for external API (commanded acceleration is pow(operation, 1 / sensitivity)) raw_control
bool skip input odometry if true velocity_gain
double ratio to calculate velocity by acceleration max_forward_velocity
double absolute max velocity to go forward max_backward_velocity
double absolute max velocity to go backward backward_accel_ratio
double ratio to calculate deceleration (commanded acceleration is -ratio * operation)"},{"location":"control/joy_controller/#p65-joystick-key-map","title":"P65 Joystick Key Map","text":"Action Button Acceleration R2 Brake L2 Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left L1 Turn Signal Right R1 Clear Turn Signal A Gate Mode B Emergency Stop Select Clear Emergency Stop Start Autoware Engage X Autoware Disengage Y Vehicle Engage PS Vehicle Disengage Right Trigger"},{"location":"control/joy_controller/#ds4-joystick-key-map","title":"DS4 Joystick Key Map","text":"Action Button Acceleration R2, \u00d7, or Right Stick Up Brake L2, \u25a1, or Right Stick Down Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left L1 Turn Signal Right R1 Clear Turn Signal SHARE Gate Mode OPTIONS Emergency Stop PS Clear Emergency Stop PS Autoware Engage \u25cb Autoware Disengage \u25cb Vehicle Engage \u25b3 Vehicle Disengage \u25b3"},{"location":"control/joy_controller/#xbox-joystick-key-map","title":"XBOX Joystick Key Map","text":"Action Button Acceleration RT Brake LT Steering Left Stick Left Right Shift up Cursor Up Shift down Cursor Down Shift Drive Cursor Left Shift Reverse Cursor Right Turn Signal Left LB Turn Signal Right RB Clear Turn Signal A Gate Mode B Emergency Stop View Clear Emergency Stop Menu Autoware Engage X Autoware Disengage Y Vehicle Engage Left Stick Button Vehicle Disengage Right Stick Button"},{"location":"control/lane_departure_checker/","title":"Lane Departure Checker","text":""},{"location":"control/lane_departure_checker/#lane-departure-checker","title":"Lane Departure Checker","text":"The Lane Departure Checker checks if vehicle follows a trajectory. If it does not follow the trajectory, it reports its status via diagnostic_updater
.
This package includes the following features:
Calculate the standard deviation of the error ellipse (covariance) in the vehicle coordinate frame.
1. Transform the covariance into the vehicle coordinate frame.
Calculate the covariance in the vehicle coordinate frame.
2. The longitudinal length we want to expand corresponds to the marginal distribution of \(x_{vehicle}\), which is represented by \(Cov_{vehicle}(0,0)\). In the same way, the lateral length is represented by \(Cov_{vehicle}(1,1)\). Wikipedia reference here. (A small illustrative sketch is given after this list.)
Expand the footprint based on the standard deviation multiplied by footprint_margin_scale
.
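As a minimal illustrative sketch of steps 1 and 2 above (not the package's actual implementation; the function name and margin layout are hypothetical), the covariance rotation and the resulting footprint margins could be computed with Eigen as follows:

```cpp
#include <Eigen/Dense>
#include <cmath>

// Hypothetical sketch: rotate a 2x2 position covariance from the map frame
// into the vehicle frame, then derive longitudinal/lateral margins.
Eigen::Vector2d calcFootprintMargin(
  const Eigen::Matrix2d & cov_map, double yaw, double footprint_margin_scale)
{
  // Rotation from the map frame to the vehicle frame.
  const Eigen::Matrix2d rot = Eigen::Rotation2Dd(yaw).toRotationMatrix();
  // Cov_vehicle = R^T * Cov_map * R
  const Eigen::Matrix2d cov_vehicle = rot.transpose() * cov_map * rot;
  // Standard deviations of the marginal distributions along x_vehicle / y_vehicle.
  const double sigma_longitudinal = std::sqrt(cov_vehicle(0, 0));
  const double sigma_lateral = std::sqrt(cov_vehicle(1, 1));
  // The footprint is expanded by the scaled standard deviations.
  return Eigen::Vector2d(
    footprint_margin_scale * sigma_longitudinal, footprint_margin_scale * sigma_lateral);
}
```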
nav_msgs::msg::Odometry
]autoware_auto_mapping_msgs::msg::HADMapBin
]autoware_planning_msgs::msg::LaneletRoute
]autoware_auto_planning_msgs::msg::Trajectory
]autoware_auto_planning_msgs::msg::Trajectory
]diagnostic_updater
] lane_departure : Update diagnostic level when ego vehicle is out of lane.diagnostic_updater
] trajectory_deviation : Update diagnostic level when ego vehicle deviates from trajectory.This is the design document for the lateral controller node in the trajectory_follower_node
package.
This node is used to generate lateral control commands (steering angle and steering rate) when following a path.
"},{"location":"control/mpc_lateral_controller/#design","title":"Design","text":"The node uses an implementation of linear model predictive control (MPC) for accurate path tracking. The MPC uses a model of the vehicle to simulate the trajectory resulting from the control command. The optimization of the control command is formulated as a Quadratic Program (QP).
Different vehicle models are implemented:
For the optimization, a Quadratic Programming (QP) solver is used and two options are currently implemented:
Filtering is required for good noise reduction. A Butterworth filter is employed for processing the yaw and lateral errors, which are used as inputs for the MPC, as well as for refining the output steering angle. Other filtering methods can be considered as long as the noise reduction performance is good enough. The moving average filter, for example, is not suited and can yield worse results than no filtering at all.
"},{"location":"control/mpc_lateral_controller/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The tracking is not accurate if the first point of the reference trajectory is at or in front of the current ego pose.
"},{"location":"control/mpc_lateral_controller/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"control/mpc_lateral_controller/#inputs","title":"Inputs","text":"Set the following from the controller_node
autoware_auto_planning_msgs/Trajectory
: reference trajectory to follow.nav_msgs/Odometry
: current odometryautoware_auto_vehicle_msgs/SteeringReport
: current steeringReturn LateralOutput which contains the following to the controller node
autoware_auto_control_msgs/AckermannLateralCommand
The MPC
class (defined in mpc.hpp
) provides the interface with the MPC algorithm. Once a vehicle model, a QP solver, and the reference trajectory to follow have been set (using setVehicleModel()
, setQPSolver()
, setReferenceTrajectory()
), a lateral control command can be calculated by providing the current steer, velocity, and pose to function calculateMPC()
.
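As a rough illustration of this call sequence (a sketch only: the method names come from this document, but the argument lists and types shown here are assumptions and may differ from the actual mpc.hpp):

```cpp
// Hypothetical usage sketch of the MPC class interface described above.
MPC mpc;
mpc.setVehicleModel(vehicle_model_ptr);            // e.g. a kinematics model
mpc.setQPSolver(qp_solver_ptr);                    // e.g. an OSQP-based solver
mpc.setReferenceTrajectory(reference_trajectory);  // trajectory to follow

// Every control cycle: provide the current steer, velocity, and pose,
// and receive the lateral control command.
AckermannLateralCommand ctrl_cmd;
const bool success = mpc.calculateMPC(current_steer, current_velocity, current_pose, ctrl_cmd);
```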
The default parameters defined in param/lateral_controller_defaults.param.yaml
are adjusted to the AutonomouStuff Lexus RX 450h for under 40 km/h driving.
(*1) To prevent unnecessary steering movement, the steering command is fixed to the previous value in the stop state.
"},{"location":"control/mpc_lateral_controller/#steer-offset","title":"Steer Offset","text":"Defined in the steering_offset
namespace. This logic is designed to be as simple as possible, with a minimum number of design parameters.
First, it's important to set the appropriate parameters for vehicle kinematics. This includes parameters like wheelbase
, which represents the distance between the front and rear wheels, and max_steering_angle
, which indicates the maximum tire steering angle. These parameters should be set in the vehicle_info.param.yaml
.
Next, you need to set the proper parameters for the dynamics model. These include the time constant steering_tau
and time delay steering_delay
for steering dynamics, and the maximum acceleration mpc_acceleration_limit
and the time constant mpc_velocity_time_constant
for velocity dynamics.
It's also important to make sure the input information is accurate. Information such as the velocity of the center of the rear wheel [m/s] and the steering angle of the tire [rad] is required. Please note that there have been frequent reports of performance degradation due to errors in input information. For instance, there are cases where the velocity of the vehicle is offset due to an unexpected difference in tire radius, or the tire angle cannot be accurately measured due to a deviation in the steering gear ratio or midpoint. It is suggested to compare information from multiple sensors (e.g., integrated vehicle speed and GNSS position, steering angle and IMU angular velocity), and ensure the input information for MPC is appropriate.
"},{"location":"control/mpc_lateral_controller/#mpc-weight-tuning","title":"MPC weight tuning","text":"Then, tune the weights of the MPC. One simple approach of tuning is to keep the weight for the lateral deviation (weight_lat_error
) constant, and vary the input weight (weight_steering_input
) while observing the trade-off between steering oscillation and control accuracy.
Here, weight_lat_error
acts to suppress the lateral error in path following, while weight_steering_input
works to adjust the steering angle to a standard value determined by the path's curvature. When weight_lat_error
is large, the steering moves significantly to improve accuracy, which can cause oscillations. On the other hand, when weight_steering_input
is large, the steering doesn't respond much to tracking errors, providing stable driving but potentially reducing tracking accuracy.
The steps are as follows:
weight_lat_error
= 0.1, weight_steering_input
= 1.0 and other weights to 0.weight_steering_input
larger.weight_steering_input
smaller.If you want to adjust the effect only in the high-speed range, you can use weight_steering_input_squared_vel
. This parameter corresponds to the steering weight in the high-speed range.
weight_lat_error
: Reduce lateral tracking error. This acts like P gain in PID.weight_heading_error
: Make the vehicle drive straight. This acts like D gain in PID.weight_heading_error_squared_vel_coeff
: Make the vehicle drive straight in the high-speed range.weight_steering_input
: Reduce oscillation of tracking.weight_steering_input_squared_vel_coeff
: Reduce oscillation of tracking in high speed range.weight_lat_jerk
: Reduce lateral jerk.weight_terminal_lat_error
: Preferable to set a higher value than normal lateral weight weight_lat_error
for stability.weight_terminal_heading_error
: Preferable to set a higher value than normal heading weight weight_heading_error
for stability.Here are some tips for adjusting other parameters:
weight_terminal_lat_error
and weight_terminal_heading_error
, can enhance the tracking stability. This method sometimes proves effective.prediction_horizon
and a smaller prediction_sampling_time
are efficient for tracking performance. However, these come at the cost of higher computational costs.mpc_low_curvature_thresh_curvature
and adjust mpc_low_curvature_weight_**
weights.steer_rate_lim_dps_list_by_curvature
, curvature_list_for_steer_rate_lim
, steer_rate_lim_dps_list_by_velocity
, velocity_list_for_steer_rate_lim
. By doing this, you can enforce the steering rate limit during high-speed driving or relax it while curving.curvature_smoothing
becomes critically important for accurate curvature calculations. A larger value yields a smooth curvature calculation which reduces noise but can cause delay in feedforward computation and potentially degrade performance.steering_lpf_cutoff_hz
value can also be effective to forcefully reduce computational noise. This refers to the cutoff frequency in the second order Butterworth filter installed in the final layer. The smaller the cutoff frequency, the stronger the noise reduction, but it also induce operation delay.enable_auto_steering_offset_removal
to true and activate the steering offset remover. The steering offset estimation logic works when driving at high speeds with the steering close to the center, applying offset removal.input_delay
and vehicle_model_steer_tau
. Additionally, as a part of its debug information, MPC outputs the current steering angle assumed by the MPC model, so please check if that steering angle matches the actual one.Model Predictive Control (MPC) is a control method that solves an optimization problem during each control cycle to determine an optimal control sequence based on a given vehicle model. The calculated sequence of control inputs is used to control the system.
In simpler terms, an MPC controller calculates a series of control inputs that optimize the state and output trajectories to achieve the desired behavior. The key characteristics of an MPC control system can be summarized as follows:
The choice between a linear or nonlinear model or constraint equation depends on the specific formulation of the MPC problem. If any nonlinear expressions are present in the motion equation or constraints, the optimization problem becomes nonlinear. In the following sections, we provide a step-by-step explanation of how linear and nonlinear optimization problems are solved within the MPC framework. Note that in this documentation, we utilize the linearization method to accommodate the nonlinear model.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#linear-mpc-formulation","title":"Linear MPC formulation","text":""},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#formulate-as-an-optimization-problem","title":"Formulate as an optimization problem","text":"This section provides an explanation of MPC specifically for linear systems. In the following section, it also demonstrates the formulation of a vehicle path following problem as an application.
In the linear MPC formulation, all motion and constraint expressions are linear. For the path following problem, let's assume that the system's motion can be described by a set of equations, denoted as (1). The state evolution and measurements are presented in a discrete state space format, where matrices \\(A\\), \\(B\\), and \\(C\\) represent the state transition, control, and measurement matrices, respectively.
\\[ \\begin{gather} x_{k+1}=Ax_{k}+Bu_{k}+w_{k}, y_{k}=Cx_{k} \\tag{1} \\\\ x_{k}\\in R^{n},u_{k}\\in R^{m},w_{k}\\in R^{n}, y_{k}\\in R^{l}, A\\in R^{n\\times n}, B\\in R^{n\\times m}, C\\in R^{l \\times n} \\end{gather} \\]Equation (1) represents the state-space equation, where \\(x_k\\) represents the internal states, \\(u_k\\) denotes the input, and \\(w_k\\) represents a known disturbance caused by linearization or problem structure. The measurements are indicated by the variable \\(y_k\\).
It's worth noting that another advantage of MPC is its ability to effectively handle the disturbance term \\(w\\). While it is referred to as a disturbance here, it can take various forms as long as it adheres to the equation's structure.
The state transition and measurement equations in (1) are iterative, moving from time \\(k\\) to time \\(k+1\\). By propagating the equation starting from an initial state and control pair \\((x_0, u_0)\\) along with a specified horizon of \\(N\\) steps, one can predict the trajectories of states and measurements.
For simplicity, let's assume the initial state is \\(x_0\\) with \\(k=0\\).
To begin, we can compute the state \\(x_1\\) at \\(k=1\\) using equation (1) by substituting the initial state into the equation. Since we are seeking a solution for the input sequence, we represent the inputs as decision variables in the symbolic expressions.
\\[ \\begin{align} x_{1} = Ax_{0} + Bu_{0} + w_{0} \\tag{2} \\end{align} \\]Then, when \\(k=2\\), using also equation (2), we get
\\[ \\begin{align} x_{2} & = Ax_{1} + Bu_{1} + w_{1} \\\\ & = A(Ax_{0} + Bu_{0} + w_{0}) + Bu_{1} + w_{1} \\\\ & = A^{2}x_{0} + ABu_{0} + Aw_{0} + Bu_{1} + w_{1} \\\\ & = A^{2}x_{0} + \\begin{bmatrix}AB & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\ u_{1} \\end{bmatrix} + \\begin{bmatrix}A & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\ w_{1} \\end{bmatrix} \\tag{3} \\end{align} \\]When \\(k=3\\) , from equation (3)
\\[ \\begin{align} x_{3} & = Ax_{2} + Bu_{2} + w_{2} \\\\ & = A(A^{2}x_{0} + ABu_{0} + Bu_{1} + Aw_{0} + w_{1} ) + Bu_{2} + w_{2} \\\\ & = A^{3}x_{0} + A^{2}Bu_{0} + ABu_{1} + A^{2}w_{0} + Aw_{1} + Bu_{2} + w_{2} \\\\ & = A^{3}x_{0} + \\begin{bmatrix}A^{2}B & AB & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\ u_{1} \\\\ u_{2} \\end{bmatrix} + \\begin{bmatrix} A^{2} & A & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\ w_{1} \\\\ w_{2} \\end{bmatrix} \\tag{4} \\end{align} \\]If \\(k=n\\) , then
\\[ \\begin{align} x_{n} = A^{n}x_{0} + \\begin{bmatrix}A^{n-1}B & A^{n-2}B & \\dots & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\ u_{1} \\\\ \\vdots \\\\ u_{n-1} \\end{bmatrix} + \\begin{bmatrix} A^{n-1} & A^{n-2} & \\dots & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\ w_{1} \\\\ \\vdots \\\\ w_{n-1} \\end{bmatrix} \\tag{5} \\end{align} \\]Putting all of them together with (2) to (5) yields the following matrix equation;
\\[ \\begin{align} \\begin{bmatrix}x_{1}\\\\ x_{2} \\\\ x_{3} \\\\ \\vdots \\\\ x_{n} \\end{bmatrix} = \\begin{bmatrix}A^{1}\\\\ A^{2} \\\\ A^{3} \\\\ \\vdots \\\\ A^{n} \\end{bmatrix}x_{0} + \\begin{bmatrix}B & 0 & \\dots & & 0 \\\\ AB & B & 0 & \\dots & 0 \\\\ A^{2}B & AB & B & \\dots & 0 \\\\ \\vdots & \\vdots & & & 0 \\\\ A^{n-1}B & A^{n-2}B & \\dots & AB & B \\end{bmatrix}\\begin{bmatrix}u_{0}\\\\ u_{1} \\\\ u_{2} \\\\ \\vdots \\\\ u_{n-1} \\end{bmatrix} \\\\ + \\begin{bmatrix}I & 0 & \\dots & & 0 \\\\ A & I & 0 & \\dots & 0 \\\\ A^{2} & A & I & \\dots & 0 \\\\ \\vdots & \\vdots & & & 0 \\\\ A^{n-1} & A^{n-2} & \\dots & A & I \\end{bmatrix}\\begin{bmatrix}w_{0}\\\\ w_{1} \\\\ w_{2} \\\\ \\vdots \\\\ w_{n-1} \\end{bmatrix} \\tag{6} \\end{align} \\]In this case, the measurements (outputs) become; \\(y_{k}=Cx_{k}\\), so
\\[ \\begin{align} \\begin{bmatrix}y_{1}\\\\ y_{2} \\\\ y_{3} \\\\ \\vdots \\\\ y_{n} \\end{bmatrix} = \\begin{bmatrix}C & 0 & \\dots & & 0 \\\\ 0 & C & 0 & \\dots & 0 \\\\ 0 & 0 & C & \\dots & 0 \\\\ \\vdots & & & \\ddots & 0 \\\\ 0 & \\dots & 0 & 0 & C \\end{bmatrix}\\begin{bmatrix}x_{1}\\\\ x_{2} \\\\ x_{3} \\\\ \\vdots \\\\ x_{n} \\end{bmatrix} \\tag{7} \\end{align} \\]We can combine equations (6) and (7) into the following form:
\\[ \\begin{align} X = Fx_{0} + GU +SW, Y=HX \\tag{8} \\end{align} \\]This form is similar to the original state-space equations (1), but it introduces new matrices: the state transition matrix \\(F\\), control matrix \\(G\\), disturbance matrix \\(W\\), and measurement matrix \\(H\\). In these equations, \\(X\\) represents the predicted states, given by \\(\\begin{bmatrix}x_{1} & x_{2} & \\dots & x_{n} \\end{bmatrix}^{T}\\).
Now that \\(G\\), \\(S\\), \\(W\\), and \\(H\\) are known, we can express the output behavior \\(Y\\) for the next \\(n\\) steps as a function of the input \\(U\\). This allows us to calculate the control input \\(U\\) so that \\(Y(U)\\) follows the target trajectory \\(Y_{ref}\\).
The next step is to define a cost function. The cost function generally uses the following quadratic form;
\\[ \\begin{align} J = (Y - Y_{ref})^{T}Q(Y - Y_{ref}) + (U - U_{ref})^{T}R(U - U_{ref}) \\tag{9} \\end{align} \\]where \\(U_{ref}\\) is the target or steady-state input around which the system is linearized for \\(U\\).
This cost function is the same as that of the LQR controller. The first term of \\(J\\) penalizes the deviation from the reference trajectory. The second term penalizes the deviation from the reference (or steady-state) control trajectory. The \\(Q\\) and \\(R\\) are the cost weights Positive and Positive semi-semidefinite matrices.
Note: in some cases, \\(U_{ref}=0\\) is used, but this can mean the steering angle should be set to \\(0\\) even if the vehicle is turning a curve. Thus \\(U_{ref}\\) is used for the explanation here. This \\(U_{ref}\\) can be pre-calculated from the curvature of the target trajectory or the steady-state analyses.
As the resulting trajectory output is now \\(Y=Y(x_{0}, U)\\), the cost function depends only on U and the initial state conditions which yields the cost \\(J=J(x_{0}, U)\\). Let\u2019s find the \\(U\\) that minimizes this.
Substituting equation (8) into equation (9) and tidying up the equation for \\(U\\).
\\[ \\begin{align} J(U) &= (H(Fx_{0}+GU+SW)-Y_{ref})^{T}Q(H(Fx_{0}+GU+SW)-Y_{ref})+(U-U_{ref})^{T}R(U-U_{ref}) \\\\ & =U^{T}(G^{T}H^{T}QHG+R)U+2\\left\\{(H(Fx_{0}+SW)-Y_{ref})^{T}QHG-U_{ref}^{T}R\\right\\}U +(\\rm{constant}) \\tag{10} \\end{align} \\]This equation is a quadratic form of \\(U\\) (i.e. \\(U^{T}AU+B^{T}U\\))
The coefficient matrix of the quadratic term of \\(U\\), \\(G^{T}C^{T}QCG+R\\) , is positive definite due to the positive and semi-positive definiteness requirement for \\(Q\\) and \\(R\\). Therefore, the cost function is a convex quadratic function in U, which can efficiently be solved by convex optimization.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#apply-to-vehicle-path-following-problem-nonlinear-problem","title":"Apply to vehicle path-following problem (nonlinear problem)","text":"Because the path-following problem with a kinematic vehicle model is nonlinear, we cannot directly use the linear MPC methods described in the preceding section. There are several ways to deal with a nonlinearity such as using the nonlinear optimization solver. Here, the linearization is applied to the nonlinear vehicle model along the reference trajectory, and consequently, the nonlinear model is converted into a linear time-varying model.
For a nonlinear kinematic vehicle model, the discrete-time update equations are as follows:
\\[ \\begin{align} x_{k+1} &= x_{k} + v\\cos\\theta_{k} \\text{d}t \\\\ y_{k+1} &= y_{k} + v\\sin\\theta_{k} \\text{d}t \\\\ \\theta_{k+1} &= \\theta_{k} + \\frac{v\\tan\\delta_{k}}{L} \\text{d}t \\tag{11} \\\\ \\delta_{k+1} &= \\delta_{k} - \\tau^{-1}\\left(\\delta_{k}-\\delta_{des}\\right)\\text{d}t \\end{align} \\]The vehicle reference is the center of the rear axle and all states are measured at this point. The states, parameters, and control variables are shown in the following table.
Symbol Represent \\(v\\) Vehicle speed measured at the center of rear axle \\(\\theta\\) Yaw (heading angle) in global coordinate system \\(\\delta\\) Vehicle steering angle \\(\\delta_{des}\\) Vehicle target steering angle \\(L\\) Vehicle wheelbase (distance between the rear and front axles) \\(\\tau\\) Time constant for the first order steering dynamicsWe assume in this example that the MPC only generates the steering control, and the trajectory generator gives the vehicle speed along the trajectory.
The kinematic vehicle model discrete update equations contain trigonometric functions; sin and cos, and the vehicle coordinates \\(x\\), \\(y\\), and yaw angles are global coordinates. In path tracking applications, it is common to reformulate the model in error dynamics to convert the control into a regulator problem in which the targets become zero (zero error).
We make small angle assumptions for the following derivations of linear equations. Given the nonlinear dynamics and omitting the longitudinal coordinate \\(x\\), the resulting set of equations become;
\\[ \\begin{align} y_{k+1} &= y_{k} + v\\sin\\theta_{k} \\text{d}t \\\\ \\theta_{k+1} &= \\theta_{k} + \\frac{v\\tan\\delta_{k}}{L} \\text{d}t - \\kappa_{r}v\\cos\\theta_{k}\\text{d}t \\tag{12} \\\\ \\delta_{k+1} &= \\delta_{k} - \\tau^{-1}\\left(\\delta_{k}-\\delta_{des}\\right)\\text{d}t \\end{align} \\]Where \\(\\kappa_{r}\\left(s\\right)\\) is the curvature along the trajectory parametrized by the arc length.
There are three expressions in the update equations that are subject to linear approximation: the lateral deviation (or lateral coordinate) \\(y\\), the heading angle (or the heading angle error) \\(\\theta\\), and the steering \\(\\delta\\). We can make a small angle assumption on the heading angle \\(\\theta\\).
In the path tracking problem, the curvature of the trajectory \\(\\kappa_{r}\\) is known in advance. At the lower speeds, the Ackermann formula approximates the reference steering angle \\(\\theta_{r}\\)(this value corresponds to the \\(U_{ref}\\) mentioned above). The Ackermann steering expression can be written as;
\\[ \\begin{align} \\delta_{r} = \\arctan\\left(L\\kappa_{r}\\right) \\end{align} \\]When the vehicle is turning a path, its steer angle \\(\\delta\\) should be close to the value \\(\\delta_{r}\\). Therefore, \\(\\delta\\) can be expressed,
\\[ \\begin{align} \\delta = \\delta_{r} + \\Delta \\delta, \\Delta\\delta \\ll 1 \\end{align} \\]Substituting this equation into equation (12), and approximate \\(\\Delta\\delta\\) to be small.
\\[ \\begin{align} \\tan\\delta &\\simeq \\tan\\delta_{r} + \\frac{\\text{d}\\tan\\delta}{\\text{d}\\delta} \\Biggm|_{\\delta=\\delta_{r}}\\Delta\\delta \\\\ &= \\tan \\delta_{r} + \\frac{1}{\\cos^{2}\\delta_{r}}\\Delta\\delta \\\\ &= \\tan \\delta_{r} + \\frac{1}{\\cos^{2}\\delta_{r}}\\left(\\delta-\\delta_{r}\\right) \\\\ &= \\tan \\delta_{r} - \\frac{\\delta_{r}}{\\cos^{2}\\delta_{r}} + \\frac{1}{\\cos^{2}\\delta_{r}}\\delta \\end{align} \\]Using this, \\(\\theta_{k+1}\\) can be expressed
\\[ \\begin{align} \\theta_{k+1} &= \\theta_{k} + \\frac{v\\tan\\delta_{k}}{L}\\text{d}t - \\kappa_{r}v\\cos\\delta_{k}\\text{d}t \\\\ &\\simeq \\theta_{k} + \\frac{v}{L}\\text{d}t\\left(\\tan\\delta_{r} - \\frac{\\delta_{r}}{\\cos^{2}\\delta_{r}} + \\frac{1}{\\cos^{2}\\delta_{r}}\\delta_{k} \\right) - \\kappa_{r}v\\text{d}t \\\\ &= \\theta_{k} + \\frac{v}{L}\\text{d}t\\left(L\\kappa_{r} - \\frac{\\delta_{r}}{\\cos^{2}\\delta_{r}} + \\frac{1}{\\cos^{2}\\delta_{r}}\\delta_{k} \\right) - \\kappa_{r}v\\text{d}t \\\\ &= \\theta_{k} + \\frac{v}{L}\\frac{\\text{d}t}{\\cos^{2}\\delta_{r}}\\delta_{k} - \\frac{v}{L}\\frac{\\delta_{r}\\text{d}t}{\\cos^{2}\\delta_{r}} \\end{align} \\]Finally, the linearized time-varying model equation becomes;
\\[ \\begin{align} \\begin{bmatrix} y_{k+1} \\\\ \\theta_{k+1} \\\\ \\delta_{k+1} \\end{bmatrix} = \\begin{bmatrix} 1 & v\\text{d}t & 0 \\\\ 0 & 1 & \\frac{v}{L}\\frac{\\text{d}t}{\\cos^{2}\\delta_{r}} \\\\ 0 & 0 & 1 - \\tau^{-1}\\text{d}t \\end{bmatrix} \\begin{bmatrix} y_{k} \\\\ \\theta_{k} \\\\ \\delta_{k} \\end{bmatrix} + \\begin{bmatrix} 0 \\\\ 0 \\\\ \\tau^{-1}\\text{d}t \\end{bmatrix}\\delta_{des} + \\begin{bmatrix} 0 \\\\ -\\frac{v}{L}\\frac{\\delta_{r}\\text{d}t}{\\cos^{2}\\delta_{r}} \\\\ 0 \\end{bmatrix} \\end{align} \\]This equation has the same form as equation (1) of the linear MPC assumption, but the matrices \\(A\\), \\(B\\), and \\(w\\) change depending on the coordinate transformation. To make this explicit, the entire equation is written as follows
\\[ \\begin{align} x_{k+1} = A_{k}x_{k} + B_{k}u_{k}+w_{k} \\end{align} \\]Comparing equation (1), \\(A \\rightarrow A_{k}\\). This means that the \\(A\\) matrix is a linear approximation in the vicinity of the trajectory after \\(k\\) steps (i.e., \\(k* \\text{d}t\\) seconds), and it can be obtained if the trajectory is known in advance.
Using this equation, write down the update equation likewise (2) ~ (6)
\\[ \\begin{align} \\begin{bmatrix} x_{1} \\\\ x_{2} \\\\ x_{3} \\\\ \\vdots \\\\ x_{n} \\end{bmatrix} = \\begin{bmatrix} A_{1} \\\\ A_{1}A_{0} \\\\ A_{2}A_{1}A_{0} \\\\ \\vdots \\\\ \\prod_{i=0}^{n-1} A_{k} \\end{bmatrix} x_{0} + \\begin{bmatrix} B_{0} & 0 & \\dots & & 0 \\\\ A_{1}B_{0} & B_{1} & 0 & \\dots & 0 \\\\ A_{2}A_{1}B_{0} & A_{2}B_{1} & B_{2} & \\dots & 0 \\\\ \\vdots & \\vdots & &\\ddots & 0 \\\\ \\prod_{i=1}^{n-1} A_{k}B_{0} & \\prod_{i=2}^{n-1} A_{k}B_{1} & \\dots & A_{n-1}B_{n-1} & B_{n-1} \\end{bmatrix} \\begin{bmatrix} u_{0} \\\\ u_{1} \\\\ u_{2} \\\\ \\vdots \\\\ u_{n-1} \\end{bmatrix} + \\begin{bmatrix} I & 0 & \\dots & & 0 \\\\ A_{1} & I & 0 & \\dots & 0 \\\\ A_{2}A_{1} & A_{2} & I & \\dots & 0 \\\\ \\vdots & \\vdots & &\\ddots & 0 \\\\ \\prod_{i=1}^{n-1} A_{k} & \\prod_{i=2}^{n-1} A_{k} & \\dots & A_{n-1} & I \\end{bmatrix} \\begin{bmatrix} w_{0} \\\\ w_{1} \\\\ w_{2} \\\\ \\vdots \\\\ w_{n-1} \\end{bmatrix} \\end{align} \\]As it has the same form as equation (6), convex optimization is applicable for as much as the model in the former section.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#the-cost-functions-and-constraints","title":"The cost functions and constraints","text":"In this section, we give the details on how to set up the cost function and constraint conditions.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#the-cost-function","title":"The cost function","text":""},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#weight-for-error-and-input","title":"Weight for error and input","text":"MPC states and control weights appear in the cost function in a similar way as LQR (9). In the vehicle path following the problem described above, if C is the unit matrix, the output \\(y = x = \\left[y, \\theta, \\delta\\right]\\). (To avoid confusion with the y-directional deviation, here \\(e\\) is used for the lateral deviation.)
As an example, let's determine the weight matrix \\(Q_{1}\\) of the evaluation function for the number of prediction steps \\(n=2\\) system as follows.
\\[ \\begin{align} Q_{1} = \\begin{bmatrix} q_{e} & 0 & 0 & 0 & 0& 0 \\\\ 0 & q_{\\theta} & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & q_{e} & 0 & 0 \\\\ 0 & 0 & 0 & 0 & q_{\\theta} & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 \\end{bmatrix} \\end{align} \\]The first term in the cost function (9) with \\(n=2\\), is shown as follow (\\(Y_{ref}\\) is set to \\(0\\))
\\[ \\begin{align} q_{e}\\left(e_{0}^{2} + e_{1}^{2} \\right) + q_{\\theta}\\left(\\theta_{0}^{2} + \\theta_{1}^{2} \\right) \\end{align} \\]This shows that \\(q_{e}\\) is the weight for the lateral error and \\(q\\) is for the angular error. In this example, \\(q_{e}\\) acts as the proportional - P gain and \\(q_{\\theta}\\) as the derivative - D gain for the lateral tracking error. The balance of these factors (including R) will be determined through actual experiments.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#weight-for-non-diagonal-term","title":"Weight for non-diagonal term","text":"MPC can handle the non-diagonal term in its calculation (as long as the resulting matrix is positive definite).
For instance, write \\(Q_{2}\\) as follows for the \\(n=2\\) system.
\\[ \\begin{align} Q_{2} = \\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & q_{d} & 0 & 0 & -q_{d} \\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0 & 0 & 0 \\\\ 0 & 0 & -q_{d} & 0 & 0 & q_{d} \\end{bmatrix} \\end{align} \\]Expanding the first term of the evaluation function using \\(Q_{2}\\)
\\[ \\begin{align} q_{d}\\left(\\delta_{0}^{2} -2\\delta_{0}\\delta_{1} + \\delta_{1}^{2} \\right) = q_{d}\\left( \\delta_{0} - \\delta_{1}\\right)^{2} \\end{align} \\]The value of \\(q_{d}\\) is weighted by the amount of change in \\(\\delta\\), which will prevent the tire from moving quickly. By adding this section, the system can evaluate the balance between tracking accuracy and change of steering wheel angle.
Since the weight matrix can be added linearly, the final weight can be set as \\(Q = Q_{1} + Q_{2}\\).
Furthermore, MPC optimizes over a period of time, the time-varying weight can be considered in the optimization.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#constraints","title":"Constraints","text":""},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#input-constraint","title":"Input constraint","text":"The main advantage of MPC controllers is the capability to deal with any state or input constraints. The constraints can be expressed as box constraints, such as \"the tire angle must be within \u00b130 degrees\", and can be put in the following form;
\\[ \\begin{align} u_{min} < u < u_{max} \\end{align} \\]The constraints must be linear and convex in the linear MPC applications.
"},{"location":"control/mpc_lateral_controller/model_predictive_control_algorithm/#constraints-on-the-derivative-of-the-input","title":"Constraints on the derivative of the input","text":"We can also put constraints on the input deviations. As the derivative of steering angle is \\(\\dot{u}\\), its box constraint is
\\[ \\begin{align} \\dot{u}_{min} < \\dot{u} < \\dot{u}_{max} \\end{align} \\]We discretize \\(\\dot{u}\\) as \\(\\left(u_{k} - u_{k-1}\\right)/\\text{d}t\\) and multiply both sides by dt, and the resulting constraint become linear and convex
\\[ \\begin{align} \\dot{u}_{min}\\text{d}t < u_{k} - u_{k-1} < \\dot{u}_{max}\\text{d}t \\end{align} \\]Along the prediction or control horizon, i.e for setting \\(n=3\\)
\\[ \\begin{align} \\dot{u}_{min}\\text{d}t < u_{1} - u_{0} < \\dot{u}_{max}\\text{d}t \\\\ \\dot{u}_{min}\\text{d}t < u_{2} - u_{1} < \\dot{u}_{max}\\text{d}t \\end{align} \\]and aligning the inequality signs
\\[ \\begin{align} u_{1} - u_{0} &< \\dot{u}_{max}\\text{d}t \\\\ + u_{1} + u_{0} &< -\\dot{u}_{min}\\text{d}t \\\\ u_{2} - u_{1} &< \\dot{u}_{max}\\text{d}t \\\\ + u_{2} + u_{1} &< - \\dot{u}_{min}\\text{d}t \\end{align} \\]We can obtain a matrix expression for the resulting constraint equation in the form of
\\[ \\begin{align} Ax \\leq b \\end{align} \\]Thus, putting this inequality to fit the form above, the constraints against \\(\\dot{u}\\) can be included at the first-order approximation level.
\\[ \\begin{align} \\begin{bmatrix} -1 & 1 & 0 \\\\ 1 & -1 & 0 \\\\ 0 & -1 & 1 \\\\ 0 & 1 & -1 \\end{bmatrix}\\begin{bmatrix} u_{0} \\\\ u_{1} \\\\ u_{2} \\end{bmatrix} \\leq \\begin{bmatrix} \\dot{u}_{max}\\text{d}t \\\\ -\\dot{u}_{min}\\text{d}t \\\\ \\dot{u}_{max}\\text{d}t \\\\ -\\dot{u}_{min}\\text{d}t \\end{bmatrix} \\end{align} \\]"},{"location":"control/obstacle_collision_checker/","title":"obstacle_collision_checker","text":""},{"location":"control/obstacle_collision_checker/#obstacle_collision_checker","title":"obstacle_collision_checker","text":""},{"location":"control/obstacle_collision_checker/#purpose","title":"Purpose","text":"obstacle_collision_checker
is a module to check obstacle collision for predicted trajectory and publish diagnostic errors if collision is found.
Check that obstacle_collision_checker
receives no ground pointcloud, predicted_trajectory, reference trajectory, and current velocity data.
If any collision is found on predicted path, this module sets ERROR
level as diagnostic status else sets OK
.
~/input/trajectory
autoware_auto_planning_msgs::msg::Trajectory
Reference trajectory ~/input/trajectory
autoware_auto_planning_msgs::msg::Trajectory
Predicted trajectory /perception/obstacle_segmentation/pointcloud
sensor_msgs::msg::PointCloud2
Pointcloud of obstacles which the ego-vehicle should stop or avoid /tf
tf2_msgs::msg::TFMessage
TF /tf_static
tf2_msgs::msg::TFMessage
TF static"},{"location":"control/obstacle_collision_checker/#output","title":"Output","text":"Name Type Description ~/debug/marker
visualization_msgs::msg::MarkerArray
Marker for visualization"},{"location":"control/obstacle_collision_checker/#parameters","title":"Parameters","text":"Name Type Description Default value delay_time
double
Delay time of vehicle [s] 0.3 footprint_margin
double
Foot print margin [m] 0.0 max_deceleration
double
Max deceleration for ego vehicle to stop [m/s^2] 2.0 resample_interval
double
Interval for resampling trajectory [m] 0.3 search_radius
double
Search distance from trajectory to point cloud [m] 5.0"},{"location":"control/obstacle_collision_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"To perform proper collision check, it is necessary to get probably predicted trajectory and obstacle pointclouds without noise.
"},{"location":"control/operation_mode_transition_manager/","title":"operation_mode_transition_manager","text":""},{"location":"control/operation_mode_transition_manager/#operation_mode_transition_manager","title":"operation_mode_transition_manager","text":""},{"location":"control/operation_mode_transition_manager/#purpose-use-cases","title":"Purpose / Use cases","text":"This module is responsible for managing the different modes of operation for the Autoware system. The possible modes are:
Autonomous
: the vehicle is fully controlled by the autonomous driving systemLocal
: the vehicle is controlled by a physically connected control system such as a joy stickRemote
: the vehicle is controlled by a remote controllerStop
: the vehicle is stopped and there is no active control system.There is also an In Transition
state that occurs during each mode transitions. During this state, the transition to the new operator is not yet complete, and the previous operator is still responsible for controlling the system until the transition is complete. Some actions may be restricted during the In Transition
state, such as sudden braking or steering. (This is restricted by the vehicle_cmd_gate
).
Autonomous
, Local
, Remote
and Stop
based on the indication command.In Transition
mode (this is done with vehicle_cmd_gate
feature).Autonomous
, Local
, Remote
, and Stop
modes based on the indicated command.In Transition
mode (using the vehicle_cmd_gate
feature).A rough design of the relationship between `operation_mode_transition_manager`` and the other nodes is shown below.
A more detailed structure is below.
Here we see that operation_mode_transition_manager
has multiple state transitions as follows
For the mode transition:
tier4_system_msgs/srv/ChangeAutowareControl
]: change operation mode to Autonomoustier4_system_msgs/srv/ChangeOperationMode
]: change operation modeFor the transition availability/completion check:
autoware_auto_control_msgs/msg/AckermannControlCommand
]: vehicle control signalnav_msgs/msg/Odometry
]: ego vehicle stateautoware_auto_planning_msgs/msg/Trajectory
]: planning trajectoryautoware_auto_vehicle_msgs/msg/ControlModeReport
]: vehicle control mode (autonomous/manual)autoware_adapi_v1_msgs/msg/OperationModeState
]: the operation mode in the vehicle_cmd_gate
. (To be removed)For the backward compatibility (to be removed):
autoware_auto_vehicle_msgs/msg/Engage
]tier4_control_msgs/msg/GateMode
]tier4_control_msgs/msg/ExternalCommandSelectorMode
]autoware_adapi_v1_msgs/msg/OperationModeState
]: to inform the current operation modeoperation_mode_transition_manager/msg/OperationModeTransitionManagerDebug
]: detailed information about the operation mode transitiontier4_control_msgs/msg/GateMode
]: to change the vehicle_cmd_gate
state to use its features (to be removed)autoware_auto_vehicle_msgs/msg/Engage
]:autoware_auto_vehicle_msgs/srv/ControlModeCommand
]: to change the vehicle control mode (autonomous/manual)tier4_control_msgs/srv/ExternalCommandSelect
]:transition_timeout
double
If the state transition is not completed within this time, it is considered a transition failure. 10.0 frequency_hz
double
running hz 10.0 enable_engage_on_driving
bool
Set true if you want to engage the autonomous driving mode while the vehicle is driving. If set to false, it will deny Engage in any situation where the vehicle speed is not zero. Note that if you use this feature without adjusting the parameters, it may cause issues like sudden deceleration. Before using, please ensure the engage condition and the vehicle_cmd_gate transition filter are appropriately adjusted. 0.1 check_engage_condition
bool
If false, autonomous transition is always available 0.1 nearest_dist_deviation_threshold
double
distance threshold used to find nearest trajectory point 3.0 nearest_yaw_deviation_threshold
double
angle threshold used to find nearest trajectory point 1.57 For engage_acceptable_limits
related parameters:
allow_autonomous_in_stopped
bool
If true, autonomous transition is available when the vehicle is stopped even if other checks fail. true dist_threshold
double
the distance between the trajectory and ego vehicle must be within this distance for Autonomous
transition. 1.5 yaw_threshold
double
the yaw angle between trajectory and ego vehicle must be within this threshold for Autonomous
transition. 0.524 speed_upper_threshold
double
the velocity deviation between control command and ego vehicle must be within this threshold for Autonomous
transition. 10.0 speed_lower_threshold
double
the velocity deviation between the control command and ego vehicle must be within this threshold for Autonomous
transition. -10.0 acc_threshold
double
the control command acceleration must be less than this threshold for Autonomous
transition. 1.5 lateral_acc_threshold
double
the control command lateral acceleration must be less than this threshold for Autonomous
transition. 1.0 lateral_acc_diff_threshold
double
the lateral acceleration deviation between the control command must be less than this threshold for Autonomous
transition. 0.5 For stable_check
related parameters:
duration
double
the stable condition must be satisfied for this duration to complete the transition. 0.1 dist_threshold
double
the distance between the trajectory and ego vehicle must be within this distance to complete Autonomous
transition. 1.5 yaw_threshold
double
the yaw angle between trajectory and ego vehicle must be within this threshold to complete Autonomous
transition. 0.262 speed_upper_threshold
double
the velocity deviation between control command and ego vehicle must be within this threshold to complete Autonomous
transition. 2.0 speed_lower_threshold
double
the velocity deviation between control command and ego vehicle must be within this threshold to complete Autonomous
transition. 2.0"},{"location":"control/operation_mode_transition_manager/#engage-check-behavior-on-each-parameter-setting","title":"Engage check behavior on each parameter setting","text":"This matrix describes the scenarios in which the vehicle can be engaged based on the combinations of parameter settings:
enable_engage_on_driving
check_engage_condition
allow_autonomous_in_stopped
Scenarios where engage is permitted x x x Only when the vehicle is stationary. x x o Only when the vehicle is stationary. x o x When the vehicle is stationary and all engage conditions are met. x o o Only when the vehicle is stationary. o x x At any time (Caution: Not recommended). o x o At any time (Caution: Not recommended). o o x When all engage conditions are met, regardless of vehicle status. o o o When all engage conditions are met or the vehicle is stationary."},{"location":"control/operation_mode_transition_manager/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"vehicle_cmd_gate
due to its strong connection.The longitudinal_controller computes the target acceleration to achieve the target velocity set at each point of the target trajectory using a feed-forward/back control.
It also contains a slope force correction that takes into account road slope information, and a delay compensation function. It is assumed that the target acceleration calculated here will be properly realized by the vehicle interface.
Note that the use of this module is not mandatory for Autoware if the vehicle supports the \"target speed\" interface.
"},{"location":"control/pid_longitudinal_controller/#design-inner-workings-algorithms","title":"Design / Inner-workings / Algorithms","text":""},{"location":"control/pid_longitudinal_controller/#states","title":"States","text":"This module has four state transitions as shown below in order to handle special processing in a specific situation.
The state transition diagram is shown below.
"},{"location":"control/pid_longitudinal_controller/#logics","title":"Logics","text":""},{"location":"control/pid_longitudinal_controller/#control-block-diagram","title":"Control Block Diagram","text":""},{"location":"control/pid_longitudinal_controller/#feedforward-ff","title":"FeedForward (FF)","text":"The reference acceleration set in the trajectory and slope compensation terms are output as a feedforward. Under ideal conditions with no modeling error, this FF term alone should be sufficient for velocity tracking.
Tracking errors causing modeling or discretization errors are removed by the feedback control (now using PID).
"},{"location":"control/pid_longitudinal_controller/#brake-keeping","title":"Brake keeping","text":"From the viewpoint of ride comfort, stopping with 0 acceleration is important because it reduces the impact of braking. However, if the target acceleration when stopping is 0, the vehicle may cross over the stop line or accelerate a little in front of the stop line due to vehicle model error or gradient estimation error.
For reliable stopping, the target acceleration calculated by the FeedForward system is limited to a negative acceleration when stopping.
"},{"location":"control/pid_longitudinal_controller/#slope-compensation","title":"Slope compensation","text":"Based on the slope information, a compensation term is added to the target acceleration.
There are two sources of the slope information, which can be switched by a parameter.
Notation: This function works correctly only in a vehicle system that does not have acceleration feedback in the low-level control system.
This compensation adds gravity correction to the target acceleration, resulting in an output value that is no longer equal to the target acceleration that the autonomous driving system desires. Therefore, it conflicts with the role of the acceleration feedback in the low-level controller. For instance, if the vehicle is attempting to start with an acceleration of 1.0 m/s^2
and a gravity correction of -1.0 m/s^2
is applied, the output value will be 0
. If this output value is mistakenly treated as the target acceleration, the vehicle will not start.
A suitable example of a vehicle system for the slope compensation function is one in which the output acceleration from the longitudinal_controller is converted into target accel/brake pedal input without any feedbacks. In this case, the output acceleration is just used as a feedforward term to calculate the target pedal, and hence the issue mentioned above does not arise.
Note: The angle of the slope is defined as positive for an uphill slope, while the pitch angle of the ego pose is defined as negative when facing upward. They have an opposite definition.
"},{"location":"control/pid_longitudinal_controller/#pid-control","title":"PID control","text":"For deviations that cannot be handled by FeedForward control, such as model errors, PID control is used to construct a feedback system.
This PID control calculates the target acceleration from the deviation between the current ego-velocity and the target velocity.
This PID logic has a maximum value for the output of each term. This is to prevent the following:
Note: by default, the integral term in the control system is not accumulated when the vehicle is stationary. This precautionary measure aims to prevent unintended accumulation of the integral term in scenarios where Autoware assumes the vehicle is engaged, but an external system has immobilized the vehicle to initiate startup procedures.
However, certain situations may arise, such as when the vehicle encounters a depression in the road surface during startup or if the slope compensation is inaccurately estimated (lower than necessary), leading to a failure to initiate motion. To address these scenarios, it is possible to activate error integration even when the vehicle is at rest by setting the enable_integration_at_low_speed
parameter to true.
When enable_integration_at_low_speed
is set to true, the PID controller will initiate integration of the acceleration error after a specified duration defined by the time_threshold_before_pid_integration
parameter has elapsed without the vehicle surpassing a minimum velocity set by the current_vel_threshold_pid_integration
parameter.
The presence of the time_threshold_before_pid_integration
parameter is important for practical PID tuning. Integrating the error when the vehicle is stationary or at low speed can complicate PID tuning. This parameter effectively introduces a delay before the integral part becomes active, preventing it from kicking in immediately. This delay allows for more controlled and effective tuning of the PID controller.
At present, PID control is implemented from the viewpoint of trade-off between development/maintenance cost and performance. This may be replaced by a higher performance controller (adaptive control or robust control) in future development.
"},{"location":"control/pid_longitudinal_controller/#time-delay-compensation","title":"Time delay compensation","text":"At high speeds, the delay of actuator systems such as gas pedals and brakes has a significant impact on driving accuracy. Depending on the actuating principle of the vehicle, the mechanism that physically controls the gas pedal and brake typically has a delay of about a hundred millisecond.
In this controller, the predicted ego-velocity and the target velocity after the delay time are calculated and used for the feedback to address the time delay problem.
"},{"location":"control/pid_longitudinal_controller/#slope-compensation_1","title":"Slope compensation","text":"Based on the slope information, a compensation term is added to the target acceleration.
There are two sources of the slope information, which can be switched by a parameter.
Set the following from the controller_node
autoware_auto_planning_msgs/Trajectory
: reference trajectory to follow.nav_msgs/Odometry
: current odometryReturn LongitudinalOutput which contains the following to the controller node
autoware_auto_control_msgs/LongitudinalCommand
: command to control the longitudinal motion of the vehicle. It contains the target velocity and target acceleration.The PIDController
class is straightforward to use. First, gains and limits must be set (using setGains()
and setLimits()
) for the proportional (P), integral (I), and derivative (D) components. Then, the velocity can be calculated by providing the current error and time step duration to the calculate()
function.
The default parameters defined in param/lateral_controller_defaults.param.yaml
are adjusted to the AutonomouStuff Lexus RX 450h for under 40 km/h driving.
emergency_state_overshoot_stop_dist
. true enable_large_tracking_error_emergency bool flag to enable transition to EMERGENCY when the closest trajectory point search is failed due to a large deviation between trajectory and ego pose. true enable_slope_compensation bool flag to modify output acceleration for slope compensation. The source of the slope angle can be selected from ego-pose or trajectory angle. See use_trajectory_for_pitch_calculation
. true enable_brake_keeping_before_stop bool flag to keep a certain acceleration during DRIVE state before the ego stops. See Brake keeping. false enable_keep_stopped_until_steer_convergence bool flag to keep stopped condition until until the steer converges. true max_acc double max value of output acceleration [m/s^2] 3.0 min_acc double min value of output acceleration [m/s^2] -5.0 max_jerk double max value of jerk of output acceleration [m/s^3] 2.0 min_jerk double min value of jerk of output acceleration [m/s^3] -5.0 use_trajectory_for_pitch_calculation bool If true, the slope is estimated from trajectory z-level. Otherwise the pitch angle of the ego pose is used. false lpf_pitch_gain double gain of low-pass filter for pitch estimation 0.95 max_pitch_rad double max value of estimated pitch [rad] 0.1 min_pitch_rad double min value of estimated pitch [rad] -0.1"},{"location":"control/pid_longitudinal_controller/#state-transition","title":"State transition","text":"Name Type Description Default value drive_state_stop_dist double The state will transit to DRIVE when the distance to the stop point is longer than drive_state_stop_dist
+ drive_state_offset_stop_dist
[m] 0.5 drive_state_offset_stop_dist double The state will transit to DRIVE when the distance to the stop point is longer than drive_state_stop_dist
+ drive_state_offset_stop_dist
[m] 1.0 stopping_state_stop_dist double The state will transit to STOPPING when the distance to the stop point is shorter than stopping_state_stop_dist
[m] 0.5 stopped_state_entry_vel double threshold of the ego velocity in transition to the STOPPED state [m/s] 0.01 stopped_state_entry_acc double threshold of the ego acceleration in transition to the STOPPED state [m/s^2] 0.1 emergency_state_overshoot_stop_dist double If enable_overshoot_emergency
is true and the ego is emergency_state_overshoot_stop_dist
-meter ahead of the stop point, the state will transit to EMERGENCY. [m] 1.5 emergency_state_traj_trans_dev double If the ego's position is emergency_state_traj_tran_dev
meter away from the nearest trajectory point, the state will transit to EMERGENCY. [m] 3.0 emergency_state_traj_rot_dev double If the ego's orientation is emergency_state_traj_rot_dev
rad away from the nearest trajectory point orientation, the state will transit to EMERGENCY. [rad] 0.784"},{"location":"control/pid_longitudinal_controller/#drive-parameter","title":"DRIVE Parameter","text":"Name Type Description Default value kp double p gain for longitudinal control 1.0 ki double i gain for longitudinal control 0.1 kd double d gain for longitudinal control 0.0 max_out double max value of PID's output acceleration during DRIVE state [m/s^2] 1.0 min_out double min value of PID's output acceleration during DRIVE state [m/s^2] -1.0 max_p_effort double max value of acceleration with p gain 1.0 min_p_effort double min value of acceleration with p gain -1.0 max_i_effort double max value of acceleration with i gain 0.3 min_i_effort double min value of acceleration with i gain -0.3 max_d_effort double max value of acceleration with d gain 0.0 min_d_effort double min value of acceleration with d gain 0.0 lpf_vel_error_gain double gain of low-pass filter for velocity error 0.9 enable_integration_at_low_speed bool Whether to enable integration of acceleration errors when the vehicle speed is lower than current_vel_threshold_pid_integration
or not. current_vel_threshold_pid_integration double Velocity error is integrated for I-term only when the absolute value of current velocity is larger than this parameter. [m/s] time_threshold_before_pid_integration double How much time without the vehicle moving must past to enable PID error integration. [s] 5.0 brake_keeping_acc double If enable_brake_keeping_before_stop
is true, a certain acceleration is kept during DRIVE state before the ego stops [m/s^2] See Brake keeping. 0.2"},{"location":"control/pid_longitudinal_controller/#stopping-parameter-smooth-stop","title":"STOPPING Parameter (smooth stop)","text":"Smooth stop is enabled if enable_smooth_stop
is true. In smooth stop, strong acceleration (strong_acc
) will be output first to decrease the ego velocity. Then weak acceleration (weak_acc
) will be output to stop smoothly by decreasing the ego jerk. If the ego does not stop in a certain time or some-meter over the stop point, weak acceleration to stop right (weak_stop_acc
) now will be output. If the ego is still running, strong acceleration (strong_stop_acc
) to stop right now will be output.
| Name | Type | Description | Default value |
| --- | --- | --- | --- |
| smooth_stop_strong_stop_acc | double | Strong acceleration to stop right now will be output when the ego is `smooth_stop_strong_stop_dist`-meter over the stop point. [m/s^2] | -3.4 |
| smooth_stop_max_fast_vel | double | max fast vel to judge the ego is running fast [m/s]. If the ego is running fast, strong acceleration will be output. | 0.5 |
| smooth_stop_min_running_vel | double | min ego velocity to judge if the ego is running or not [m/s] | 0.01 |
| smooth_stop_min_running_acc | double | min ego acceleration to judge if the ego is running or not [m/s^2] | 0.01 |
| smooth_stop_weak_stop_time | double | max time to output weak acceleration [s]. After this, strong acceleration will be output. | 0.8 |
| smooth_stop_weak_stop_dist | double | Weak acceleration will be output when the ego is `smooth_stop_weak_stop_dist`-meter before the stop point. [m] | -0.3 |
| smooth_stop_strong_stop_dist | double | Strong acceleration will be output when the ego is `smooth_stop_strong_stop_dist`-meter over the stop point. [m] | -0.5 |

"},{"location":"control/pid_longitudinal_controller/#stopped-parameter","title":"STOPPED Parameter","text":"The STOPPED state assumes that the vehicle is completely stopped with the brakes fully applied. Therefore, `stopped_acc` should be set to a value that allows the vehicle to apply the strongest possible brake. If `stopped_acc` is not sufficiently low, the vehicle may slide down on steep slopes.
The Predicted Path Checker package checks the predicted path generated by the control module of an autonomous vehicle. It handles potential collisions that the planning module may not be able to handle and that lie within the braking distance. If a collision is within the braking distance, the package sends a diagnostic message labeled \"ERROR\" so that the system triggers an emergency; for collisions outside the reference trajectory, it sends a pause request to the pause interface to stop the vehicle.
"},{"location":"control/predicted_path_checker/#algorithm","title":"Algorithm","text":"The package algorithm evaluates the predicted trajectory against the reference trajectory and the predicted objects in the environment. It checks for potential collisions and, if necessary, generates an appropriate response to avoid them ( emergency or pause request).
"},{"location":"control/predicted_path_checker/#inner-algorithm","title":"Inner Algorithm","text":"cutTrajectory() -> It cuts the predicted trajectory with input length. Length is calculated by multiplying the velocity of ego vehicle with \"trajectory_check_time\" parameter and \"min_trajectory_length\".
filterObstacles() -> Filters the predicted objects in the environment. It filters out objects that are not in front of the vehicle or are far away from the predicted trajectory.
checkTrajectoryForCollision() -> Checks the predicted trajectory for collision with the predicted objects. It computes polygons for both the trajectory points and the predicted objects and checks whether they intersect. If there is an intersection, it calculates the nearest collision point and returns the nearest collision point of the polygon and the predicted object. It also checks the history of predicted objects that previously intersected with the footprint, to avoid unexpected behaviors. The history stores objects that were detected less than \"chattering_threshold\" seconds ago.
If the \"enable_z_axis_obstacle_filtering\" parameter is set to true, it filters the predicted objects in the Z-axis by using \"z_axis_filtering_buffer\". If the object does not intersect with the Z-axis, it is filtered out.
calculateProjectedVelAndAcc() -> Calculates the velocity and acceleration of the predicted object projected onto the axes of the collision point on the predicted trajectory.
isInBrakeDistance() -> Checks if the stop point is within the braking distance. It gets the relative velocity and acceleration of the ego vehicle with respect to the predicted object, calculates the braking distance, and returns true if the point is within it (see the sketch after this list).
isItDiscretePoint() -> Checks whether the stop point on the predicted trajectory is a discrete point or not. If it is not a discrete point, planning should handle the stop.
isThereStopPointOnRefTrajectory() -> Checks if there is a stop point on the reference trajectory. If there is a stop point before the stop index, it returns true. Otherwise, it returns false, and the node calls the pause interface to stop the vehicle.
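A hedged sketch of the braking-distance check, using the node parameters `delay_time` and `max_deceleration` described below (the real implementation may differ in how relative motion and signs are handled):

```cpp
// Hedged sketch of isInBrakeDistance(): the reachable stopping distance is the
// distance traveled during the reaction delay plus the kinematic braking
// distance v^2 / (2 a). Relative velocity handling is simplified here.
bool isInBrakeDistance(
  const double dist_to_collision_m, const double relative_vel_mps,
  const double delay_time_s = 0.17, const double max_deceleration_mps2 = 1.5)
{
  if (relative_vel_mps <= 0.0) {
    return false;  // the ego is not closing in on the object
  }
  const double brake_dist =
    relative_vel_mps * delay_time_s +
    (relative_vel_mps * relative_vel_mps) / (2.0 * max_deceleration_mps2);
  return dist_to_collision_m <= brake_dist;
}
```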
"},{"location":"control/predicted_path_checker/#inputs","title":"Inputs","text":"Name Type Description~/input/reference_trajectory
autoware_auto_planning_msgs::msg::Trajectory
Reference trajectory ~/input/predicted_trajectory
autoware_auto_planning_msgs::msg::Trajectory
Predicted trajectory ~/input/objects
autoware_auto_perception_msgs::msg::PredictedObject
Dynamic objects in the environment ~/input/odometry
nav_msgs::msg::Odometry
Odometry message of vehicle to get current velocity ~/input/current_accel
geometry_msgs::msg::AccelWithCovarianceStamped
Current acceleration /control/vehicle_cmd_gate/is_paused
tier4_control_msgs::msg::IsPaused
Current pause state of the vehicle"},{"location":"control/predicted_path_checker/#outputs","title":"Outputs","text":"Name Type Description ~/debug/marker
visualization_msgs::msg::MarkerArray
Marker for visualization ~/debug/virtual_wall
visualization_msgs::msg::MarkerArray
Virtual wall marker for visualization /control/vehicle_cmd_gate/set_pause
tier4_control_msgs::srv::SetPause
Pause service to make the vehicle stop /diagnostics
diagnostic_msgs::msg::DiagnosticStatus
Diagnostic status of vehicle"},{"location":"control/predicted_path_checker/#parameters","title":"Parameters","text":""},{"location":"control/predicted_path_checker/#node-parameters","title":"Node Parameters","text":"Name Type Description Default value update_rate
double
The update rate [Hz] 10.0 delay_time
double
he time delay considered for the emergency response [s] 0.17 max_deceleration
double
Max deceleration for ego vehicle to stop [m/s^2] 1.5 resample_interval
double
Interval for resampling trajectory [m] 0.5 stop_margin
double
The stopping margin [m] 0.5 ego_nearest_dist_threshold
double
The nearest distance threshold for ego vehicle [m] 3.0 ego_nearest_yaw_threshold
double
The nearest yaw threshold for ego vehicle [rad] 1.046 min_trajectory_check_length
double
The minimum trajectory check length in meters [m] 1.5 trajectory_check_time
double
The trajectory check time in seconds. [s] 3.0 distinct_point_distance_threshold
double
The distinct point distance threshold [m] 0.3 distinct_point_yaw_threshold
double
The distinct point yaw threshold [deg] 5.0 filtering_distance_threshold
double
It ignores the objects if distance is higher than this [m] 1.5 use_object_prediction
bool
If true, node predicts current pose of the objects wrt delta time [-] true"},{"location":"control/predicted_path_checker/#collision-checker-parameters","title":"Collision Checker Parameters","text":"Name Type Description Default value width_margin
double
The width margin for collision checking [Hz] 0.2 chattering_threshold
double
The chattering threshold for collision detection [s] 0.2 z_axis_filtering_buffer
double
The Z-axis filtering buffer [m] 0.3 enable_z_axis_obstacle_filtering
bool
A boolean flag indicating if Z-axis obstacle filtering is enabled false"},{"location":"control/pure_pursuit/","title":"Pure Pursuit Controller","text":""},{"location":"control/pure_pursuit/#pure-pursuit-controller","title":"Pure Pursuit Controller","text":"The Pure Pursuit Controller module calculates the steering angle for tracking a desired trajectory using the pure pursuit algorithm. This is used as a lateral controller plugin in the trajectory_follower_node
.
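The underlying steering law is the standard pure pursuit formula; a minimal sketch follows (the wheelbase and lookahead distance are plain inputs here, not the module's exact parameter names):

```cpp
#include <cmath>

// Hedged sketch of the pure pursuit law: steer toward a lookahead point on the
// trajectory using the kinematic bicycle model, delta = atan(2 L sin(alpha) / Ld).
double computePurePursuitSteering(
  const double alpha_rad,            // angle to the lookahead point in the vehicle frame
  const double wheelbase_m,          // distance between front and rear axles
  const double lookahead_distance_m) // distance to the lookahead point
{
  return std::atan2(2.0 * wheelbase_m * std::sin(alpha_rad), lookahead_distance_m);
}
```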
Set the following from the `controller_node`:

- `autoware_auto_planning_msgs/Trajectory`: reference trajectory to follow.
- `nav_msgs/Odometry`: current ego pose and velocity information

Return `LateralOutput`, which contains the following, to the controller node:

- `autoware_auto_control_msgs/AckermannLateralCommand`: target steering angle
- `autoware_auto_planning_msgs/Trajectory`: predicted path for the ego vehicle

"},{"location":"control/shift_decider/","title":"shift_decider","text":"`shift_decider` is a module that decides the shift (gear) from the Ackermann control command.
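A minimal sketch of the decision rule implied by this description, assuming the gear is chosen from the sign of the commanded longitudinal speed (the constants mirror `autoware_auto_vehicle_msgs::msg::GearCommand` values but are hard-coded illustratively here):

```cpp
#include <cstdint>

// Hedged sketch: choose DRIVE for a positive speed command, REVERSE for a
// negative one, and keep the current gear when the command is zero.
uint8_t decideGear(const double cmd_longitudinal_speed_mps, const uint8_t current_gear)
{
  constexpr uint8_t kDrive = 2;     // illustrative stand-in for GearCommand::DRIVE
  constexpr uint8_t kReverse = 20;  // illustrative stand-in for GearCommand::REVERSE
  if (cmd_longitudinal_speed_mps > 0.0) return kDrive;
  if (cmd_longitudinal_speed_mps < 0.0) return kReverse;
  return current_gear;
}
```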
| Name | Type | Description |
| --- | --- | --- |
| `~/input/control_cmd` | `autoware_auto_control_msgs::msg::AckermannControlCommand` | Control command for vehicle. |

"},{"location":"control/shift_decider/#output","title":"Output","text":"

| Name | Type | Description |
| --- | --- | --- |
| `~output/gear_cmd` | `autoware_auto_vehicle_msgs::msg::GearCommand` | Gear for drive forward / backward. |

"},{"location":"control/shift_decider/#parameters","title":"Parameters","text":"none.
"},{"location":"control/shift_decider/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"control/trajectory_follower_base/","title":"Trajectory Follower","text":""},{"location":"control/trajectory_follower_base/#trajectory-follower","title":"Trajectory Follower","text":"This is the design document for the trajectory_follower
package.
This package provides the interfaces of the longitudinal and lateral controllers used by the node of the `trajectory_follower_node` package. A concrete controller is implemented by deriving from the longitudinal and lateral base interfaces.
There are lateral and longitudinal base interface classes, and each algorithm implements them by inheritance. The interface classes have the following base functions:

- `isReady()`: Check if the control is ready to compute.
- `run()`: Compute control commands and return them to the Trajectory Follower Nodes. This must be implemented by inherited algorithms.
- `sync()`: Input the result of running the other controller.

See the Design of Trajectory Follower Nodes for how these functions work in the node. A sketch of deriving from these interfaces follows below.
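A minimal sketch of deriving a controller from such a base interface (all type names below are illustrative stand-ins, not the exact trajectory_follower API):

```cpp
// Hedged sketch of the base-interface pattern described above: a pure-virtual
// base with isReady/run/sync, and a derived controller implementing them.
struct InputData { /* trajectory, odometry, ... */ };
struct LateralOutput { /* control command, sync data, ... */ };
struct LongitudinalSyncData { bool is_velocity_converged{false}; };

class LateralControllerBase
{
public:
  virtual ~LateralControllerBase() = default;
  virtual bool isReady(const InputData & input_data) = 0;
  virtual LateralOutput run(const InputData & input_data) = 0;
  virtual void sync(const LongitudinalSyncData & sync_data) = 0;
};

class MyLateralController : public LateralControllerBase
{
public:
  bool isReady(const InputData &) override { return true; }
  LateralOutput run(const InputData &) override { return LateralOutput{}; }
  void sync(const LongitudinalSyncData & sync_data) override { longitudinal_sync_ = sync_data; }

private:
  LongitudinalSyncData longitudinal_sync_;  // result of the other controller's run
};
```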
"},{"location":"control/trajectory_follower_base/#separated-lateral-steering-and-longitudinal-velocity-controls","title":"Separated lateral (steering) and longitudinal (velocity) controls","text":"This longitudinal controller assumes that the roles of lateral and longitudinal control are separated as follows.
Ideally, dealing with the lateral and longitudinal control as a single mixed problem can achieve high performance. In practice, however, there are two reasons to provide the velocity controller as a stand-alone function, described below.
"},{"location":"control/trajectory_follower_base/#complex-requirements-for-longitudinal-motion","title":"Complex requirements for longitudinal motion","text":"The longitudinal vehicle behavior that humans expect is difficult to express in a single logic. For example, the expected behavior just before stopping differs depending on whether the ego-position is ahead/behind of the stop line, or whether the current speed is higher/lower than the target speed to achieve a human-like movement.
In addition, some vehicles have difficulty measuring the ego-speed at extremely low speeds. In such cases, a configuration that can improve the functionality of the longitudinal control without affecting the lateral control is important.
There are many characteristics and needs that are unique to longitudinal control. Designing them separately from the lateral control keeps the modules less coupled and improves maintainability.
"},{"location":"control/trajectory_follower_base/#nonlinear-coupling-of-lateral-and-longitudinal-motion","title":"Nonlinear coupling of lateral and longitudinal motion","text":"The lat-lon mixed control problem is very complex and uses nonlinear optimization to achieve high performance. Since it is difficult to guarantee the convergence of the nonlinear optimization, a simple control logic is also necessary for development.
Also, the benefits of simultaneous longitudinal and lateral control are small if the vehicle doesn't move at high speed.
"},{"location":"control/trajectory_follower_base/#related-issues","title":"Related issues","text":""},{"location":"control/trajectory_follower_node/","title":"Trajectory Follower Nodes","text":""},{"location":"control/trajectory_follower_node/#trajectory-follower-nodes","title":"Trajectory Follower Nodes","text":""},{"location":"control/trajectory_follower_node/#purpose","title":"Purpose","text":"Generate control commands to follow a given Trajectory.
"},{"location":"control/trajectory_follower_node/#design","title":"Design","text":"This is a node of the functionalities implemented in the controller class derived from trajectory_follower_base package. It has instances of those functionalities, gives them input data to perform calculations, and publishes control commands.
By default, the controller instance with the `Controller` class shown below is used.
The process flow of the `Controller` class is as follows.
// 1. create input data\nconst auto input_data = createInputData(*get_clock());\nif (!input_data) {\nreturn;\n}\n\n// 2. check if controllers are ready\nconst bool is_lat_ready = lateral_controller_->isReady(*input_data);\nconst bool is_lon_ready = longitudinal_controller_->isReady(*input_data);\nif (!is_lat_ready || !is_lon_ready) {\nreturn;\n}\n\n// 3. run controllers\nconst auto lat_out = lateral_controller_->run(*input_data);\nconst auto lon_out = longitudinal_controller_->run(*input_data);\n\n// 4. sync with each other controllers\nlongitudinal_controller_->sync(lat_out.sync_data);\nlateral_controller_->sync(lon_out.sync_data);\n\n// 5. publish control command\ncontrol_cmd_pub_->publish(out);\n
Giving the longitudinal controller information about steer convergence allows it to control the steering while stopped, if the following parameters are true:

- `keep_steer_control_until_converged`
- `enable_keep_stopped_until_steer_convergence`
Inputs:

- `autoware_auto_planning_msgs/Trajectory`: reference trajectory to follow.
- `nav_msgs/Odometry`: current odometry
- `autoware_auto_vehicle_msgs/SteeringReport`: current steering

Outputs:

- `autoware_auto_control_msgs/AckermannControlCommand`: message containing both lateral and longitudinal commands.

Parameters:

- `ctrl_period`: control command publishing period
- `timeout_thr_sec`: duration in seconds after which input messages are discarded. The node publishes an `AckermannControlCommand` only if both lateral and longitudinal commands have been received and the last received commands are not older than `timeout_thr_sec`.
- `lateral_controller_mode`: `mpc` or `pure_pursuit`
- (currently there is only `PID` for the longitudinal controller)

Debug information is published by the lateral and longitudinal controllers using `tier4_debug_msgs/Float32MultiArrayStamped` messages.
A configuration file for PlotJuggler is provided in the `config` folder which, when loaded, allows you to automatically subscribe to and visualize information useful for debugging.
In addition, the predicted MPC trajectory is published on topic output/lateral/predicted_trajectory
and can be visualized in Rviz.
Provide a base trajectory follower code that is simple and flexible to use. This node calculates the control command based on a reference trajectory and the ego vehicle kinematics.
"},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/#design","title":"Design","text":""},{"location":"control/trajectory_follower_node/design/simple_trajectory_follower-design/#inputs-outputs","title":"Inputs / Outputs","text":"Inputs
input/reference_trajectory
[autoware_auto_planning_msgs::msg::Trajectory] : reference trajectory to follow.input/current_kinematic_state
[nav_msgs::msg::Odometry] : current state of the vehicle (position, velocity, etc).output/control_cmd
[autoware_auto_control_msgs::msg::AckermannControlCommand] : generated control command.use_external_target_vel
is true. 0.0 lateral_deviation float target lateral deviation when following. 0.0"},{"location":"control/vehicle_cmd_gate/","title":"vehicle_cmd_gate","text":""},{"location":"control/vehicle_cmd_gate/#vehicle_cmd_gate","title":"vehicle_cmd_gate","text":""},{"location":"control/vehicle_cmd_gate/#purpose","title":"Purpose","text":"vehicle_cmd_gate
is the package to get information from emergency handler, planning module, external controller, and send a msg to vehicle.
"},{"location":"control/vehicle_cmd_gate/#input","title":"Input","text":"

| Name | Type | Description |
| --- | --- | --- |
| `~/input/steering` | `autoware_auto_vehicle_msgs::msg::SteeringReport` | steering status |
| `~/input/auto/control_cmd` | `autoware_auto_control_msgs::msg::AckermannControlCommand` | command for lateral and longitudinal velocity from planning module |
| `~/input/auto/turn_indicators_cmd` | `autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand` | turn indicators command from planning module |
| `~/input/auto/hazard_lights_cmd` | `autoware_auto_vehicle_msgs::msg::HazardLightsCommand` | hazard lights command from planning module |
| `~/input/auto/gear_cmd` | `autoware_auto_vehicle_msgs::msg::GearCommand` | gear command from planning module |
| `~/input/external/control_cmd` | `autoware_auto_control_msgs::msg::AckermannControlCommand` | command for lateral and longitudinal velocity from external |
| `~/input/external/turn_indicators_cmd` | `autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand` | turn indicators command from external |
| `~/input/external/hazard_lights_cmd` | `autoware_auto_vehicle_msgs::msg::HazardLightsCommand` | hazard lights command from external |
| `~/input/external/gear_cmd` | `autoware_auto_vehicle_msgs::msg::GearCommand` | gear command from external |
| `~/input/external_emergency_stop_heartbeat` | `tier4_external_api_msgs::msg::Heartbeat` | heartbeat |
| `~/input/gate_mode` | `tier4_control_msgs::msg::GateMode` | gate mode (AUTO or EXTERNAL) |
| `~/input/emergency/control_cmd` | `autoware_auto_control_msgs::msg::AckermannControlCommand` | command for lateral and longitudinal velocity from emergency handler |
| `~/input/emergency/hazard_lights_cmd` | `autoware_auto_vehicle_msgs::msg::HazardLightsCommand` | hazard lights command from emergency handler |
| `~/input/emergency/gear_cmd` | `autoware_auto_vehicle_msgs::msg::GearCommand` | gear command from emergency handler |
| `~/input/engage` | `autoware_auto_vehicle_msgs::msg::Engage` | engage signal |
| `~/input/operation_mode` | `autoware_adapi_v1_msgs::msg::OperationModeState` | operation mode of Autoware |

"},{"location":"control/vehicle_cmd_gate/#output","title":"Output","text":"

| Name | Type | Description |
| --- | --- | --- |
| `~/output/vehicle_cmd_emergency` | `autoware_auto_system_msgs::msg::EmergencyState` | emergency state which was originally in the vehicle command |
| `~/output/command/control_cmd` | `autoware_auto_control_msgs::msg::AckermannControlCommand` | command for lateral and longitudinal velocity to vehicle |
| `~/output/command/turn_indicators_cmd` | `autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand` | turn indicators command to vehicle |
| `~/output/command/hazard_lights_cmd` | `autoware_auto_vehicle_msgs::msg::HazardLightsCommand` | hazard lights command to vehicle |
| `~/output/command/gear_cmd` | `autoware_auto_vehicle_msgs::msg::GearCommand` | gear command to vehicle |
| `~/output/gate_mode` | `tier4_control_msgs::msg::GateMode` | gate mode (AUTO or EXTERNAL) |
| `~/output/engage` | `autoware_auto_vehicle_msgs::msg::Engage` | engage signal |
| `~/output/external_emergency` | `tier4_external_api_msgs::msg::Emergency` | external emergency signal |
| `~/output/operation_mode` | `tier4_system_msgs::msg::OperationMode` | current operation mode of the vehicle_cmd_gate |

"},{"location":"control/vehicle_cmd_gate/#parameters","title":"Parameters","text":"

| Parameter | Type | Description |
| --- | --- | --- |
| `update_period` | double | update period |
| `use_emergency_handling` | bool | true when the emergency handler is used |
| `check_external_emergency_heartbeat` | bool | true when checking the heartbeat for the emergency stop |
| `system_emergency_heartbeat_timeout` | double | timeout for system emergency |
| `external_emergency_stop_heartbeat_timeout` | double | timeout for external emergency |
| `filter_activated_count_threshold` | int | threshold for filter activation |
| `filter_activated_velocity_threshold` | double | velocity threshold for filter activation |
| `stop_hold_acceleration` | double | longitudinal acceleration cmd when the vehicle should stop |
| `emergency_acceleration` | double | longitudinal acceleration cmd when the vehicle stops with emergency |
| `moderate_stop_service_acceleration` | double | longitudinal acceleration cmd when the vehicle stops with the moderate stop service |
| `nominal.vel_lim` | double | limit of longitudinal velocity (activated in AUTONOMOUS operation mode) |
| `nominal.reference_speed_point` | array | velocity points used as references when calculating control command limits (activated in AUTONOMOUS operation mode). The size of this array must be equivalent to the size of the limit arrays. |
| `nominal.lon_acc_lim` | array | limits of longitudinal acceleration (activated in AUTONOMOUS operation mode) |
| `nominal.lon_jerk_lim` | array | limits of longitudinal jerk (activated in AUTONOMOUS operation mode) |
| `nominal.lat_acc_lim` | array | limits of lateral acceleration (activated in AUTONOMOUS operation mode) |
| `nominal.lat_jerk_lim` | array | limits of lateral jerk (activated in AUTONOMOUS operation mode) |
| `on_transition.vel_lim` | double | limit of longitudinal velocity (activated in TRANSITION operation mode) |
| `on_transition.reference_speed_point` | array | velocity points used as references when calculating control command limits (activated in TRANSITION operation mode). The size of this array must be equivalent to the size of the limit arrays. |
| `on_transition.lon_acc_lim` | array | limits of longitudinal acceleration (activated in TRANSITION operation mode) |
| `on_transition.lon_jerk_lim` | array | limits of longitudinal jerk (activated in TRANSITION operation mode) |
| `on_transition.lat_acc_lim` | array | limits of lateral acceleration (activated in TRANSITION operation mode) |
| `on_transition.lat_jerk_lim` | array | limits of lateral jerk (activated in TRANSITION operation mode) |

"},{"location":"control/vehicle_cmd_gate/#filter-function","title":"Filter function","text":"This module incorporates a limitation filter to the control command right before it is published. Primarily for safety, this filter restricts the output range of all control commands published through Autoware.
The limitation values are calculated based on the 1D interpolation of the limitation array parameters. Here is an example for the longitudinal jerk limit.
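A hedged sketch of this 1D interpolation, with illustrative values for `nominal.reference_speed_point` and `nominal.lon_jerk_lim` (not the actual defaults):

```cpp
#include <cstddef>
#include <vector>

// Hedged sketch: pick the limit for the current velocity by piecewise-linear
// interpolation over the reference_speed_point / limit array parameters.
double interpolateLimit(
  const std::vector<double> & ref_speeds, const std::vector<double> & limits, const double vel)
{
  if (vel <= ref_speeds.front()) return limits.front();
  if (vel >= ref_speeds.back()) return limits.back();
  for (std::size_t i = 1; i < ref_speeds.size(); ++i) {
    if (vel <= ref_speeds[i]) {
      const double r = (vel - ref_speeds[i - 1]) / (ref_speeds[i] - ref_speeds[i - 1]);
      return limits[i - 1] + r * (limits[i] - limits[i - 1]);
    }
  }
  return limits.back();
}

// e.g. with reference speeds {20.0, 30.0} and jerk limits {1.5, 1.0} (illustrative),
// interpolateLimit({20.0, 30.0}, {1.5, 1.0}, 25.0) returns 1.25.
```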
Note: this filter is not designed to enhance ride comfort. Its main purpose is to detect and remove abnormal values in the control outputs during the final stages of Autoware. If this filter is frequently activated, the control module may need tuning. If you are aiming to smooth the signal via a low-pass filter or similar techniques, that should be handled in the control module. When the filter is activated, the topic `~/is_filter_activated` is published.
The parameter check_external_emergency_heartbeat
(true by default) enables an emergency stop request from external modules. This feature requires a ~/input/external_emergency_stop_heartbeat
topic for health monitoring of the external module, and the vehicle_cmd_gate module will not start without the topic. The check_external_emergency_heartbeat
parameter must be false when the \"external emergency stop\" function is not used.
This package provides a node to convert diagnostic_msgs::msg::DiagnosticArray
messages into tier4_simulation_msgs::msg::UserDefinedValue
messages.
The node subscribes to all topics listed in the parameters and assumes they publish `DiagnosticArray` messages. Each time such a message is received, it is converted into as many `UserDefinedValue` messages as there are `KeyValue` objects. The format of the output topics is detailed in the output section.
The node listens to DiagnosticArray
messages on the topics specified in the parameters.
The node outputs UserDefinedValue
messages that are converted from the received DiagnosticArray
.
The names of the output topics are generated from the corresponding input topic, the name of the diagnostic status, and the key of the diagnostic. For example, we might listen to topic `/diagnostic_topic` and receive a `DiagnosticArray` with 2 statuses:

- Status with `name: \"x\"` and keys `a`, `b`.
- Status with `name: \"y\"` and keys `a`, `c`.

The resulting topics to publish the `UserDefinedValue` are as follows:

- `/metrics_x_a`
- `/metrics_x_b`
- `/metrics_y_a`
- `/metrics_y_c`

| Name | Type | Description |
| --- | --- | --- |
| `diagnostic_topics` | list of string | list of DiagnosticArray topics to convert to UserDefinedValue |

"},{"location":"evaluator/diagnostic_converter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Values in the `KeyValue` objects of a `DiagnosticStatus` are assumed to be of type `double`.
TBD
"},{"location":"evaluator/localization_evaluator/","title":"Localization Evaluator","text":""},{"location":"evaluator/localization_evaluator/#localization-evaluator","title":"Localization Evaluator","text":"TBD
"},{"location":"evaluator/planning_evaluator/","title":"Planning Evaluator","text":""},{"location":"evaluator/planning_evaluator/#planning-evaluator","title":"Planning Evaluator","text":""},{"location":"evaluator/planning_evaluator/#purpose","title":"Purpose","text":"This package provides nodes that generate metrics to evaluate the quality of planning and control.
"},{"location":"evaluator/planning_evaluator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The evaluation node calculates metrics each time it receives a trajectory T(0)
. Metrics are calculated using the following information:

- the trajectory `T(0)` itself.
- the previous trajectory `T(-1)`.
- the reference trajectory assumed to be used as the reference to plan `T(0)`.

This information is maintained by an instance of class `MetricsCalculator`, which is also responsible for calculating metrics.
Each metric is calculated using a Stat
instance which contains the minimum, maximum, and mean values calculated for the metric as well as the number of values measured.
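A self-contained sketch mirroring the accumulator described above (an illustrative stand-in, not the actual planning_evaluator class):

```cpp
#include <algorithm>
#include <cstddef>

// Hedged sketch of a Stat accumulator: tracks the minimum, maximum, and mean
// of the measured values, plus the number of values measured.
template <typename T>
class Stat
{
public:
  void add(const T value)
  {
    min_ = count_ == 0 ? value : std::min(min_, value);
    max_ = count_ == 0 ? value : std::max(max_, value);
    mean_ = (mean_ * count_ + value) / static_cast<T>(count_ + 1);  // running mean
    ++count_;
  }
  T min() const { return min_; }
  T max() const { return max_; }
  T mean() const { return mean_; }
  std::size_t count() const { return count_; }

private:
  T min_{}, max_{}, mean_{};
  std::size_t count_ = 0;
};
```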
All possible metrics are defined in the Metric
enumeration defined in `include/planning_evaluator/metrics/metric.hpp`. This file also defines conversions from/to string as well as human-readable descriptions to be used as the header of the output file.
The `MetricsCalculator` is responsible for calculating metric statistics through calls to the function:
Stat<double> MetricsCalculator::calculate(const Metric metric, const Trajectory & traj) const;\n
Adding a new metric `M` requires the following steps:

- `metrics/metric.hpp`: add `M` to the `enum`, to the from/to string conversion maps, and to the description map.
- `metrics_calculator.cpp`: add `M` to the `switch/case` statement of the `calculate` function.
- Add `M` to the `selected_metrics` parameters.

| Name | Type | Description |
| --- | --- | --- |
| `~/input/trajectory` | `autoware_auto_planning_msgs::msg::Trajectory` | Main trajectory to evaluate |
| `~/input/reference_trajectory` | `autoware_auto_planning_msgs::msg::Trajectory` | Reference trajectory to use for deviation metrics |
| `~/input/objects` | `autoware_auto_perception_msgs::msg::PredictedObjects` | Obstacles |

"},{"location":"evaluator/planning_evaluator/#outputs","title":"Outputs","text":"Each metric is published on a topic named after the metric name.
Name Type Description~/metrics
diagnostic_msgs::msg::DiagnosticArray
DiagnosticArray with a DiagnosticStatus for each metric When shut down, the evaluation node writes the values of the metrics measured during its lifetime to a file as specified by the output_file
parameter.
| Name | Type | Description |
| --- | --- | --- |
| `output_file` | string | file used to write metrics |
| `ego_frame` | string | frame used for the ego pose |
| `selected_metrics` | List | metrics to measure and publish |
| `trajectory.min_point_dist_m` | double | minimum distance between two successive points to use for angle calculation |
| `trajectory.lookahead.max_dist_m` | double | maximum distance from ego along the trajectory to use for calculation |
| `trajectory.lookahead.max_time_m` | double | maximum time ahead of ego along the trajectory to use for calculation |
| `obstacle.dist_thr_m` | double | distance between ego and the obstacle below which a collision is considered |

"},{"location":"evaluator/planning_evaluator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"There is a strong assumption that when receiving a trajectory `T(0)`
, it has been generated using the last received reference trajectory and objects. This can be wrong if a new reference trajectory or objects are published while T(0)
is being calculated.
Precision is currently limited by the resolution of the trajectories. It is possible to interpolate the trajectory and reference trajectory to increase precision but would make computation significantly more expensive.
"},{"location":"evaluator/planning_evaluator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"Route
or Path
messages as reference trajectory.min
and max
metric values. For now only the mean
value is published.motion_evaluator_node
.This plugin panel to visualize planning_evaluator
output.
| Name | Type | Description |
| --- | --- | --- |
| `/diagnostic/planning_evaluator/metrics` | `diagnostic_msgs::msg::DiagnosticArray` | Subscribes to the `planning_evaluator` output |

"},{"location":"evaluator/tier4_metrics_rviz_plugin/#howtouse","title":"HowToUse","text":"This package contains launch files that run nodes to convert Autoware internal topics into a consistent API used by external software (e.g., fleet management system, simulator).
"},{"location":"launch/tier4_autoware_api_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
You can include as follows in *.launch.xml
to use autoware_api.launch.xml
.
<include file=\"$(find-pkg-share tier4_autoware_api_launch)/launch/autoware_api.launch.xml\"/>\n
"},{"location":"launch/tier4_autoware_api_launch/#notes","title":"Notes","text":"For reducing processing load, we use the Component feature in ROS 2 (similar to Nodelet in ROS 1 )
"},{"location":"launch/tier4_control_launch/","title":"tier4_control_launch","text":""},{"location":"launch/tier4_control_launch/#tier4_control_launch","title":"tier4_control_launch","text":""},{"location":"launch/tier4_control_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_control_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
You can include as follows in *.launch.xml
to use control.launch.py
.
Note that you should provide parameter paths as `PACKAGE_param_path`. The list of parameter paths you should provide is written at the top of `control.launch.py`.
<include file=\"$(find-pkg-share tier4_control_launch)/launch/control.launch.py\">\n<!-- options for lateral_controller_mode: mpc_follower, pure_pursuit -->\n<!-- Parameter files -->\n<arg name=\"FOO_NODE_param_path\" value=\"...\"/>\n<arg name=\"BAR_NODE_param_path\" value=\"...\"/>\n...\n <arg name=\"lateral_controller_mode\" value=\"mpc_follower\" />\n</include>\n
"},{"location":"launch/tier4_control_launch/#notes","title":"Notes","text":"For reducing processing load, we use the Component feature in ROS 2 (similar to Nodelet in ROS 1 )
"},{"location":"launch/tier4_localization_launch/","title":"tier4_localization_launch","text":""},{"location":"launch/tier4_localization_launch/#tier4_localization_launch","title":"tier4_localization_launch","text":""},{"location":"launch/tier4_localization_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_localization_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
Include localization.launch.xml
in other launch files as follows.
You can select which methods in localization to launch as pose_estimator
or twist_estimator
by specifying pose_source
and twist_source
.
In addition, you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of localization.launch.xml
.
<include file=\"$(find-pkg-share tier4_localization_launch)/launch/localization.launch.xml\">\n<!-- Localization methods -->\n<arg name=\"pose_source\" value=\"...\"/>\n<arg name=\"twist_source\" value=\"...\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n </include>\n
"},{"location":"launch/tier4_map_launch/","title":"tier4_map_launch","text":""},{"location":"launch/tier4_map_launch/#tier4_map_launch","title":"tier4_map_launch","text":""},{"location":"launch/tier4_map_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_map_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
You can include as follows in *.launch.xml
to use map.launch.py
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of map.launch.xml
.
<arg name=\"map_path\" description=\"point cloud and lanelet2 map directory path\"/>\n<arg name=\"lanelet2_map_file\" default=\"lanelet2_map.osm\" description=\"lanelet2 map file name\"/>\n<arg name=\"pointcloud_map_file\" default=\"pointcloud_map.pcd\" description=\"pointcloud map file name\"/>\n\n<include file=\"$(find-pkg-share tier4_map_launch)/launch/map.launch.py\">\n<arg name=\"lanelet2_map_path\" value=\"$(var map_path)/$(var lanelet2_map_file)\" />\n<arg name=\"pointcloud_map_path\" value=\"$(var map_path)/$(var pointcloud_map_file)\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n</include>\n
"},{"location":"launch/tier4_map_launch/#notes","title":"Notes","text":"For reducing processing load, we use the Component feature in ROS 2 (similar to Nodelet in ROS 1 )
"},{"location":"launch/tier4_perception_launch/","title":"tier4_perception_launch","text":""},{"location":"launch/tier4_perception_launch/#tier4_perception_launch","title":"tier4_perception_launch","text":""},{"location":"launch/tier4_perception_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_perception_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
You can include as follows in *.launch.xml
to use perception.launch.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of perception.launch.xml
.
<include file=\"$(find-pkg-share tier4_perception_launch)/launch/perception.launch.xml\">\n<!-- options for mode: camera_lidar_fusion, lidar, camera -->\n<arg name=\"mode\" value=\"lidar\" />\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n </include>\n
"},{"location":"launch/tier4_planning_launch/","title":"tier4_planning_launch","text":""},{"location":"launch/tier4_planning_launch/#tier4_planning_launch","title":"tier4_planning_launch","text":""},{"location":"launch/tier4_planning_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_planning_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of planning.launch.xml
.
<include file=\"$(find-pkg-share tier4_planning_launch)/launch/planning.launch.xml\">\n<!-- Parameter files -->\n<arg name=\"FOO_NODE_param_path\" value=\"...\"/>\n<arg name=\"BAR_NODE_param_path\" value=\"...\"/>\n...\n</include>\n
"},{"location":"launch/tier4_sensing_launch/","title":"tier4_sensing_launch","text":""},{"location":"launch/tier4_sensing_launch/#tier4_sensing_launch","title":"tier4_sensing_launch","text":""},{"location":"launch/tier4_sensing_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_sensing_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
You can include as follows in *.launch.xml
to use sensing.launch.xml
.
<include file=\"$(find-pkg-share tier4_sensing_launch)/launch/sensing.launch.xml\">\n<arg name=\"launch_driver\" value=\"true\"/>\n<arg name=\"sensor_model\" value=\"$(var sensor_model)\"/>\n<arg name=\"vehicle_param_file\" value=\"$(find-pkg-share $(var vehicle_model)_description)/config/vehicle_info.param.yaml\"/>\n<arg name=\"vehicle_mirror_param_file\" value=\"$(find-pkg-share $(var vehicle_model)_description)/config/mirror.param.yaml\"/>\n</include>\n
"},{"location":"launch/tier4_sensing_launch/#launch-directory-structure","title":"Launch Directory Structure","text":"This package finds sensor settings of specified sensor model in launch
.
launch/\n\u251c\u2500\u2500 aip_x1 # Sensor model name\n\u2502 \u251c\u2500\u2500 camera.launch.xml # Camera\n\u2502 \u251c\u2500\u2500 gnss.launch.xml # GNSS\n\u2502 \u251c\u2500\u2500 imu.launch.xml # IMU\n\u2502 \u251c\u2500\u2500 lidar.launch.xml # LiDAR\n\u2502 \u2514\u2500\u2500 pointcloud_preprocessor.launch.py # for preprocessing pointcloud\n...\n
"},{"location":"launch/tier4_sensing_launch/#notes","title":"Notes","text":"This package finds settings with variables.
ex.)
<include file=\"$(find-pkg-share tier4_sensing_launch)/launch/$(var sensor_model)/lidar.launch.xml\">\n
"},{"location":"launch/tier4_simulator_launch/","title":"tier4_simulator_launch","text":""},{"location":"launch/tier4_simulator_launch/#tier4_simulator_launch","title":"tier4_simulator_launch","text":""},{"location":"launch/tier4_simulator_launch/#structure","title":"Structure","text":""},{"location":"launch/tier4_simulator_launch/#package-dependencies","title":"Package Dependencies","text":"Please see <exec_depend>
in package.xml
.
<include file=\"$(find-pkg-share tier4_simulator_launch)/launch/simulator.launch.xml\">\n<arg name=\"vehicle_info_param_file\" value=\"VEHICLE_INFO_PARAM_FILE\" />\n<arg name=\"vehicle_model\" value=\"VEHICLE_MODEL\"/>\n</include>\n
The simulator model used in simple_planning_simulator is loaded from \"config/simulator_model.param.yaml\" in the \"VEHICLE_MODEL
_description\" package.
Please see <exec_depend>
in package.xml
.
Note that you should provide parameter paths as PACKAGE_param_path
. The list of parameter paths you should provide is written at the top of system.launch.xml
.
<include file=\"$(find-pkg-share tier4_system_launch)/launch/system.launch.xml\">\n<arg name=\"run_mode\" value=\"online\"/>\n<arg name=\"sensor_model\" value=\"SENSOR_MODEL\"/>\n\n<!-- Parameter files -->\n<arg name=\"FOO_param_path\" value=\"...\"/>\n<arg name=\"BAR_param_path\" value=\"...\"/>\n...\n </include>\n
The sensing configuration parameters used in system_error_monitor are loaded from \"config/diagnostic_aggregator/sensor_kit.param.yaml\" in the \"SENSOR_MODEL
_description\" package.
Please see <exec_depend>
in package.xml
.
You can include as follows in *.launch.xml
to use vehicle.launch.xml
.
<arg name=\"vehicle_model\" default=\"sample_vehicle\" description=\"vehicle model name\"/>\n<arg name=\"sensor_model\" default=\"sample_sensor_kit\" description=\"sensor model name\"/>\n\n<include file=\"$(find-pkg-share tier4_vehicle_launch)/launch/vehicle.launch.xml\">\n<arg name=\"vehicle_model\" value=\"$(var vehicle_model)\"/>\n<arg name=\"sensor_model\" value=\"$(var sensor_model)\"/>\n</include>\n
"},{"location":"launch/tier4_vehicle_launch/#notes","title":"Notes","text":"This package finds some external packages and settings with variables and package names.
ex.)
<let name=\"vehicle_model_pkg\" value=\"$(find-pkg-share $(var vehicle_model)_description)\"/>\n
<arg name=\"config_dir\" default=\"$(find-pkg-share individual_params)/config/$(var vehicle_id)/$(var sensor_model)\"/>\n
"},{"location":"launch/tier4_vehicle_launch/#vehiclexacro","title":"vehicle.xacro","text":""},{"location":"launch/tier4_vehicle_launch/#arguments","title":"Arguments","text":"Name Type Description Default sensor_model String sensor model name \"\" vehicle_model String vehicle model name \"\""},{"location":"launch/tier4_vehicle_launch/#usage_1","title":"Usage","text":"You can write as follows in *.launch.xml
.
<arg name=\"vehicle_model\" default=\"sample_vehicle\" description=\"vehicle model name\"/>\n<arg name=\"sensor_model\" default=\"sample_sensor_kit\" description=\"sensor model name\"/>\n<arg name=\"model\" default=\"$(find-pkg-share tier4_vehicle_launch)/urdf/vehicle.xacro\"/>\n\n<node name=\"robot_state_publisher\" pkg=\"robot_state_publisher\" exec=\"robot_state_publisher\">\n<param name=\"robot_description\" value=\"$(command 'xacro $(var model) vehicle_model:=$(var vehicle_model) sensor_model:=$(var sensor_model)')\"/>\n</node>\n
"},{"location":"localization/ekf_localizer/","title":"Overview","text":""},{"location":"localization/ekf_localizer/#overview","title":"Overview","text":"The Extend Kalman Filter Localizer estimates robust and less noisy robot pose and twist by integrating the 2D vehicle dynamics model with input ego-pose and ego-twist messages. The algorithm is designed especially for fast-moving robots such as autonomous driving systems.
"},{"location":"localization/ekf_localizer/#flowchart","title":"Flowchart","text":"The overall flowchart of the ekf_localizer is described below.
"},{"location":"localization/ekf_localizer/#features","title":"Features","text":"
This package includes the following features:
"},{"location":"localization/ekf_localizer/#node","title":"Node","text":""},{"location":"localization/ekf_localizer/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Description
measured_pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
Input pose source with the measurement covariance matrix. measured_twist_with_covariance
geometry_msgs::msg::TwistWithCovarianceStamped
Input twist source with the measurement covariance matrix. initialpose
geometry_msgs::msg::PoseWithCovarianceStamped
Initial pose for EKF. The estimated pose is initialized with zeros at the start. It is initialized with this message whenever published."},{"location":"localization/ekf_localizer/#published-topics","title":"Published Topics","text":"Name Type Description ekf_odom
nav_msgs::msg::Odometry
Estimated odometry. ekf_pose
geometry_msgs::msg::PoseStamped
Estimated pose. ekf_pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
Estimated pose with covariance. ekf_biased_pose
geometry_msgs::msg::PoseStamped
Estimated pose including the yaw bias ekf_biased_pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
Estimated pose with covariance including the yaw bias ekf_twist
geometry_msgs::msg::TwistStamped
Estimated twist. ekf_twist_with_covariance
geometry_msgs::msg::TwistWithCovarianceStamped
The estimated twist with covariance. diagnostics
diagnostics_msgs::msg::DiagnosticArray
The diagnostic information."},{"location":"localization/ekf_localizer/#published-tf","title":"Published TF","text":"map
coordinate to estimated pose.The current robot state is predicted from previously estimated data using a given prediction model. This calculation is called at a constant interval (predict_frequency [Hz]
). The prediction equation is described at the end of this page.
Before the update, the Mahalanobis distance is calculated between the measured input and the predicted state, the measurement update is not performed for inputs where the Mahalanobis distance exceeds the given threshold.
The predicted state is updated with the latest measured inputs, measured_pose, and measured_twist. The updates are performed with the same frequency as prediction, usually at a high frequency, in order to enable smooth state estimation.
"},{"location":"localization/ekf_localizer/#parameter-description","title":"Parameter description","text":"The parameters are set in launch/ekf_localizer.launch
.
note: process noise for positions x & y are calculated automatically from nonlinear dynamics.
"},{"location":"localization/ekf_localizer/#simple-1d-filter-parameters","title":"Simple 1D Filter Parameters","text":"Name Type Description Default value z_filter_proc_dev double Simple1DFilter - Z filter process deviation 1.0 roll_filter_proc_dev double Simple1DFilter - Roll filter process deviation 0.01 pitch_filter_proc_dev double Simple1DFilter - Pitch filter process deviation 0.01"},{"location":"localization/ekf_localizer/#for-diagnostics","title":"For diagnostics","text":"Name Type Description Default value pose_no_update_count_threshold_warn size_t The threshold at which a WARN state is triggered due to the Pose Topic update not happening continuously for a certain number of times. 50 pose_no_update_count_threshold_error size_t The threshold at which an ERROR state is triggered due to the Pose Topic update not happening continuously for a certain number of times. 250 twist_no_update_count_threshold_warn size_t The threshold at which a WARN state is triggered due to the Twist Topic update not happening continuously for a certain number of times. 50 twist_no_update_count_threshold_error size_t The threshold at which an ERROR state is triggered due to the Twist Topic update not happening continuously for a certain number of times. 250"},{"location":"localization/ekf_localizer/#misc","title":"Misc","text":"Name Type Description Default value threshold_observable_velocity_mps double Minimum value for velocity that will be used for EKF. Mainly used for dead zone in velocity sensor 0.0 (disabled)"},{"location":"localization/ekf_localizer/#how-to-tune-ekf-parameters","title":"How to tune EKF parameters","text":""},{"location":"localization/ekf_localizer/#0-preliminaries","title":"0. Preliminaries","text":"twist_additional_delay
and pose_additional_delay
to correct the time.Set standard deviation for each sensor. The pose_measure_uncertainty_time
is for the uncertainty of the header timestamp data. You can also tune a number of steps for smoothing for each observed sensor data by tuning *_smoothing_steps
. Increasing the number will improve the smoothness of the estimation, but may have an adverse effect on the estimation performance.
pose_measure_uncertainty_time
pose_smoothing_steps
twist_smoothing_steps
proc_stddev_vx_c
: set to maximum linear accelerationproc_stddev_wz_c
: set to maximum angular accelerationproc_stddev_yaw_c
: This parameter describes the correlation between the yaw and yaw rate. A large value means the change in yaw does not correlate to the estimated yaw rate. If this is set to 0, it means the change in estimated yaw is equal to yaw rate. Usually, this should be set to 0.proc_stddev_yaw_bias_c
: This parameter is the standard deviation for the rate of change in yaw bias. In most cases, yaw bias is constant, so it can be very small, but must be non-zero.where, \\(\\theta_k\\) represents the vehicle's heading angle, including the mounting angle bias. \\(b_k\\) is a correction term for the yaw bias, and it is modeled so that \\((\\theta_k+b_k)\\) becomes the heading angle of the base_link. The pose_estimator is expected to publish the base_link in the map coordinate system. However, the yaw angle may be offset due to calibration errors. This model compensates this error and improves estimation accuracy.
"},{"location":"localization/ekf_localizer/#time-delay-model","title":"time delay model","text":"The measurement time delay is handled by an augmented state [1] (See, Section 7.3 FIXED-LAG SMOOTHING).
Note that, although the dimension gets larger since the analytical expansion can be applied based on the specific structures of the augmented states, the computational complexity does not significantly change.
"},{"location":"localization/ekf_localizer/#test-result-with-autoware-ndt","title":"Test Result with Autoware NDT","text":""},{"location":"localization/ekf_localizer/#diagnostics","title":"Diagnostics","text":""},{"location":"localization/ekf_localizer/#the-conditions-that-result-in-a-warn-state","title":"The conditions that result in a WARN state","text":"pose_no_update_count_threshold_warn
/twist_no_update_count_threshold_warn
.pose_no_update_count_threshold_error
/twist_no_update_count_threshold_error
.b_k
in the current EKF state would not make any sense and cannot correctly handle these multiple yaw biases. Thus, future work includes introducing yaw bias for each sensor with yaw estimation.[1] Anderson, B. D. O., & Moore, J. B. (1979). Optimal filtering. Englewood Cliffs, NJ: Prentice-Hall.
"},{"location":"localization/geo_pose_projector/","title":"geo_pose_projector","text":""},{"location":"localization/geo_pose_projector/#geo_pose_projector","title":"geo_pose_projector","text":""},{"location":"localization/geo_pose_projector/#overview","title":"Overview","text":"This node is a simple node that subscribes to the geo-referenced pose topic and publishes the pose in the map frame.
"},{"location":"localization/geo_pose_projector/#subscribed-topics","title":"Subscribed Topics","text":"Name Type Descriptioninput_geo_pose
geographic_msgs::msg::GeoPoseWithCovarianceStamped
geo-referenced pose /map/map_projector_info
tier4_map_msgs::msg::MapProjectedObjectInfo
map projector info"},{"location":"localization/geo_pose_projector/#published-topics","title":"Published Topics","text":"Name Type Description output_pose
geometry_msgs::msg::PoseWithCovarianceStamped
pose in map frame /tf
tf2_msgs::msg::TFMessage
tf from parent link to the child link"},{"location":"localization/geo_pose_projector/#parameters","title":"Parameters","text":"Name Type Description Default Range publish_tf boolean whether to publish tf True N/A parent_frame string parent frame for published tf map N/A child_frame string child frame for published tf pose_estimator_base_link N/A"},{"location":"localization/geo_pose_projector/#limitations","title":"Limitations","text":"The covariance conversion may be incorrect depending on the projection type you are using. The covariance of input topic is expressed in (Latitude, Longitude, Altitude) as a diagonal matrix. Currently, we assume that the x axis is the east direction and the y axis is the north direction. Thus, the conversion may be incorrect when this assumption breaks, especially when the covariance of latitude and longitude is different.
"},{"location":"localization/gyro_odometer/","title":"gyro_odometer","text":""},{"location":"localization/gyro_odometer/#gyro_odometer","title":"gyro_odometer","text":""},{"location":"localization/gyro_odometer/#purpose","title":"Purpose","text":"gyro_odometer
is the package to estimate twist by combining imu and vehicle speed.
vehicle/twist_with_covariance
geometry_msgs::msg::TwistWithCovarianceStamped
twist with covariance from vehicle imu
sensor_msgs::msg::Imu
imu from sensor"},{"location":"localization/gyro_odometer/#output","title":"Output","text":"Name Type Description twist_with_covariance
geometry_msgs::msg::TwistWithCovarianceStamped
estimated twist with covariance"},{"location":"localization/gyro_odometer/#parameters","title":"Parameters","text":"Name Type Description Default Range output_frame string output's frame id base_link N/A message_timeout_sec float delay tolerance time for message 0.2 N/A"},{"location":"localization/gyro_odometer/#assumptions-known-limits","title":"Assumptions / Known limits","text":"This directory contains packages for landmark-based localization.
Landmarks are, for example
etc.
Since these landmarks are easy to detect and estimate pose, the ego pose can be calculated from the pose of the detected landmark if the pose of the landmark is written on the map in advance.
Currently, landmarks are assumed to be flat.
The following figure shows the principle of localization in the case of ar_tag_based_localizer
.
This calculated ego pose is passed to the EKF, where it is fused with the twist information and used to estimate a more accurate ego pose.
"},{"location":"localization/landmark_based_localizer/#node-diagram","title":"Node diagram","text":""},{"location":"localization/landmark_based_localizer/#landmark_manager","title":"landmark_manager
","text":"The definitions of the landmarks written to the map are introduced in the next section. See Map Specifications
.
The landmark_manager
is a utility package to load landmarks from the map.
Users can define landmarks as Lanelet2 4-vertex polygons. In this case, it is possible to define an arrangement in which the four vertices cannot be considered to be on the same plane. The direction of the landmark in that case is difficult to calculate. So, if the 4 vertices are considered as forming a tetrahedron and its volume exceeds the volume_threshold
parameter, the landmark will not publish tf_static.
See https://github.com/autowarefoundation/autoware_common/blob/main/tmp/lanelet2_extension/docs/lanelet2_format_extension.md#localization-landmarks
"},{"location":"localization/landmark_based_localizer/#about-consider_orientation","title":"Aboutconsider_orientation
","text":"The calculate_new_self_pose
function in the LandmarkManager
class includes a boolean argument named consider_orientation
. This argument determines the method used to calculate the new self pose based on detected and mapped landmarks. The following image illustrates the difference between the two methods.
consider_orientation = true
","text":"In this mode, the new self pose is calculated so that the relative Pose of the \"landmark detected from the current self pose\" is equal to the relative Pose of the \"landmark mapped from the new self pose\". This method can correct for orientation, but is strongly affected by the orientation error of the landmark detection.
"},{"location":"localization/landmark_based_localizer/#consider_orientation-false","title":"consider_orientation = false
","text":"In this mode, the new self pose is calculated so that only the relative position is correct for x, y, and z.
This method can not correct for orientation, but it is not affected by the orientation error of the landmark detection.
"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/","title":"AR Tag Based Localizer","text":""},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#ar-tag-based-localizer","title":"AR Tag Based Localizer","text":"ArTagBasedLocalizer is a vision-based localization node.
This node uses the ArUco library to detect AR-Tags from camera images and calculates and publishes the pose of the ego vehicle based on these detections. The positions and orientations of the AR-Tags are assumed to be written in the Lanelet2 format.
"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#ar_tag_based_localizer-node","title":"ar_tag_based_localizer
node","text":""},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#input","title":"Input","text":"Name Type Description ~/input/lanelet2_map
autoware_auto_mapping_msgs::msg::HADMapBin
Data of lanelet2 ~/input/image
sensor_msgs::msg::Image
Camera Image ~/input/camera_info
sensor_msgs::msg::CameraInfo
Camera Info ~/input/ekf_pose
geometry_msgs::msg::PoseWithCovarianceStamped
EKF Pose without IMU correction. It is used to validate detected AR tags by filtering out false positives: the AR tag-detected pose is considered valid and published only if the EKF pose and the AR tag-detected pose are within a certain temporal and spatial range."},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#output","title":"Output","text":"Name Type Description ~/output/pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
Estimated Pose ~/debug/result
sensor_msgs::msg::Image
[debug topic] Image in which marker detection results are superimposed on the input image ~/debug/marker
visualization_msgs::msg::MarkerArray
[debug topic] Loaded landmarks to visualize in Rviz as thin boards /tf
geometry_msgs::msg::TransformStamped
[debug topic] TF from camera to detected tag /diagnostics
diagnostic_msgs::msg::DiagnosticArray
Diagnostics outputs"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#parameters","title":"Parameters","text":"Name Type Description Default Range marker_size float marker_size 0.6 N/A target_tag_ids array target_tag_ids ['0','1','2','3','4','5','6'] N/A base_covariance array base_covariance [0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02] N/A distance_threshold float distance_threshold(m) 13.0 N/A consider_orientation boolean consider_orientation false N/A detection_mode string detection_mode select from [DM_NORMAL, DM_FAST, DM_VIDEO_FAST] DM_NORMAL N/A min_marker_size float min_marker_size 0.02 N/A ekf_time_tolerance float ekf_time_tolerance(sec) 5.0 N/A ekf_position_tolerance float ekf_position_tolerance(m) 10.0 N/A"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#how-to-launch","title":"How to launch","text":"When launching Autoware, set artag
for pose_source
.
ros2 launch autoware_launch ... \\\npose_source:=artag \\\n...\n
"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#rosbag","title":"Rosbag","text":""},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#sample-rosbag-and-map-awsim-data","title":"Sample rosbag and map (AWSIM data)","text":"This data is simulated data created by AWSIM. Essentially, AR tag-based self-localization is not intended for such public road driving, but for driving in a smaller area, so the max driving speed is set at 15 km/h.
A known issue is that the estimated pose can change significantly depending on the timing at which each AR tag first comes into detection range.
"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#sample-rosbag-and-map-real-world-data","title":"Sample rosbag and map (Real world data)","text":"Please remap the topic names and play it.
ros2 bag play /path/to/ar_tag_based_localizer_sample_bag/ -r 0.5 -s sqlite3 \\\n--remap /sensing/camera/front/image:=/sensing/camera/traffic_light/image_raw \\\n/sensing/camera/front/image/info:=/sensing/camera/traffic_light/camera_info\n
This dataset contains issues such as missing IMU data, and overall the accuracy is low. Even when running AR tag-based self-localization, significant differences from the true trajectory can be observed.
The image below shows the trajectory when the sample is executed and plotted.
The pull request video below should also be helpful.
https://github.com/autowarefoundation/autoware.universe/pull/4347#issuecomment-1663155248
"},{"location":"localization/landmark_based_localizer/ar_tag_based_localizer/#principle","title":"Principle","text":""},{"location":"localization/localization_error_monitor/","title":"localization_error_monitor","text":""},{"location":"localization/localization_error_monitor/#localization_error_monitor","title":"localization_error_monitor","text":""},{"location":"localization/localization_error_monitor/#purpose","title":"Purpose","text":"localization_error_monitor is a package for diagnosing localization errors by monitoring uncertainty of the localization results. The package monitors the following two values:
input/pose_with_cov
geometry_msgs::msg::PoseWithCovarianceStamped
localization result"},{"location":"localization/localization_error_monitor/#output","title":"Output","text":"Name Type Description debug/ellipse_marker
visualization_msgs::msg::Marker
ellipse marker diagnostics
diagnostic_msgs::msg::DiagnosticArray
diagnostics outputs"},{"location":"localization/localization_error_monitor/#parameters","title":"Parameters","text":"Name Type Description Default Range scale float scale factor for monitored values 3 N/A error_ellipse_size float error threshold for long radius of confidence ellipse [m] 1.5 N/A warn_ellipse_size float warning threshold for long radius of confidence ellipse [m] 1.2 N/A error_ellipse_size_lateral_direction float error threshold for size of confidence ellipse along lateral direction [m] 0.3 N/A warn_ellipse_size_lateral_direction float warning threshold for size of confidence ellipse along lateral direction [m] 0.25 N/A"},{"location":"localization/localization_util/","title":"localization_util","text":""},{"location":"localization/localization_util/#localization_util","title":"localization_util","text":"`localization_util`` is a localization utility package.
This package does not have a node; it is just a library.
"},{"location":"localization/ndt_scan_matcher/","title":"ndt_scan_matcher","text":""},{"location":"localization/ndt_scan_matcher/#ndt_scan_matcher","title":"ndt_scan_matcher","text":""},{"location":"localization/ndt_scan_matcher/#purpose","title":"Purpose","text":"ndt_scan_matcher is a package for position estimation using the NDT scan matching method.
There are two main functions in this package: estimating the ego position by NDT scan matching, and estimating the initial position via a ROS service using the Monte Carlo method.
One optional function is regularization. It is disabled by default; please see the regularization section below for details.
"},{"location":"localization/ndt_scan_matcher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/ndt_scan_matcher/#input","title":"Input","text":"Name Type Descriptionekf_pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
initial pose points_raw
sensor_msgs::msg::PointCloud2
sensor pointcloud sensing/gnss/pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
base position for regularization term sensing/gnss/pose_with_covariance
is required only when regularization is enabled.
ndt_pose
geometry_msgs::msg::PoseStamped
estimated pose ndt_pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
estimated pose with covariance /diagnostics
diagnostic_msgs::msg::DiagnosticArray
diagnostics points_aligned
sensor_msgs::msg::PointCloud2
[debug topic] pointcloud aligned by scan matching points_aligned_no_ground
sensor_msgs::msg::PointCloud2
[debug topic] no ground pointcloud aligned by scan matching initial_pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
[debug topic] initial pose used in scan matching multi_ndt_pose
geometry_msgs::msg::PoseArray
[debug topic] estimated poses from multiple initial poses in real-time covariance estimation multi_initial_pose
geometry_msgs::msg::PoseArray
[debug topic] initial poses for real-time covariance estimation exe_time_ms
tier4_debug_msgs::msg::Float32Stamped
[debug topic] execution time for scan matching [ms] transform_probability
tier4_debug_msgs::msg::Float32Stamped
[debug topic] score of scan matching no_ground_transform_probability
tier4_debug_msgs::msg::Float32Stamped
[debug topic] score of scan matching based on no ground LiDAR scan iteration_num
tier4_debug_msgs::msg::Int32Stamped
[debug topic] number of scan matching iterations initial_to_result_relative_pose
geometry_msgs::msg::PoseStamped
[debug topic] relative pose between the initial point and the convergence point initial_to_result_distance
tier4_debug_msgs::msg::Float32Stamped
[debug topic] distance difference between the initial point and the convergence point [m] initial_to_result_distance_old
tier4_debug_msgs::msg::Float32Stamped
[debug topic] distance difference between the older of the two initial points used in linear interpolation and the convergence point [m] initial_to_result_distance_new
tier4_debug_msgs::msg::Float32Stamped
[debug topic] distance difference between the newer of the two initial points used in linear interpolation and the convergence point [m] ndt_marker
visualization_msgs::msg::MarkerArray
[debug topic] markers for debugging monte_carlo_initial_pose_marker
visualization_msgs::msg::MarkerArray
[debug topic] particles used in initial position estimation"},{"location":"localization/ndt_scan_matcher/#service","title":"Service","text":"Name Type Description ndt_align_srv
autoware_localization_srvs::srv::PoseWithCovarianceStamped
service to estimate initial pose"},{"location":"localization/ndt_scan_matcher/#parameters","title":"Parameters","text":""},{"location":"localization/ndt_scan_matcher/#core-parameters","title":"Core Parameters","text":"Name Type Description base_frame
string Vehicle reference frame ndt_base_frame
string NDT reference frame map_frame
string map frame input_sensor_points_queue_size
int Subscriber queue size trans_epsilon
double The max difference between two consecutive transformations to consider convergence step_size
double The newton line search maximum step length resolution
double The ND voxel grid resolution [m] max_iterations
int The number of iterations required to calculate alignment converged_param_type
int The type of indicators for scan matching score (0: TP, 1: NVTL) converged_param_transform_probability
double TP threshold for deciding whether to trust the estimation result (when converged_param_type = 0) converged_param_nearest_voxel_transformation_likelihood
double NVTL threshold for deciding whether to trust the estimation result (when converged_param_type = 1) initial_estimate_particles_num
int The number of particles to estimate initial pose n_startup_trials
int The number of initial random trials in the TPE (Tree-Structured Parzen Estimator). lidar_topic_timeout_sec
double Tolerance of timestamp difference between current time and sensor pointcloud initial_pose_timeout_sec
int Tolerance of timestamp difference between initial_pose and sensor pointcloud. [sec] initial_pose_distance_tolerance_m
double Tolerance of distance difference between two initial poses used for linear interpolation. [m] num_threads
int Number of threads used for parallel computing output_pose_covariance
std::array The covariance of output pose (TP: Transform Probability, NVTL: Nearest Voxel Transform Probability)
"},{"location":"localization/ndt_scan_matcher/#regularization","title":"Regularization","text":""},{"location":"localization/ndt_scan_matcher/#abstract","title":"Abstract","text":"This is a function that adds the regularization term to the NDT optimization problem as follows.
\\[ \\begin{align} \\min_{\\mathbf{R},\\mathbf{t}} \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) +\\mathrm{scale\\ factor}\\cdot \\left| \\mathbf{R}^\\top (\\mathbf{t_{base}-\\mathbf{t}}) \\cdot \\begin{pmatrix} 1\\\\ 0\\\\ 0 \\end{pmatrix} \\right|^2 \\end{align} \\], where t_base is base position measured by GNSS or other means. NDT(R,t) stands for the pure NDT cost function. The regularization term shifts the optimal solution to the base position in the longitudinal direction of the vehicle. Only errors along the longitudinal direction with respect to the base position are considered; errors along Z-axis and lateral-axis error are not considered.
Although the regularization term has rotation as a parameter, the gradient and Hessian associated with it are not computed, in order to stabilize the optimization. Specifically, the gradients are computed as follows.
\\[ \\begin{align} &g_x=\\nabla_x \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + 2 \\mathrm{scale\\ factor} \\cos\\theta_z\\cdot e_{\\mathrm{longitudinal}} \\\\ &g_y=\\nabla_y \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) + 2 \\mathrm{scale\\ factor} \\sin\\theta_z\\cdot e_{\\mathrm{longitudinal}} \\\\ &g_z=\\nabla_z \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) \\\\ &g_\\mathbf{R}=\\nabla_\\mathbf{R} \\mathrm{NDT}(\\mathbf{R},\\mathbf{t}) \\end{align} \\]Regularization is disabled by default. If you wish to use it, please edit the following parameters to enable it.
"},{"location":"localization/ndt_scan_matcher/#where-is-regularization-available","title":"Where is regularization available","text":"This feature is effective on feature-less roads where GNSS is available, such as
By remapping the base position topic to something other than GNSS, as described below, it can be valid outside of these.
"},{"location":"localization/ndt_scan_matcher/#using-other-base-position","title":"Using other base position","text":"Other than GNSS, you can give other global position topics obtained from magnetic markers, visual markers or etc. if they are available in your environment. (Currently Autoware does not provide a node that gives such pose.) To use your topic for regularization, you need to remap the input_regularization_pose_topic
with your topic in ndt_scan_matcher.launch.xml
. By default, it is remapped with /sensing/gnss/pose_with_covariance
.
Since this function determines the base position by linear interpolation from the recently subscribed poses, topics that are published at a low frequency relative to the driving speed cannot be used. Inappropriate linear interpolation may result in bad optimization results.
When using GNSS for base location, the regularization can have negative effects in tunnels, indoors, and near skyscrapers. This is because if the base position is far off from the true value, NDT scan matching may converge to inappropriate optimal position.
"},{"location":"localization/ndt_scan_matcher/#parameters_1","title":"Parameters","text":"Name Type Descriptionregularization_enabled
bool Flag to add regularization term to NDT optimization (FALSE by default) regularization_scale_factor
double Coefficient of the regularization term. Regularization is disabled by default because GNSS is not always accurate enough to serve the appropriate base position in any scenes.
If the scale_factor is too large, the NDT will be drawn to the base position and scan matching may fail. Conversely, if it is too small, the regularization benefit will be lost.
Note that setting scale_factor to 0 is equivalent to disabling regularization.
"},{"location":"localization/ndt_scan_matcher/#example","title":"Example","text":"The following figures show tested maps.
The following figures show the trajectories estimated on the feature-less map with standard NDT and regularization-enabled NDT, respectively. The color of the trajectory indicates the error (meter) from the reference trajectory, which is computed with the feature-rich map.
"},{"location":"localization/ndt_scan_matcher/#dynamic-map-loading","title":"Dynamic map loading","text":"
Autoware supports dynamic map loading feature for ndt_scan_matcher
. Using this feature, NDT dynamically requests for the surrounding pointcloud map to pointcloud_map_loader
, and then receive and preprocess the map in an online fashion.
Using the feature, ndt_scan_matcher
can theoretically handle any large size maps in terms of memory usage. (Note that it is still possible that there exists a limitation due to other factors, e.g. floating-point error)
debug/loaded_pointcloud_map
sensor_msgs::msg::PointCloud2
pointcloud maps used for localization (for debug)"},{"location":"localization/ndt_scan_matcher/#additional-client","title":"Additional client","text":"Name Type Description client_map_loader
autoware_map_msgs::srv::GetDifferentialPointCloudMap
map loading client"},{"location":"localization/ndt_scan_matcher/#parameters_2","title":"Parameters","text":"Name Type Description dynamic_map_loading_update_distance
double Distance traveled to load new map(s) dynamic_map_loading_map_radius
double Map loading radius for every update lidar_radius
double LiDAR radius used for localization (only used for diagnosis)"},{"location":"localization/ndt_scan_matcher/#notes-for-dynamic-map-loading","title":"Notes for dynamic map loading","text":"To use dynamic map loading feature for ndt_scan_matcher
, you also need to split the PCD files into grids (recommended size: 20[m] x 20[m])
Note that the dynamic map loading may FAIL if the map is split into two or more large size map (e.g. 1000[m] x 1000[m]). Please provide either of
Here is a split PCD map for sample-map-rosbag
from Autoware tutorial: sample-map-rosbag_split.zip
This is a function that uses no ground LiDAR scan to estimate the scan matching score. This score can reflect the current localization performance more accurately. related issue.
"},{"location":"localization/ndt_scan_matcher/#parameters_3","title":"Parameters","text":"Name Type Descriptionestimate_scores_by_no_ground_points
bool Flag for using scan matching score based on no ground LiDAR scan (FALSE by default) z_margin_for_ground_removal
double Z-value margin for removal ground points"},{"location":"localization/ndt_scan_matcher/#2d-real-time-covariance-estimation","title":"2D real-time covariance estimation","text":""},{"location":"localization/ndt_scan_matcher/#abstract_2","title":"Abstract","text":"Calculate 2D covariance (xx, xy, yx, yy) in real time using the NDT convergence from multiple initial poses. The arrangement of multiple initial poses is efficiently limited by the Hessian matrix of the NDT score function. In this implementation, the number of initial positions is fixed to simplify the code. The covariance can be seen as error ellipse from ndt_pose_with_covariance setting on rviz2. original paper.
Note that this function may spoil healthy system behavior if it consumes much calculation resources.
"},{"location":"localization/ndt_scan_matcher/#parameters_4","title":"Parameters","text":"initial_pose_offset_model is rotated around (x,y) = (0,0) in the direction of the first principal component of the Hessian matrix. initial_pose_offset_model_x & initial_pose_offset_model_y must have the same number of elements.
Name Type Descriptionuse_covariance_estimation
bool Flag for using real-time covariance estimation (FALSE by default) initial_pose_offset_model_x
std::vector X-axis offset [m] initial_pose_offset_model_y
std::vector Y-axis offset [m]"},{"location":"localization/pose2twist/","title":"pose2twist","text":""},{"location":"localization/pose2twist/#pose2twist","title":"pose2twist","text":""},{"location":"localization/pose2twist/#purpose","title":"Purpose","text":"This pose2twist
calculates the velocity from the input pose history. In addition to the computed twist, this node outputs the linear-x and angular-z components as a float message to simplify debugging.
The twist.linear.x
is calculated as sqrt(dx * dx + dy * dy + dz * dz) / dt
, and the values in the y
and z
fields are zero. The twist.angular
is calculated as d_roll / dt
, d_pitch / dt
and d_yaw / dt
for each field.
none.
"},{"location":"localization/pose2twist/#assumptions-known-limits","title":"Assumptions / Known limits","text":"none.
"},{"location":"localization/pose_initializer/","title":"pose_initializer","text":""},{"location":"localization/pose_initializer/#pose_initializer","title":"pose_initializer","text":""},{"location":"localization/pose_initializer/#purpose","title":"Purpose","text":"The pose_initializer
is the package to send an initial pose to ekf_localizer
. It receives roughly estimated initial pose from GNSS/user. Passing the pose to ndt_scan_matcher
, and it gets a calculated ego pose from ndt_scan_matcher
via service. Finally, it publishes the initial pose to ekf_localizer
. This node depends on the map height fitter library. See here for more details.
ekf_enabled
bool If true, EKF localizer is activated. ndt_enabled
bool If true, the pose will be estimated by NDT scan matcher, otherwise it is passed through. stop_check_enabled
bool If true, initialization is accepted only when the vehicle is stopped. stop_check_duration
bool The duration used for the stop check above. gnss_enabled
bool If true, use the GNSS pose when no pose is specified. gnss_pose_timeout
bool The duration that the GNSS pose is valid."},{"location":"localization/pose_initializer/#services","title":"Services","text":"Name Type Description /localization/initialize
autoware_adapi_v1_msgs::srv::InitializeLocalization initial pose from api"},{"location":"localization/pose_initializer/#clients","title":"Clients","text":"Name Type Description /localization/pose_estimator/ndt_align_srv
tier4_localization_msgs::srv::PoseWithCovarianceStamped pose estimation service"},{"location":"localization/pose_initializer/#subscriptions","title":"Subscriptions","text":"Name Type Description /sensing/gnss/pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped pose from gnss /sensing/vehicle_velocity_converter/twist_with_covariance
geometry_msgs::msg::TwistStamped twist for stop check"},{"location":"localization/pose_initializer/#publications","title":"Publications","text":"Name Type Description /localization/initialization_state
autoware_adapi_v1_msgs::msg::LocalizationInitializationState pose initialization state /initialpose3d
geometry_msgs::msg::PoseWithCovarianceStamped calculated initial ego pose"},{"location":"localization/pose_initializer/#connection-with-default-ad-api","title":"Connection with Default AD API","text":"This pose_initializer
is used via default AD API. For detailed description of the API description, please refer to the description of default_ad_api
.
The pose_instability_detector
package includes a node designed to monitor the stability of /localization/kinematic_state
, which is an output topic of the Extended Kalman Filter (EKF).
This node triggers periodic timer callbacks to compare two poses:
/localization/kinematic_state
over a duration specified by interval_sec
./localization/kinematic_state
.The results of this comparison are then output to the /diagnostics
topic.
If this node outputs WARN messages to /diagnostics
, it means that the EKF output is significantly different from the integrated twist values. This discrepancy suggests that there may be an issue with either the estimated pose or the input twist.
The following diagram provides an overview of what the timeline of this process looks like:
"},{"location":"localization/pose_instability_detector/#parameters","title":"Parameters","text":"Name Type Description Default Range interval_sec float The interval of timer_callback in seconds. 1 >0 threshold_diff_position_x float The threshold of diff_position x (m). 1 \u22650.0 threshold_diff_position_y float The threshold of diff_position y (m). 1 \u22650.0 threshold_diff_position_z float The threshold of diff_position z (m). 1 \u22650.0 threshold_diff_angle_x float The threshold of diff_angle x (rad). 1 \u22650.0 threshold_diff_angle_y float The threshold of diff_angle y (rad). 1 \u22650.0 threshold_diff_angle_z float The threshold of diff_angle z (rad). 1 \u22650.0"},{"location":"localization/pose_instability_detector/#input","title":"Input","text":"Name Type Description~/input/odometry
nav_msgs::msg::Odometry Pose estimated by EKF ~/input/twist
geometry_msgs::msg::TwistWithCovarianceStamped Twist"},{"location":"localization/pose_instability_detector/#output","title":"Output","text":"Name Type Description ~/debug/diff_pose
geometry_msgs::msg::PoseStamped diff_pose /diagnostics
diagnostic_msgs::msg::DiagnosticArray Diagnostics"},{"location":"localization/stop_filter/","title":"stop_filter","text":""},{"location":"localization/stop_filter/#stop_filter","title":"stop_filter","text":""},{"location":"localization/stop_filter/#purpose","title":"Purpose","text":"When this function did not exist, each node used a different criterion to determine whether the vehicle is stopping or not, resulting that some nodes were in operation of stopping the vehicle and some nodes continued running in the drive mode. This node aims to:
input/odom
nav_msgs::msg::Odometry
localization odometry"},{"location":"localization/stop_filter/#output","title":"Output","text":"Name Type Description output/odom
nav_msgs::msg::Odometry
odometry with suppressed longitudinal and yaw twist debug/stop_flag
tier4_debug_msgs::msg::BoolStamped
flag to represent whether the vehicle is stopping or not"},{"location":"localization/stop_filter/#parameters","title":"Parameters","text":"Name Type Description Default Range vx_threshold float Longitudinal velocity threshold to determine if the vehicle is stopping. [m/s] 0.01 \u22650.0 wz_threshold float Yaw velocity threshold to determine if the vehicle is stopping. [rad/s] 0.01 \u22650.0"},{"location":"localization/tree_structured_parzen_estimator/","title":"tree_structured_parzen_estimator","text":""},{"location":"localization/tree_structured_parzen_estimator/#tree_structured_parzen_estimator","title":"tree_structured_parzen_estimator","text":"`tree_structured_parzen_estimator`` is a package for black-box optimization.
This package does not have a node, it is just a library.
"},{"location":"localization/twist2accel/","title":"twist2accel","text":""},{"location":"localization/twist2accel/#twist2accel","title":"twist2accel","text":""},{"location":"localization/twist2accel/#purpose","title":"Purpose","text":"This package is responsible for estimating acceleration using the output of ekf_localizer
. It uses lowpass filter to mitigate the noise.
input/odom
nav_msgs::msg::Odometry
localization odometry input/twist
geometry_msgs::msg::TwistWithCovarianceStamped
twist"},{"location":"localization/twist2accel/#output","title":"Output","text":"Name Type Description output/accel
geometry_msgs::msg::AccelWithCovarianceStamped
estimated acceleration"},{"location":"localization/twist2accel/#parameters","title":"Parameters","text":"Name Type Description use_odom
bool use odometry if true, else use twist input (default: true) accel_lowpass_gain
double lowpass gain for lowpass filter in estimating acceleration (default: 0.9)"},{"location":"localization/twist2accel/#future-work","title":"Future work","text":"Future work includes integrating acceleration into the EKF state.
"},{"location":"localization/yabloc/","title":"YabLoc","text":""},{"location":"localization/yabloc/#yabloc","title":"YabLoc","text":"YabLoc is vision-based localization with vector map. https://youtu.be/Eaf6r_BNFfk
It estimates position by matching road surface markings extracted from images with a vector map. Point cloud maps and LiDAR are not required. YabLoc enables users localize vehicles that are not equipped with LiDAR and in environments where point cloud maps are not available.
"},{"location":"localization/yabloc/#packages","title":"Packages","text":"When launching autoware, if you set pose_source:=yabloc
as an argument, YabLoc will be launched instead of NDT. By default, pose_source
is ndt
.
A sample command to run YabLoc is as follows
ros2 launch autoware_launch logging_simulator.launch.xml \\\nmap_path:=$HOME/autoware_map/sample-map-rosbag\\\nvehicle_model:=sample_vehicle \\\nsensor_model:=sample_sensor_kit \\\npose_source:=yabloc\n
"},{"location":"localization/yabloc/#architecture","title":"Architecture","text":""},{"location":"localization/yabloc/#principle","title":"Principle","text":"The diagram below illustrates the basic principle of YabLoc. It extracts road surface markings by extracting the line segments using the road area obtained from graph-based segmentation. The red line at the center-top of the diagram represents the line segments identified as road surface markings. YabLoc transforms these segments for each particle and determines the particle's weight by comparing them with the cost map generated from Lanelet2.
"},{"location":"localization/yabloc/#visualization","title":"Visualization","text":""},{"location":"localization/yabloc/#core-visualization-topics","title":"Core visualization topics","text":"These topics are not visualized by default.
index topic name description 1/localization/yabloc/pf/predicted_particle_marker
particle distribution of particle filter. Red particles are probable candidate. 2 /localization/yabloc/pf/scored_cloud
3D projected line segments. the color indicates how well they match the map. 3 /localization/yabloc/image_processing/lanelet2_overlay_image
overlay of lanelet2 (yellow lines) onto image based on estimated pose. If they match well with the actual road markings, it means that the localization performs well."},{"location":"localization/yabloc/#image-topics-for-debug","title":"Image topics for debug","text":"These topics are not visualized by default.
index topic name description 1/localization/yabloc/pf/cost_map_image
cost map made from lanelet2 2 /localization/yabloc/pf/match_image
projected line segments 3 /localization/yabloc/image_processing/image_with_colored_line_segment
classified line segments. green line segments are used in particle correction 4 /localization/yabloc/image_processing/lanelet2_overlay_image
overlay of lanelet2 5 /localization/yabloc/image_processing/segmented_image
graph based segmentation result"},{"location":"localization/yabloc/#limitation","title":"Limitation","text":"This package contains some executable nodes related to map. Also, This provides some yabloc common library.
It estimates the height and tilt of the ground from lanelet2.
"},{"location":"localization/yabloc/yabloc_common/#input-outputs","title":"Input / Outputs","text":""},{"location":"localization/yabloc/yabloc_common/#input","title":"Input","text":"Name Type Descriptioninput/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map input/pose
geometry_msgs::msg::PoseStamped
estimated self pose"},{"location":"localization/yabloc/yabloc_common/#output","title":"Output","text":"Name Type Description output/ground
std_msgs::msg::Float32MultiArray
estimated ground parameters. it contains x, y, z, normal_x, normal_y, normal_z. output/ground_markers
visualization_msgs::msg::Marker
visualization of estimated ground plane output/ground_status
std_msgs::msg::String
status log of ground plane estimation output/height
std_msgs::msg::Float32
altitude output/near_cloud
sensor_msgs::msg::PointCloud2
point cloud extracted from lanelet2 and used for ground tilt estimation"},{"location":"localization/yabloc/yabloc_common/#parameters","title":"Parameters","text":"Name Type Description Default Range force_zero_tilt boolean if true, the tilt is always determined to be horizontal False N/A K float the number of neighbors for ground search on a map 50 N/A R float radius for ground search on a map [m] 10 N/A"},{"location":"localization/yabloc/yabloc_common/#ll2_decomposer","title":"ll2_decomposer","text":""},{"location":"localization/yabloc/yabloc_common/#purpose_1","title":"Purpose","text":"This node extracts the elements related to the road surface markings and yabloc from lanelet2.
"},{"location":"localization/yabloc/yabloc_common/#input-outputs_1","title":"Input / Outputs","text":""},{"location":"localization/yabloc/yabloc_common/#input_1","title":"Input","text":"Name Type Descriptioninput/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map"},{"location":"localization/yabloc/yabloc_common/#output_1","title":"Output","text":"Name Type Description output/ll2_bounding_box
sensor_msgs::msg::PointCloud2
bounding boxes extracted from lanelet2 output/ll2_road_marking
sensor_msgs::msg::PointCloud2
road surface markings extracted from lanelet2 output/ll2_sign_board
sensor_msgs::msg::PointCloud2
traffic sign boards extracted from lanelet2 output/sign_board_marker
visualization_msgs::msg::MarkerArray
visualized traffic sign boards"},{"location":"localization/yabloc/yabloc_common/#parameters_1","title":"Parameters","text":"Name Type Description Default Range road_marking_labels array line string types that indicating road surface markings in lanelet2 ['cross_walk', 'zebra_marking', 'line_thin', 'line_thick', 'pedestrian_marking', 'stop_line', 'road_border'] N/A sign_board_labels array line string types that indicating traffic sign boards in lanelet2 ['sign-board'] N/A bounding_box_labels array line string types that indicating not mapped areas in lanelet2 ['none'] N/A"},{"location":"localization/yabloc/yabloc_image_processing/","title":"yabloc_image_processing","text":""},{"location":"localization/yabloc/yabloc_image_processing/#yabloc_image_processing","title":"yabloc_image_processing","text":"This package contains some executable nodes related to image processing.
This node extract all line segments from gray scale image.
"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input","title":"Input","text":"Name Type Descriptioninput/image_raw
sensor_msgs::msg::Image
undistorted image"},{"location":"localization/yabloc/yabloc_image_processing/#output","title":"Output","text":"Name Type Description output/image_with_line_segments
sensor_msgs::msg::Image
image with line segments highlighted output/line_segments_cloud
sensor_msgs::msg::PointCloud2
detected line segments as a point cloud. Each point contains x, y, z, normal_x, normal_y, and normal_z; z and normal_z are always empty."},{"location":"localization/yabloc/yabloc_image_processing/#graph_segmentation","title":"graph_segmentation","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_1","title":"Purpose","text":"
"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_1","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_1","title":"Input","text":"Name Type Descriptioninput/image_raw
sensor_msgs::msg::Image
undistorted image"},{"location":"localization/yabloc/yabloc_image_processing/#output_1","title":"Output","text":"Name Type Description output/mask_image
sensor_msgs::msg::Image
image with masked segments determined as road surface area output/segmented_image
sensor_msgs::msg::Image
segmented image for visualization"},{"location":"localization/yabloc/yabloc_image_processing/#parameters","title":"Parameters","text":"Name Type Description Default Range target_height_ratio float height on the image to retrieve the candidate road surface 0.85 N/A target_candidate_box_width float size of the square area to search for candidate road surfaces 15 N/A pickup_additional_graph_segment boolean if this is true, additional regions of similar color are retrieved 1 N/A similarity_score_threshold float threshold for picking up additional areas 0.8 N/A sigma float parameter for cv::ximgproc::segmentation::GraphSegmentation 0.5 N/A k float parameter for cv::ximgproc::segmentation::GraphSegmentation 300 N/A min_size float parameter for cv::ximgproc::segmentation::GraphSegmentation 100 N/A"},{"location":"localization/yabloc/yabloc_image_processing/#segment_filter","title":"segment_filter","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_2","title":"Purpose","text":"This is a node that integrates the results of graph_segment and lsd to extract road surface markings.
"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_2","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_2","title":"Input","text":"Name Type Descriptioninput/line_segments_cloud
sensor_msgs::msg::PointCloud2
detected line segment input/mask_image
sensor_msgs::msg::Image
image with masked segments determined as road surface area input/camera_info
sensor_msgs::msg::CameraInfo
undistorted camera info"},{"location":"localization/yabloc/yabloc_image_processing/#output_2","title":"Output","text":"Name Type Description output/line_segments_cloud
sensor_msgs::msg::PointCloud2
filtered line segments for visualization output/projected_image
sensor_msgs::msg::Image
projected filtered line segments for visualization output/projected_line_segments_cloud
sensor_msgs::msg::PointCloud2
projected filtered line segments"},{"location":"localization/yabloc/yabloc_image_processing/#parameters_1","title":"Parameters","text":"Name Type Description Default Range min_segment_length float min length threshold (if it is negative, it is unlimited) 1.5 N/A max_segment_distance float max distance threshold (if it is negative, it is unlimited) 30 N/A max_lateral_distance float max lateral distance threshold (if it is negative, it is unlimited) 10 N/A publish_image_with_segment_for_debug boolean toggle whether to publish the filtered line segment for debug 1 N/A max_range float range of debug projection visualization 20 N/A image_size float image size of debug projection visualization 800 N/A"},{"location":"localization/yabloc/yabloc_image_processing/#undistort","title":"undistort","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_3","title":"Purpose","text":"This node performs image resizing and undistortion at the same time.
"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_3","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_3","title":"Input","text":"Name Type Descriptioninput/camera_info
sensor_msgs::msg::CameraInfo
camera info input/image_raw
sensor_msgs::msg::Image
raw camera image input/image_raw/compressed
sensor_msgs::msg::CompressedImage
compressed camera image This node subscribes to both compressed image and raw image topics. If raw image is subscribed to even once, compressed image will no longer be subscribed to. This is to avoid redundant decompression within Autoware.
"},{"location":"localization/yabloc/yabloc_image_processing/#output_3","title":"Output","text":"Name Type Descriptionoutput/camera_info
sensor_msgs::msg::CameraInfo
resized camera info output/image_raw
sensor_msgs::msg::CompressedImage
undistorted and resized image"},{"location":"localization/yabloc/yabloc_image_processing/#parameters_2","title":"Parameters","text":"Name Type Description Default Range use_sensor_qos boolean whether to use sensor qos or not True N/A width float resized image width size 800 N/A override_frame_id string value for overriding the camera's frame_id. if blank, frame_id of static_tf is not overwritten N/A"},{"location":"localization/yabloc/yabloc_image_processing/#about-tf_static-overriding","title":"about tf_static overriding","text":"click to open Some nodes requires `/tf_static` from `/base_link` to the frame_id of `/sensing/camera/traffic_light/image_raw/compressed` (e.g. `/traffic_light_left_camera/camera_optical_link`). You can verify that the tf_static is correct with the following command. ros2 run tf2_ros tf2_echo base_link traffic_light_left_camera/camera_optical_link\n
If the wrong `/tf_static` are broadcasted due to using a prototype vehicle, not having accurate calibration data, or some other unavoidable reason, it is useful to give the frame_id in `override_camera_frame_id`. If you give it a non-empty string, `/image_processing/undistort_node` will rewrite the frame_id in camera_info. For example, you can give a different tf_static as follows. ros2 launch yabloc_launch sample_launch.xml override_camera_frame_id:=fake_camera_optical_link\nros2 run tf2_ros static_transform_publisher \\\n--frame-id base_link \\\n--child-frame-id fake_camera_optical_link \\\n--roll -1.57 \\\n--yaw -1.570\n
"},{"location":"localization/yabloc/yabloc_image_processing/#lanelet2_overlay","title":"lanelet2_overlay","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_4","title":"Purpose","text":"This node overlays lanelet2 on the camera image based on the estimated self-position.
"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_4","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_4","title":"Input","text":"Name Type Descriptioninput/pose
geometry_msgs::msg::PoseStamped
estimated self pose input/projected_line_segments_cloud
sensor_msgs::msg::PointCloud2
projected line segments including non-road markings input/camera_info
sensor_msgs::msg::CameraInfo
undistorted camera info input/image_raw
sensor_msgs::msg::Image
undistorted camera image input/ground
std_msgs::msg::Float32MultiArray
ground tilt input/ll2_road_marking
sensor_msgs::msg::PointCloud2
lanelet2 elements regarding road surface markings input/ll2_sign_board
sensor_msgs::msg::PointCloud2
lanelet2 elements regarding traffic sign boards"},{"location":"localization/yabloc/yabloc_image_processing/#output_4","title":"Output","text":"Name Type Description output/lanelet2_overlay_image
sensor_msgs::msg::Image
lanelet2 overlaid image output/projected_marker
visualization_msgs::msg::Marker
3d projected line segments including non-road markings"},{"location":"localization/yabloc/yabloc_image_processing/#line_segments_overlay","title":"line_segments_overlay","text":""},{"location":"localization/yabloc/yabloc_image_processing/#purpose_5","title":"Purpose","text":"This node visualize classified line segments on the camera image
"},{"location":"localization/yabloc/yabloc_image_processing/#inputs-outputs_5","title":"Inputs / Outputs","text":""},{"location":"localization/yabloc/yabloc_image_processing/#input_5","title":"Input","text":"Name Type Descriptioninput/line_segments_cloud
sensor_msgs::msg::PointCloud2
classified line segments input/image_raw
sensor_msgs::msg::Image
undistorted camera image"},{"location":"localization/yabloc/yabloc_image_processing/#output_5","title":"Output","text":"Name Type Description output/image_with_colored_line_segments
sensor_msgs::msg::Image
image with highlighted line segments"},{"location":"localization/yabloc/yabloc_monitor/","title":"yabloc_monitor","text":""},{"location":"localization/yabloc/yabloc_monitor/#yabloc_monitor","title":"yabloc_monitor","text":"YabLoc monitor is a node that monitors the status of the YabLoc localization system. It is a wrapper node that monitors the status of the YabLoc localization system and publishes the status as diagnostics.
"},{"location":"localization/yabloc/yabloc_monitor/#feature","title":"Feature","text":""},{"location":"localization/yabloc/yabloc_monitor/#availability","title":"Availability","text":"The node monitors the final output pose of YabLoc to verify the availability of YabLoc.
"},{"location":"localization/yabloc/yabloc_monitor/#others","title":"Others","text":"To be added,
"},{"location":"localization/yabloc/yabloc_monitor/#interfaces","title":"Interfaces","text":""},{"location":"localization/yabloc/yabloc_monitor/#input","title":"Input","text":"Name Type Description~/input/yabloc_pose
geometry_msgs/PoseStamped
The final output pose of YabLoc"},{"location":"localization/yabloc/yabloc_monitor/#output","title":"Output","text":"Name Type Description /diagnostics
diagnostic_msgs/DiagnosticArray
Diagnostics outputs"},{"location":"localization/yabloc/yabloc_monitor/#parameters","title":"Parameters","text":"Name Type Description Default Range availability/timestamp_tolerance float tolerable time difference between current time and latest estimated pose 1 N/A"},{"location":"localization/yabloc/yabloc_particle_filter/","title":"yabLoc_particle_filter","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#yabloc_particle_filter","title":"yabLoc_particle_filter","text":"This package contains some executable nodes related to particle filter.
input/initialpose
geometry_msgs::msg::PoseWithCovarianceStamped
to specify the initial position of particles input/twist_with_covariance
geometry_msgs::msg::TwistWithCovarianceStamped
linear velocity and angular velocity of prediction update input/height
std_msgs::msg::Float32
ground height input/weighted_particles
yabloc_particle_filter::msg::ParticleArray
particles weighted by corrector nodes"},{"location":"localization/yabloc/yabloc_particle_filter/#output","title":"Output","text":"Name Type Description output/pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
particle centroid with covariance output/pose
geometry_msgs::msg::PoseStamped
particle centroid with covariance output/predicted_particles
yabloc_particle_filter::msg::ParticleArray
particles weighted by predictor nodes debug/init_marker
visualization_msgs::msg::Marker
debug visualization of initial position debug/particles_marker_array
visualization_msgs::msg::MarkerArray
particles visualization. published if visualize
is true"},{"location":"localization/yabloc/yabloc_particle_filter/#parameters","title":"Parameters","text":"Name Type Description Default Range visualize boolean whether particles are also published in visualization_msgs or not True N/A static_linear_covariance float overriding covariance of /twist_with_covariance
0.04 N/A static_angular_covariance float overriding covariance of /twist_with_covariance
0.006 N/A resampling_interval_seconds float the interval of particle resampling 1.0 N/A num_of_particles float the number of particles 500 N/A prediction_rate float frequency of forecast updates, in Hz 50.0 N/A cov_xx_yy array the covariance of initial pose [2.0, 0.25] N/A"},{"location":"localization/yabloc/yabloc_particle_filter/#services","title":"Services","text":"Name Type Description yabloc_trigger_srv
std_srvs::srv::SetBool
activation and deactivation of yabloc estimation"},{"location":"localization/yabloc/yabloc_particle_filter/#gnss_particle_corrector","title":"gnss_particle_corrector","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#purpose_1","title":"Purpose","text":"ublox_msgs::msg::NavPVT
and geometry_msgs::msg::PoseWithCovarianceStamped
.input/height
std_msgs::msg::Float32
ground height input/predicted_particles
yabloc_particle_filter::msg::ParticleArray
predicted particles input/pose_with_covariance
geometry_msgs::msg::PoseWithCovarianceStamped
gnss measurement. used if use_ublox_msg
is false input/navpvt
ublox_msgs::msg::NavPVT
gnss measurement. used if use_ublox_msg
is true"},{"location":"localization/yabloc/yabloc_particle_filter/#output_1","title":"Output","text":"Name Type Description output/weighted_particles
yabloc_particle_filter::msg::ParticleArray
weighted particles debug/gnss_range_marker
visualization_msgs::msg::MarkerArray
gnss weight distribution debug/particles_marker_array
visualization_msgs::msg::MarkerArray
particles visualization. published if visualize
is true"},{"location":"localization/yabloc/yabloc_particle_filter/#parameters_1","title":"Parameters","text":"Name Type Description Default Range acceptable_max_delay float how long to hold the predicted particles 1 N/A visualize boolean whether publish particles as marker_array or not 0 N/A mahalanobis_distance_threshold float if the Mahalanobis distance to the GNSS for particle exceeds this, the correction skips. 30 N/A for_fixed/max_weight float gnss weight distribution used when observation is fixed 5 N/A for_fixed/flat_radius float gnss weight distribution used when observation is fixed 0.5 N/A for_fixed/max_radius float gnss weight distribution used when observation is fixed 10 N/A for_fixed/min_weight float gnss weight distribution used when observation is fixed 0.5 N/A for_not_fixed/max_weight float gnss weight distribution used when observation is not fixed 1 N/A for_not_fixed/flat_radius float gnss weight distribution used when observation is not fixed 5 N/A for_not_fixed/max_radius float gnss weight distribution used when observation is not fixed 20 N/A for_not_fixed/min_weight float gnss weight distribution used when observation is not fixed 0.5 N/A"},{"location":"localization/yabloc/yabloc_particle_filter/#camera_particle_corrector","title":"camera_particle_corrector","text":""},{"location":"localization/yabloc/yabloc_particle_filter/#purpose_2","title":"Purpose","text":"input/predicted_particles
yabloc_particle_filter::msg::ParticleArray
predicted particles input/ll2_bounding_box
sensor_msgs::msg::PointCloud2
road surface markings converted to line segments input/ll2_road_marking
sensor_msgs::msg::PointCloud2
road surface markings converted to line segments input/projected_line_segments_cloud
sensor_msgs::msg::PointCloud2
projected line segments input/pose
geometry_msgs::msg::PoseStamped
reference to retrieve the area map around the self location"},{"location":"localization/yabloc/yabloc_particle_filter/#output_2","title":"Output","text":"Name Type Description output/weighted_particles
yabloc_particle_filter::msg::ParticleArray
weighted particles debug/cost_map_image
sensor_msgs::msg::Image
cost map created from lanelet2 debug/cost_map_range
visualization_msgs::msg::MarkerArray
cost map boundary debug/match_image
sensor_msgs::msg::Image
projected line segments image debug/scored_cloud
sensor_msgs::msg::PointCloud2
weighted 3d line segments debug/scored_post_cloud
sensor_msgs::msg::PointCloud2
weighted 3d line segments which are iffy debug/state_string
std_msgs::msg::String
string describing the node state debug/particles_marker_array
visualization_msgs::msg::MarkerArray
particles visualization. published if visualize
is true"},{"location":"localization/yabloc/yabloc_particle_filter/#parameters_2","title":"Parameters","text":"Name Type Description Default Range acceptable_max_delay float how long to hold the predicted particles 1 N/A visualize boolean whether publish particles as marker_array or not 0 N/A image_size float image size of debug/cost_map_image 800 N/A max_range float width of hierarchical cost map 40 N/A gamma float gamma value of the intensity gradient of the cost map 5 N/A min_prob float minimum particle weight the corrector node gives 0.1 N/A far_weight_gain float exp(-far_weight_gain_ * squared_distance_from_camera)
is weight gain. if this is large, the nearby road markings will be more important 0.001 N/A enabled_at_first boolean if it is false, this node is not activated at first. you can activate by service call 1 N/A"},{"location":"localization/yabloc/yabloc_particle_filter/#services_1","title":"Services","text":"Name Type Description switch_srv
std_srvs::srv::SetBool
activation and deactivation of correction"},{"location":"localization/yabloc/yabloc_pose_initializer/","title":"yabloc_pose_initializer","text":""},{"location":"localization/yabloc/yabloc_pose_initializer/#yabloc_pose_initializer","title":"yabloc_pose_initializer","text":"This package contains a node related to initial pose estimation.
This package requires the pre-trained semantic segmentation model for runtime. This model is usually downloaded by ansible
during env preparation phase of the installation. It is also possible to download it manually. Even if the model is not downloaded, initialization will still complete, but the accuracy may be compromised.
To download and extract the model manually:
$ mkdir -p ~/autoware_data/yabloc_pose_initializer/\n$ wget -P ~/autoware_data/yabloc_pose_initializer/ \\\nhttps://s3.ap-northeast-2.wasabisys.com/pinto-model-zoo/136_road-segmentation-adas-0001/resources.tar.gz\n$ tar xzf ~/autoware_data/yabloc_pose_initializer/resources.tar.gz -C ~/autoware_data/yabloc_pose_initializer/\n
"},{"location":"localization/yabloc/yabloc_pose_initializer/#note","title":"Note","text":"This package makes use of external code. The trained files are provided by apollo. The trained files are automatically downloaded during env preparation.
Original model URL
https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/road-segmentation-adas-0001
Open Model Zoo is licensed under Apache License Version 2.0.
Converted model URL
https://github.com/PINTO0309/PINTO_model_zoo/tree/main/136_road-segmentation-adas-0001
model conversion scripts are released under the MIT license
"},{"location":"localization/yabloc/yabloc_pose_initializer/#special-thanks","title":"Special thanks","text":"input/camera_info
sensor_msgs::msg::CameraInfo
undistorted camera info input/image_raw
sensor_msgs::msg::Image
undistorted camera image input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map"},{"location":"localization/yabloc/yabloc_pose_initializer/#output","title":"Output","text":"Name Type Description output/candidates
visualization_msgs::msg::MarkerArray
initial pose candidates"},{"location":"localization/yabloc/yabloc_pose_initializer/#parameters","title":"Parameters","text":"Name Type Description Default Range angle_resolution float how many divisions of 1 sigma angle range 30 N/A"},{"location":"localization/yabloc/yabloc_pose_initializer/#services","title":"Services","text":"Name Type Description yabloc_align_srv
tier4_localization_msgs::srv::PoseWithCovarianceStamped
initial pose estimation request"},{"location":"map/map_height_fitter/","title":"map_height_fitter","text":""},{"location":"map/map_height_fitter/#map_height_fitter","title":"map_height_fitter","text":"This library fits the given point with the ground of the point cloud map. The map loading operation is switched by the parameter enable_partial_load
of the node specified by map_loader_name
. The node using this library must use multi thread executor.
This package provides the features of loading various maps.
"},{"location":"map/map_loader/#pointcloud_map_loader","title":"pointcloud_map_loader","text":""},{"location":"map/map_loader/#feature","title":"Feature","text":"pointcloud_map_loader
provides pointcloud maps to the other Autoware nodes in various configurations. Currently, it supports the following two types:
NOTE: We strongly recommend to use divided maps when using large pointcloud map to enable the latter two features (partial and differential load). Please go through the prerequisites section for more details, and follow the instruction for dividing the map and preparing the metadata.
"},{"location":"map/map_loader/#prerequisites","title":"Prerequisites","text":""},{"location":"map/map_loader/#prerequisites-on-pointcloud-map-files","title":"Prerequisites on pointcloud map file(s)","text":"You may provide either a single .pcd file or multiple .pcd files. If you are using multiple PCD data, it MUST obey the following rules:
map_projection_loader
, in order to be consistent with the lanelet2 map and other packages that converts between local and geodetic coordinates. For more information, please refer to the readme of map_projection_loader
.The metadata should look like this:
x_resolution: 20.0\ny_resolution: 20.0\nA.pcd: [1200, 2500] # -> 1200 < x < 1220, 2500 < y < 2520\nB.pcd: [1220, 2500] # -> 1220 < x < 1240, 2500 < y < 2520\nC.pcd: [1200, 2520] # -> 1200 < x < 1220, 2520 < y < 2540\nD.pcd: [1240, 2520] # -> 1240 < x < 1260, 2520 < y < 2540\n
where,
x_resolution
and y_resolution
A.pcd
, B.pcd
, etc, are the names of PCD files.[1200, 2500]
are the values indicate that for this PCD file, x coordinates are between 1200 and 1220 (x_resolution
+ x_coordinate
) and y coordinates are between 2500 and 2520 (y_resolution
+ y_coordinate
).You may use pointcloud_divider from MAP IV for dividing pointcloud map as well as generating the compatible metadata.yaml.
"},{"location":"map/map_loader/#directory-structure-of-these-files","title":"Directory structure of these files","text":"If you only have one pointcloud map, Autoware will assume the following directory structure by default.
sample-map-rosbag\n\u251c\u2500\u2500 lanelet2_map.osm\n\u251c\u2500\u2500 pointcloud_map.pcd\n
If you have multiple rosbags, an example directory structure would be as follows. Note that you need to have a metadata when you have multiple pointcloud map files.
sample-map-rosbag\n\u251c\u2500\u2500 lanelet2_map.osm\n\u251c\u2500\u2500 pointcloud_map.pcd\n\u2502 \u251c\u2500\u2500 A.pcd\n\u2502 \u251c\u2500\u2500 B.pcd\n\u2502 \u251c\u2500\u2500 C.pcd\n\u2502 \u2514\u2500\u2500 ...\n\u251c\u2500\u2500 map_projector_info.yaml\n\u2514\u2500\u2500 pointcloud_map_metadata.yaml\n
"},{"location":"map/map_loader/#specific-features","title":"Specific features","text":""},{"location":"map/map_loader/#publish-raw-pointcloud-map-ros-2-topic","title":"Publish raw pointcloud map (ROS 2 topic)","text":"The node publishes the raw pointcloud map loaded from the .pcd
file(s).
The node publishes the downsampled pointcloud map loaded from the .pcd
file(s). You can specify the downsample resolution by changing the leaf_size
parameter.
The node publishes the pointcloud metadata attached with an ID. Metadata is loaded from the .yaml
file. Please see the description of PointCloudMapMetaData.msg
for details.
Here, we assume that the pointcloud maps are divided into grids.
Given a query from a client node, the node sends a set of pointcloud maps that overlaps with the queried area. Please see the description of GetPartialPointCloudMap.srv
for details.
Here, we assume that the pointcloud maps are divided into grids.
Given a query and a set of map IDs, the node sends a set of pointcloud maps that overlap with the queried area and are not included in the given set of map IDs. Please see the description of GetDifferentialPointCloudMap.srv
for details.
Here, we assume that the pointcloud maps are divided into grids.
Given a query with map IDs from a client node, the node sends the set of pointcloud maps (each attached with a unique ID) specified by the query. Please see the description of GetSelectedPointCloudMap.srv
for details.
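To make the three query types above concrete, here is a small hedged sketch (helper names are hypothetical; each grid cell is assumed to carry a unique ID and an axis-aligned bounding box):

def overlaps(a, b):
    # Boxes given as (x_min, y_min, x_max, y_max).
    return a[0] < b[2] and a[2] > b[0] and a[1] < b[3] and a[3] > b[1]

def partial_load(grids, query_box):
    # Partial load: every grid cell overlapping the queried area.
    return [g for g in grids if overlaps(g["box"], query_box)]

def differential_load(grids, query_box, cached_ids):
    # Differential load: only overlapping cells the client does not hold yet.
    return [g for g in partial_load(grids, query_box) if g["id"] not in cached_ids]

def selected_load(grids, requested_ids):
    # Selected load: exactly the cells whose IDs were requested.
    return [g for g in grids if g["id"] in requested_ids]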
output/pointcloud_map
(sensor_msgs/msg/PointCloud2) : Raw pointcloud mapoutput/pointcloud_map_metadata
(autoware_map_msgs/msg/PointCloudMapMetaData) : Metadata of pointcloud mapoutput/debug/downsampled_pointcloud_map
(sensor_msgs/msg/PointCloud2) : Downsampled pointcloud mapservice/get_partial_pcd_map
(autoware_map_msgs/srv/GetPartialPointCloudMap) : Partial pointcloud mapservice/get_differential_pcd_map
(autoware_map_msgs/srv/GetDifferentialPointCloudMap) : Differential pointcloud mapservice/get_selected_pcd_map
(autoware_map_msgs/srv/GetSelectedPointCloudMap) : Selected pointcloud maplanelet2_map_loader loads a Lanelet2 file and publishes the map data as an autoware_auto_mapping_msgs/HADMapBin message. The node projects lat/lon coordinates into the arbitrary coordinates defined in /map/map_projector_info
from map_projection_loader
. Please see tier4_autoware_msgs/msg/MapProjectorInfo.msg for supported projector types.
ros2 run map_loader lanelet2_map_loader --ros-args -p lanelet2_map_path:=path/to/map.osm
lanelet2_map_visualization visualizes autoware_auto_mapping_msgs/HADMapBin messages into visualization_msgs/MarkerArray.
"},{"location":"map/map_loader/#how-to-run_1","title":"How to Run","text":"ros2 run map_loader lanelet2_map_visualization
map_projection_loader
is responsible for publishing map_projector_info
that defines in which coordinate system Autoware is operating. This information is necessary especially when you want to convert from global (geoid) to local coordinates or the other way around.
If map_projector_info_path DOES exist, this node loads it and publishes the map projection information accordingly.
If map_projector_info_path does NOT exist, the node assumes that you are using the MGRS projection type, and loads the lanelet2 map instead to extract the MGRS grid.
You need to provide a YAML file, namely map_projector_info.yaml
under the map_path
directory. For pointcloud_map_metadata.yaml
, please refer to the Readme of map_loader
.
sample-map-rosbag
├── lanelet2_map.osm
├── pointcloud_map.pcd
├── map_projector_info.yaml
└── pointcloud_map_metadata.yaml
"},{"location":"map/map_projection_loader/#using-local-coordinate","title":"Using local coordinate","text":"# map_projector_info.yaml\nprojector_type: local\n
"},{"location":"map/map_projection_loader/#limitation","title":"Limitation","text":"The functionality that requires latitude and longitude will become unavailable.
The currently identified unavailable functionalities are:
If you want to use MGRS, please specify the MGRS grid as well.
# map_projector_info.yaml
projector_type: MGRS
vertical_datum: WGS84
mgrs_grid: 54SUE
"},{"location":"map/map_projection_loader/#limitation_1","title":"Limitation","text":"It cannot be used with maps that span across two or more MGRS grids. Please use it only when it falls within the scope of a single MGRS grid.
"},{"location":"map/map_projection_loader/#using-localcartesianutm","title":"Using LocalCartesianUTM","text":"If you want to use local cartesian UTM, please specify the map origin as well.
# map_projector_info.yaml
projector_type: LocalCartesianUTM
vertical_datum: WGS84
map_origin:
  latitude: 35.6762 # [deg]
  longitude: 139.6503 # [deg]
  altitude: 0.0 # [m]
"},{"location":"map/map_projection_loader/#using-transversemercator","title":"Using TransverseMercator","text":"If you want to use Transverse Mercator projection, please specify the map origin as well.
# map_projector_info.yaml
projector_type: TransverseMercator
vertical_datum: WGS84
map_origin:
  latitude: 35.6762 # [deg]
  longitude: 139.6503 # [deg]
  altitude: 0.0 # [m]
"},{"location":"map/map_projection_loader/#published-topics","title":"Published Topics","text":"map_projector_info_path
does not exist)"},{"location":"map/map_tf_generator/Readme/","title":"map_tf_generator","text":""},{"location":"map/map_tf_generator/Readme/#map_tf_generator","title":"map_tf_generator","text":""},{"location":"map/map_tf_generator/Readme/#purpose","title":"Purpose","text":"The nodes in this package broadcast the viewer
frame for visualization of the map in RViz.
Note that no module requires the viewer frame; it is used only for visualization.
The following are the supported methods to calculate the position of the viewer
frame:
pcd_map_tf_generator_node
outputs the geometric center of all points in the PCD.vector_map_tf_generator_node
outputs the geometric center of all points in the point layer./map/pointcloud_map
sensor_msgs::msg::PointCloud2
Subscribe pointcloud map to calculate position of viewer
frames"},{"location":"map/map_tf_generator/Readme/#vector_map_tf_generator","title":"vector_map_tf_generator","text":"Name Type Description /map/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
Subscribe vector map to calculate position of viewer
frames"},{"location":"map/map_tf_generator/Readme/#output","title":"Output","text":"Name Type Description /tf_static
tf2_msgs/msg/TFMessage
Broadcast viewer
frames"},{"location":"map/map_tf_generator/Readme/#parameters","title":"Parameters","text":""},{"location":"map/map_tf_generator/Readme/#node-parameters","title":"Node Parameters","text":"None
"},{"location":"map/map_tf_generator/Readme/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanationviewer_frame
string viewer Name of viewer
frame map_frame
string map The parent frame name of viewer frame"},{"location":"map/map_tf_generator/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"perception/bytetrack/","title":"bytetrack","text":""},{"location":"perception/bytetrack/#bytetrack","title":"bytetrack","text":""},{"location":"perception/bytetrack/#purpose","title":"Purpose","text":"The core algorithm, named ByteTrack
, mainly aims to perform multi-object tracking. Because the algorithm associates almost every detection box including ones with low detection scores, the number of false negatives is expected to decrease by using it.
demo video
"},{"location":"perception/bytetrack/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/bytetrack/#cite","title":"Cite","text":"The paper just says that the 2d tracking algorithm is a simple Kalman filter. Original codes use the top-left-corner
and aspect ratio
and size
as the state vector.
This is sometimes unstable because the aspectratio can be changed by the occlusion. So, we use the top-left
and size
as the state vector.
Kalman filter settings can be controlled by the parameters in config/bytetrack_node.param.yaml
.
in/rect
tier4_perception_msgs/DetectedObjectsWithFeature
The detected objects with 2D bounding boxes"},{"location":"perception/bytetrack/#output","title":"Output","text":"Name Type Description out/objects
tier4_perception_msgs/DetectedObjectsWithFeature
The detected objects with 2D bounding boxes out/objects/debug/uuid
tier4_perception_msgs/DynamicObjectArray
The universally unique identifiers (UUID) for each object"},{"location":"perception/bytetrack/#bytetrack_visualizer","title":"bytetrack_visualizer","text":""},{"location":"perception/bytetrack/#input_1","title":"Input","text":"Name Type Description in/image
sensor_msgs/Image
or sensor_msgs/CompressedImage
The input image on which object detection is performed in/rect
tier4_perception_msgs/DetectedObjectsWithFeature
The detected objects with 2D bounding boxes in/uuid
tier4_perception_msgs/DynamicObjectArray
The universally unique identifiers (UUID) for each object"},{"location":"perception/bytetrack/#output_1","title":"Output","text":"Name Type Description out/image
sensor_msgs/Image
The image that detection bounding boxes and their UUIDs are drawn"},{"location":"perception/bytetrack/#parameters","title":"Parameters","text":""},{"location":"perception/bytetrack/#bytetrack_node_1","title":"bytetrack_node","text":"Name Type Default Value Description track_buffer_length
int 30 The frame count that a tracklet is considered to be lost"},{"location":"perception/bytetrack/#bytetrack_visualizer_1","title":"bytetrack_visualizer","text":"Name Type Default Value Description use_raw
bool false The flag for the node to switch sensor_msgs/Image
or sensor_msgs/CompressedImage
as input"},{"location":"perception/bytetrack/#assumptionsknown-limits","title":"Assumptions/Known limits","text":""},{"location":"perception/bytetrack/#reference-repositories","title":"Reference repositories","text":"The codes under the lib
directory are copied from the original codes and modified. The original codes belong to the MIT license stated as follows, while this ported packages are provided with Apache License 2.0:
MIT License
Copyright (c) 2021 Yifu Zhang
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"},{"location":"perception/cluster_merger/","title":"cluster merger","text":""},{"location":"perception/cluster_merger/#cluster-merger","title":"cluster merger","text":""},{"location":"perception/cluster_merger/#purpose","title":"Purpose","text":"cluster_merger is a package for merging pointcloud clusters as detected objects with feature type.
"},{"location":"perception/cluster_merger/#inner-working-algorithms","title":"Inner-working / Algorithms","text":"The clusters of merged topics are simply concatenated from clusters of input topics.
"},{"location":"perception/cluster_merger/#input-output","title":"Input / Output","text":""},{"location":"perception/cluster_merger/#input","title":"Input","text":"Name Type Descriptioninput/cluster0
tier4_perception_msgs::msg::DetectedObjectsWithFeature
pointcloud clusters input/cluster1
tier4_perception_msgs::msg::DetectedObjectsWithFeature
pointcloud clusters"},{"location":"perception/cluster_merger/#output","title":"Output","text":"Name Type Description output/clusters
autoware_auto_perception_msgs::msg::DetectedObjects
merged clusters"},{"location":"perception/cluster_merger/#parameters","title":"Parameters","text":"Name Type Description Default value output_frame_id
string The header frame_id of output topic. base_link"},{"location":"perception/cluster_merger/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/cluster_merger/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/cluster_merger/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/cluster_merger/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/cluster_merger/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/compare_map_segmentation/","title":"compare_map_segmentation","text":""},{"location":"perception/compare_map_segmentation/#compare_map_segmentation","title":"compare_map_segmentation","text":""},{"location":"perception/compare_map_segmentation/#purpose","title":"Purpose","text":"The compare_map_segmentation
is a node that filters the ground points from the input pointcloud by using map info (e.g. pcd, elevation map or split map pointcloud from map_loader interface).
Compare the z of the input points with the value of elevation_map. The height difference is calculated by the binary integration of neighboring cells. Remove points whose height difference is below the height_diff_thresh
.
"},{"location":"perception/compare_map_segmentation/#distance-based-compare-map-filter","title":"Distance Based Compare Map Filter","text":"
This filter compares the input pointcloud with the map pointcloud using the nearestKSearch
function of kdtree
and removes points that are close to the map point cloud. The map pointcloud can be loaded statically at once at the beginning or dynamically as the vehicle moves.
The filter loads the map point cloud, which can be loaded statically at the beginning or dynamically during vehicle movement, and creates a voxel grid of the map point cloud. The filter uses the getCentroidIndexAt function in combination with the getGridCoordinates function from the VoxelGrid class to find input points that are inside the voxel grid and removes them.
"},{"location":"perception/compare_map_segmentation/#voxel-based-compare-map-filter","title":"Voxel Based Compare Map Filter","text":"The filter loads the map pointcloud (static loading whole map at once at beginning or dynamic loading during vehicle moving) and utilizes VoxelGrid to downsample map pointcloud.
For each point of input pointcloud, the filter use getCentroidIndexAt
combine with getGridCoordinates
function from VoxelGrid class to check if the downsampled map point existing surrounding input points. Remove the input point which has downsampled map point in voxels containing or being close to the point.
This filter is a combination of the distance_based_compare_map_filter and voxel_based_approximate_compare_map_filter. The filter loads the map point cloud, which can be loaded statically at the beginning or dynamically during vehicle movement, and creates a voxel grid and a k-d tree of the map point cloud. The filter uses the getCentroidIndexAt function in combination with the getGridCoordinates function from the VoxelGrid class to find input points that are inside the voxel grid and removes them. For points that do not belong to any voxel grid, they are compared again with the map point cloud using the radiusSearch function of the k-d tree and are removed if they are close enough to the map.
"},{"location":"perception/compare_map_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/compare_map_segmentation/#compare-elevation-map-filter_1","title":"Compare Elevation Map Filter","text":""},{"location":"perception/compare_map_segmentation/#input","title":"Input","text":"Name Type Description~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/elevation_map
grid_map::msg::GridMap
elevation map"},{"location":"perception/compare_map_segmentation/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"perception/compare_map_segmentation/#parameters","title":"Parameters","text":"Name Type Description Default value map_layer_name
string elevation map layer name elevation map_frame
float frame_id of the map that is temporarily used before elevation_map is subscribed map height_diff_thresh
float Remove points whose height difference is below this value [m] 0.15"},{"location":"perception/compare_map_segmentation/#other-filters","title":"Other Filters","text":""},{"location":"perception/compare_map_segmentation/#input_1","title":"Input","text":"Name Type Description ~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/map
sensor_msgs::msg::PointCloud2
map (in case static map loading) /localization/kinematic_state
nav_msgs::msg::Odometry
current ego-vehicle pose (in case dynamic map loading)"},{"location":"perception/compare_map_segmentation/#output_1","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"perception/compare_map_segmentation/#parameters_1","title":"Parameters","text":"Name Type Description Default value use_dynamic_map_loading
bool map loading mode selection, true
for dynamic map loading, false
for static map loading, recommended for no-split map pointcloud true distance_threshold
float Threshold distance to compare input points with map points [m] 0.5 map_update_distance_threshold
float Threshold of vehicle movement distance when map update is necessary (in dynamic map loading) [m] 10.0 map_loader_radius
float Radius of map need to be loaded (in dynamic map loading) [m] 150.0 timer_interval_ms
int Timer interval to check if the map update is necessary (in dynamic map loading) [ms] 100 publish_debug_pcd
bool Enable to publish voxelized updated map in debug/downsampled_map/pointcloud
for debugging. It might cause additional computation cost false downsize_ratio_z_axis
double Positive ratio to reduce voxel_leaf_size and neighbor point distance threshold in z axis 0.5"},{"location":"perception/compare_map_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/compare_map_segmentation/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/compare_map_segmentation/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/compare_map_segmentation/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/compare_map_segmentation/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/crosswalk_traffic_light_estimator/","title":"crosswalk_traffic_light_estimator","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#crosswalk_traffic_light_estimator","title":"crosswalk_traffic_light_estimator","text":""},{"location":"perception/crosswalk_traffic_light_estimator/#purpose","title":"Purpose","text":"crosswalk_traffic_light_estimator
is a module that estimates pedestrian traffic signals from HDMap and detected vehicle traffic signals.
~/input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map ~/input/route
autoware_planning_msgs::msg::LaneletRoute
route ~/input/classified/traffic_signals
tier4_perception_msgs::msg::TrafficSignalArray
classified signals"},{"location":"perception/crosswalk_traffic_light_estimator/#output","title":"Output","text":"Name Type Description ~/output/traffic_signals
tier4_perception_msgs::msg::TrafficSignalArray
output that contains estimated pedestrian traffic signals"},{"location":"perception/crosswalk_traffic_light_estimator/#parameters","title":"Parameters","text":"Name Type Description Default value use_last_detect_color
bool
If this parameter is true
, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is false
, this module uses only the latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates the pedestrian's traffic signal as RED.) true
last_detect_color_hold_time
double
The time threshold to hold for last detect color. 2.0
"},{"location":"perception/crosswalk_traffic_light_estimator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"If traffic between pedestrians and vehicles is controlled by traffic signals, the crosswalk traffic signal maybe RED in order to prevent pedestrian from crossing when the following conditions are satisfied.
"},{"location":"perception/crosswalk_traffic_light_estimator/#situation1","title":"Situation1","text":"The detected_object_feature_remover
is a package to convert topic-type from DetectedObjectWithFeatureArray
to DetectedObjects
.
~/input
tier4_perception_msgs::msg::DetectedObjectWithFeatureArray
detected objects with feature field"},{"location":"perception/detected_object_feature_remover/#output","title":"Output","text":"Name Type Description ~/output
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects"},{"location":"perception/detected_object_feature_remover/#parameters","title":"Parameters","text":"None
"},{"location":"perception/detected_object_feature_remover/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/detected_object_validation/","title":"detected_object_validation","text":""},{"location":"perception/detected_object_validation/#detected_object_validation","title":"detected_object_validation","text":""},{"location":"perception/detected_object_validation/#purpose","title":"Purpose","text":"The purpose of this package is to eliminate obvious false positives of DetectedObjects.
"},{"location":"perception/detected_object_validation/#referencesexternal-links","title":"References/External links","text":"The object_lanelet_filter
is a node that filters detected object by using vector map. The objects only inside of the vector map will be published.
input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map input/object
autoware_auto_perception_msgs::msg::DetectedObjects
input detected objects"},{"location":"perception/detected_object_validation/object-lanelet-filter/#output","title":"Output","text":"Name Type Description output/object
autoware_auto_perception_msgs::msg::DetectedObjects
filtered detected objects"},{"location":"perception/detected_object_validation/object-lanelet-filter/#parameters","title":"Parameters","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN
bool false If true, unknown objects are filtered. filter_target_label.CAR
bool false If true, car objects are filtered. filter_target_label.TRUCK
bool false If true, truck objects are filtered. filter_target_label.BUS
bool false If true, bus objects are filtered. filter_target_label.TRAILER
bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE
bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE
bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN
bool false If true, pedestrian objects are filtered."},{"location":"perception/detected_object_validation/object-lanelet-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The lanelet filter is performed based on the shape polygon and bounding box of the objects.
"},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/detected_object_validation/object-lanelet-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/detected_object_validation/object-position-filter/","title":"object_position_filter","text":""},{"location":"perception/detected_object_validation/object-position-filter/#object_position_filter","title":"object_position_filter","text":""},{"location":"perception/detected_object_validation/object-position-filter/#purpose","title":"Purpose","text":"The object_position_filter
is a node that filters detected object based on x,y values. The objects only inside of the x, y bound will be published.
input/object
autoware_auto_perception_msgs::msg::DetectedObjects
input detected objects"},{"location":"perception/detected_object_validation/object-position-filter/#output","title":"Output","text":"Name Type Description output/object
autoware_auto_perception_msgs::msg::DetectedObjects
filtered detected objects"},{"location":"perception/detected_object_validation/object-position-filter/#parameters","title":"Parameters","text":""},{"location":"perception/detected_object_validation/object-position-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_target_label.UNKNOWN
bool false If true, unknown objects are filtered. filter_target_label.CAR
bool false If true, car objects are filtered. filter_target_label.TRUCK
bool false If true, truck objects are filtered. filter_target_label.BUS
bool false If true, bus objects are filtered. filter_target_label.TRAILER
bool false If true, trailer objects are filtered. filter_target_label.MOTORCYCLE
bool false If true, motorcycle objects are filtered. filter_target_label.BICYCLE
bool false If true, bicycle objects are filtered. filter_target_label.PEDESTRIAN
bool false If true, pedestrian objects are filtered. upper_bound_x
float 100.00 Bound for filtering. Only used if filter_by_xy_position is true lower_bound_x
float 0.00 Bound for filtering. Only used if filter_by_xy_position is true upper_bound_y
float 50.00 Bound for filtering. Only used if filter_by_xy_position is true lower_bound_y
float -50.00 Bound for filtering. Only used if filter_by_xy_position is true"},{"location":"perception/detected_object_validation/object-position-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Filtering is performed based on the center position of the object.
"},{"location":"perception/detected_object_validation/object-position-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/detected_object_validation/object-position-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/detected_object_validation/object-position-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/detected_object_validation/object-position-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/","title":"obstacle pointcloud based validator","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#obstacle-pointcloud-based-validator","title":"obstacle pointcloud based validator","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"If the number of obstacle point groups in the DetectedObjects is small, it is considered a false positive and removed. The obstacle point cloud can be a point cloud after compare map filtering or a ground filtered point cloud.
In the debug image above, the red DetectedObject is the validated object. The blue object is the deleted object.
"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#input","title":"Input","text":"Name Type Description~/input/detected_objects
autoware_auto_perception_msgs::msg::DetectedObjects
DetectedObjects ~/input/obstacle_pointcloud
sensor_msgs::msg::PointCloud2
Obstacle point cloud of dynamic objects"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs::msg::DetectedObjects
validated DetectedObjects"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#parameters","title":"Parameters","text":"Name Type Description using_2d_validator
bool The xy-plane projected (2D) obstacle point clouds will be used for validation min_points_num
int The minimum number of obstacle point clouds in DetectedObjects max_points_num
int The max number of obstacle point clouds in DetectedObjects min_points_and_distance_ratio
float Threshold value of the number of point clouds per object when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink. enable_debugger
bool Whether to create debug topics or not?"},{"location":"perception/detected_object_validation/obstacle-pointcloud-based-validator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Currently, only represented objects as BoundingBox or Cylinder are supported.
"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/","title":"occupancy grid based validator","text":""},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#occupancy-grid-based-validator","title":"occupancy grid based validator","text":""},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Compare the occupancy grid map with the DetectedObject, and if a larger percentage of obstacles are in freespace, delete them.
Basically, it takes an occupancy grid map as input and generates a binary image of freespace or other.
A mask image is generated for each DetectedObject and the average value (percentage) in the mask image is calculated. If the percentage is low, it is deleted.
"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#input","title":"Input","text":"Name Type Description~/input/detected_objects
autoware_auto_perception_msgs::msg::DetectedObjects
DetectedObjects ~/input/occupancy_grid_map
nav_msgs::msg::OccupancyGrid
OccupancyGrid with no time series calculation is preferred."},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs::msg::DetectedObjects
validated DetectedObjects"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#parameters","title":"Parameters","text":"Name Type Description mean_threshold
float The percentage threshold of allowed non-freespace. enable_debug
bool Whether to display debug images or not?"},{"location":"perception/detected_object_validation/occupancy-grid-based-validator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Currently, only vehicle represented as BoundingBox are supported.
"},{"location":"perception/detection_by_tracker/","title":"detection_by_tracker","text":""},{"location":"perception/detection_by_tracker/#detection_by_tracker","title":"detection_by_tracker","text":""},{"location":"perception/detection_by_tracker/#purpose","title":"Purpose","text":"This package feeds back the tracked objects to the detection module to keep it stable and keep detecting objects.
The detection by tracker takes as input an unknown object containing a cluster of points and a tracker. The unknown object is optimized to fit the size of the tracker so that it can continue to be detected.
"},{"location":"perception/detection_by_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The detection by tracker receives an unknown object containing a point cloud and a tracker, where the unknown object is mainly shape-fitted using euclidean clustering. Shape fitting using euclidean clustering and other methods has a problem called under segmentation and over segmentation.
Adapted from [3]
Simply looking at the overlap between the unknown object and the tracker does not work. We need to take measures for under segmentation and over segmentation.
"},{"location":"perception/detection_by_tracker/#policy-for-dealing-with-over-segmentation","title":"Policy for dealing with over segmentation","text":"~/input/initial_objects
tier4_perception_msgs::msg::DetectedObjectsWithFeature
unknown objects ~/input/tracked_objects
tier4_perception_msgs::msg::TrackedObjects
trackers"},{"location":"perception/detection_by_tracker/#output","title":"Output","text":"Name Type Description ~/output
autoware_auto_perception_msgs::msg::DetectedObjects
objects"},{"location":"perception/detection_by_tracker/#parameters","title":"Parameters","text":"Name Type Description Default value tracker_ignore_label.UNKNOWN
bool
If true, the node will ignore the tracker if its label is unknown. true
tracker_ignore_label.CAR
bool
If true, the node will ignore the tracker if its label is CAR. false
tracker_ignore_label.PEDESTRIAN
bool
If true, the node will ignore the tracker if its label is pedestrian. false
tracker_ignore_label.BICYCLE
bool
If true, the node will ignore the tracker if its label is bicycle. false
tracker_ignore_label.MOTORCYCLE
bool
If true, the node will ignore the tracker if its label is MOTORCYCLE. false
tracker_ignore_label.BUS
bool
If true, the node will ignore the tracker if its label is bus. false
tracker_ignore_label.TRUCK
bool
If true, the node will ignore the tracker if its label is truck. false
tracker_ignore_label.TRAILER
bool
If true, the node will ignore the tracker if its label is TRAILER. false
"},{"location":"perception/detection_by_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/detection_by_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/detection_by_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/detection_by_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"[1] M. Himmelsbach, et al. \"Tracking and classification of arbitrary objects with bottom-up/top-down detection.\" (2012).
[2] Arya Senna Abdul Rachman, Arya. \"3D-LIDAR Multi Object Tracking for Autonomous Driving: Multi-target Detection and Tracking under Urban Road Uncertainties.\" (2017).
[3] David Held, et al. \"A Probabilistic Framework for Real-time 3D Segmentation using Spatial, Temporal, and Semantic Cues.\" (2016).
"},{"location":"perception/detection_by_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/elevation_map_loader/","title":"elevation_map_loader","text":""},{"location":"perception/elevation_map_loader/#elevation_map_loader","title":"elevation_map_loader","text":""},{"location":"perception/elevation_map_loader/#purpose","title":"Purpose","text":"This package provides elevation map for compare_map_segmentation.
"},{"location":"perception/elevation_map_loader/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Generate elevation_map from subscribed pointcloud_map and vector_map and publish it. Save the generated elevation_map locally and load it from next time.
The elevation value of each cell is the average value of z of the points of the lowest cluster. Cells with No elevation value can be inpainted using the values of neighboring cells.
"},{"location":"perception/elevation_map_loader/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/elevation_map_loader/#input","title":"Input","text":"Name Type Description
input/pointcloud_map
sensor_msgs::msg::PointCloud2
The point cloud map input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
(Optional) The binary data of lanelet2 map input/pointcloud_map_metadata
autoware_map_msgs::msg::PointCloudMapMetaData
(Optional) The metadata of point cloud map"},{"location":"perception/elevation_map_loader/#output","title":"Output","text":"Name Type Description output/elevation_map
grid_map_msgs::msg::GridMap
The elevation map output/elevation_map_cloud
sensor_msgs::msg::PointCloud2
(Optional) The point cloud generated from the value of elevation map"},{"location":"perception/elevation_map_loader/#service","title":"Service","text":"Name Type Description service/get_selected_pcd_map
autoware_map_msgs::srv::GetSelectedPointCloudMap
(Optional) service to request point cloud map. If pointcloud_map_loader uses selected pointcloud map loading via ROS 2 service, use this."},{"location":"perception/elevation_map_loader/#parameters","title":"Parameters","text":""},{"location":"perception/elevation_map_loader/#node-parameters","title":"Node parameters","text":"Name Type Description Default value map_layer_name std::string elevation_map layer name elevation param_file_path std::string GridMap parameters config path_default elevation_map_directory std::string elevation_map file (bag2) path_default map_frame std::string map_frame when loading elevation_map file map use_inpaint bool Whether to inpaint empty cells true inpaint_radius float Radius of a circular neighborhood of each point inpainted that is considered by the algorithm [m] 0.3 use_elevation_map_cloud_publisher bool Whether to publish output/elevation_map_cloud
false use_lane_filter bool Whether to filter elevation_map with vector_map false lane_margin float Margin distance from the lane polygon of the area to be included in the inpainting mask [m]. Used only when use_lane_filter=True. 0.0 use_sequential_load bool Whether to get point cloud map by service false sequential_map_load_num int The number of point cloud maps to load at once (only used when use_sequential_load is set true). This should not be larger than number of all point cloud map cells. 1"},{"location":"perception/elevation_map_loader/#gridmap-parameters","title":"GridMap parameters","text":"The parameters are described on config/elevation_map_parameters.yaml
.
See: https://github.com/ANYbotics/grid_map/tree/ros2/grid_map_pcl
Resulting grid map parameters.
Name Type Description Default value pcl_grid_map_extraction/grid_map/min_num_points_per_cell int Minimum number of points in the point cloud that have to fall within any of the grid map cells. Otherwise the cell elevation will be set to NaN. 3 pcl_grid_map_extraction/grid_map/resolution float Resolution of the grid map. Width and length are computed automatically. 0.3 pcl_grid_map_extraction/grid_map/height_type int The parameter that determine the elevation of a cell0: Smallest value among the average values of each cluster
, 1: Mean value of the cluster with the most points
1 pcl_grid_map_extraction/grid_map/height_thresh float Height range from the smallest cluster (Only for height_type 1) 1.0"},{"location":"perception/elevation_map_loader/#point-cloud-pre-processing-parameters","title":"Point Cloud Pre-processing Parameters","text":""},{"location":"perception/elevation_map_loader/#rigid-body-transform-parameters","title":"Rigid body transform parameters","text":"Rigid body transform that is applied to the point cloud before computing elevation.
Name Type Description Default value pcl_grid_map_extraction/cloud_transform/translation float Translation (xyz) that is applied to the input point cloud before computing elevation. 0.0 pcl_grid_map_extraction/cloud_transform/rotation float Rotation (intrinsic rotation, convention X-Y'-Z'') that is applied to the input point cloud before computing elevation. 0.0"},{"location":"perception/elevation_map_loader/#cluster-extraction-parameters","title":"Cluster extraction parameters","text":"Cluster extraction is based on pcl algorithms. See https://pointclouds.org/documentation/tutorials/cluster_extraction.html for more details.
Name Type Description Default value pcl_grid_map_extraction/cluster_extraction/cluster_tolerance float Distance between points below which they will still be considered part of one cluster. 0.2 pcl_grid_map_extraction/cluster_extraction/min_num_points int Min number of points that a cluster needs to have (otherwise it will be discarded). 3 pcl_grid_map_extraction/cluster_extraction/max_num_points int Max number of points that a cluster can have (otherwise it will be discarded). 1000000"},{"location":"perception/elevation_map_loader/#outlier-removal-parameters","title":"Outlier removal parameters","text":"See https://pointclouds.org/documentation/tutorials/statistical_outlier.html for more explanation on outlier removal.
Name Type Description Default value pcl_grid_map_extraction/outlier_removal/is_remove_outliers float Whether to perform statistical outlier removal. false pcl_grid_map_extraction/outlier_removal/mean_K float Number of neighbors to analyze for estimating statistics of a point. 10 pcl_grid_map_extraction/outlier_removal/stddev_threshold float Number of standard deviations under which points are considered to be inliers. 1.0"},{"location":"perception/elevation_map_loader/#subsampling-parameters","title":"Subsampling parameters","text":"See https://pointclouds.org/documentation/tutorials/voxel_grid.html for more explanation on point cloud downsampling.
Name Type Description Default value pcl_grid_map_extraction/downsampling/is_downsample_cloud bool Whether to perform downsampling or not. false pcl_grid_map_extraction/downsampling/voxel_size float Voxel sizes (xyz) in meters. 0.02"},{"location":"perception/euclidean_cluster/","title":"euclidean_cluster","text":""},{"location":"perception/euclidean_cluster/#euclidean_cluster","title":"euclidean_cluster","text":""},{"location":"perception/euclidean_cluster/#purpose","title":"Purpose","text":"euclidean_cluster is a package for clustering points into smaller parts to classify objects.
This package has two clustering methods: euclidean_cluster
and voxel_grid_based_euclidean_cluster
.
pcl::EuclideanClusterExtraction
is applied to points. See official document for details.
pcl::VoxelGrid
.pcl::EuclideanClusterExtraction
.input
sensor_msgs::msg::PointCloud2
input pointcloud"},{"location":"perception/euclidean_cluster/#output","title":"Output","text":"Name Type Description output
tier4_perception_msgs::msg::DetectedObjectsWithFeature
cluster pointcloud debug/clusters
sensor_msgs::msg::PointCloud2
colored cluster pointcloud for visualization"},{"location":"perception/euclidean_cluster/#parameters","title":"Parameters","text":""},{"location":"perception/euclidean_cluster/#core-parameters","title":"Core Parameters","text":""},{"location":"perception/euclidean_cluster/#euclidean_cluster_2","title":"euclidean_cluster","text":"Name Type Description use_height
bool use point.z for clustering min_cluster_size
int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size
int the maximum number of points that a cluster needs to contain in order to be considered valid tolerance
float the spatial cluster tolerance as a measure in the L2 Euclidean space"},{"location":"perception/euclidean_cluster/#voxel_grid_based_euclidean_cluster_1","title":"voxel_grid_based_euclidean_cluster","text":"Name Type Description use_height
bool use point.z for clustering min_cluster_size
int the minimum number of points that a cluster needs to contain in order to be considered valid max_cluster_size
int the maximum number of points that a cluster needs to contain in order to be considered valid tolerance
float the spatial cluster tolerance as a measure in the L2 Euclidean space voxel_leaf_size
float the voxel leaf size of x and y min_points_number_per_voxel
int the minimum number of points for a voxel"},{"location":"perception/euclidean_cluster/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/euclidean_cluster/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/euclidean_cluster/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/euclidean_cluster/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/euclidean_cluster/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":"The use_height
option of voxel_grid_based_euclidean_cluster
isn't implemented yet.
This package contains a front vehicle velocity estimation for offline perception module analysis. This package can:
~/input/objects
autoware_auto_perception_msgs/msg/DetectedObject.msg 3D detected objects. ~/input/pointcloud
sensor_msgs/msg/PointCloud2.msg LiDAR pointcloud. ~/input/odometry
nav_msgs::msg::Odometry.msg Odometry data."},{"location":"perception/front_vehicle_velocity_estimator/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected object with twist. ~/debug/nearest_neighbor_pointcloud
sensor_msgs/msg/PointCloud2.msg The pointcloud msg of nearest neighbor point."},{"location":"perception/front_vehicle_velocity_estimator/#node-parameter","title":"Node parameter","text":"Name Type Description Default value update_rate_hz
double The update rate [hz]. 10.0"},{"location":"perception/front_vehicle_velocity_estimator/#core-parameter","title":"Core parameter","text":"Name Type Description Default value moving_average_num
int The moving average number for velocity estimation. 1 threshold_pointcloud_z_high
float The threshold for z position value of point when choosing nearest neighbor point within front vehicle [m]. If z > threshold_pointcloud_z_high
, the point is considered to be noise. 1.0f threshold_pointcloud_z_low
float The threshold for z position value of point when choosing nearest neighbor point within front vehicle [m]. If z < threshold_pointcloud_z_low
, the point is considered to be noise, such as the ground. 0.6f threshold_relative_velocity
double The threshold for min and max of estimated relative velocity (\\(v_{re}\\)) [m/s]. If \\(v_{re}\\) < - threshold_relative_velocity
, then \\(v_{re}\\) = - threshold_relative_velocity
. If \\(v_{re}\\) > threshold_relative_velocity
, then \\(v_{re}\\) = threshold_relative_velocity
. 10.0 threshold_absolute_velocity
double The threshold for max of estimated absolute velocity (\\(v_{ae}\\)) [m/s]. If \\(v_{ae}\\) > threshold_absolute_velocity
, then \\(v_{ae}\\) = threshold_absolute_velocity
. 20.0"},{"location":"perception/ground_segmentation/","title":"ground_segmentation","text":""},{"location":"perception/ground_segmentation/#ground_segmentation","title":"ground_segmentation","text":""},{"location":"perception/ground_segmentation/#purpose","title":"Purpose","text":"The ground_segmentation
is a node that remove the ground points from the input pointcloud.
Detail description of each ground segmentation algorithm is in the following links.
Filter Name Description Detail ray_ground_filter A method of removing the ground based on the geometrical relationship between points lined up on radiation link scan_ground_filter Almost the same method asray_ground_filter
, but with slightly improved performance link ransac_ground_filter A method of removing the ground by approximating the ground to a plane link"},{"location":"perception/ground_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/ground_segmentation/#input","title":"Input","text":"Name Type Description ~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/indices
pcl_msgs::msg::Indices
reference indices"},{"location":"perception/ground_segmentation/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"perception/ground_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/ground_segmentation/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description input_frame
string \" \" input frame id output_frame
string \" \" output frame id max_queue_size
int 5 max queue size of input/output topics use_indices
bool false flag to use pointcloud indices latched_indices
bool false flag to latch pointcloud indices approximate_sync
bool false flag to use approximate sync option"},{"location":"perception/ground_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"pointcloud_preprocessor::Filter
is implemented based on pcl_perception [1] because of this issue.
[1] https://github.com/ros-perception/perception_pcl/blob/ros2/pcl_ros/src/pcl_ros/filters/filter.cpp
"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/","title":"RANSAC Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#ransac-ground-filter","title":"RANSAC Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#purpose","title":"Purpose","text":"The purpose of this node is that remove the ground points from the input pointcloud.
"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Apply the input points to the plane, and set the points at a certain distance from the plane as points other than the ground. Normally, whn using this method, the input points is filtered so that it is almost flat before use. Since the drivable area is often flat, there are methods such as filtering by lane.
"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
base_frame
string base_link frame unit_axis
string The axis which we need to search ground plane max_iterations
int The maximum number of iterations outlier_threshold
double The distance threshold to the model [m] plane_slope_threshold
double The slope threshold to prevent mis-fitting [deg] voxel_size_x
double voxel size x [m] voxel_size_y
double voxel size y [m] voxel_size_z
double voxel size z [m] height_threshold
double The height threshold from ground plane for no ground points [m] debug
bool whether to output debug information"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"https://pcl.readthedocs.io/projects/tutorials/en/latest/planar_segmentation.html
"},{"location":"perception/ground_segmentation/docs/ransac-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/","title":"Ray Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#ray-ground-filter","title":"Ray Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#purpose","title":"Purpose","text":"The purpose of this node is that remove the ground points from the input pointcloud.
"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The points is separated radially (Ray), and the ground is classified for each Ray sequentially from the point close to ego-vehicle based on the geometric information such as the distance and angle between the points.
"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#inputs-outputs","title":"Inputs / Outputs","text":"This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
input_frame
string frame id of input pointcloud output_frame
string frame id of output pointcloud general_max_slope
double The triangle created by general_max_slope
is called the global cone. If the point is outside the global cone, it is judged to be a point that is not on the ground initial_max_slope
double Generally, the point where the object is first hit is far from the ego-vehicle because of the sensor blind spot, so the resolution differs from that point onward; this parameter exists to set a separate
local_max_slope
double The triangle created by local_max_slope
is called the local cone. This parameter is for classification based on the continuity of points min_height_threshold
double This parameter is used instead of height_threshold
because it's difficult to determine continuity in the local cone when the points are too close to each other. radial_divider_angle
double The angle of ray concentric_divider_distance
double Only check points which radius is larger than concentric_divider_distance
reclass_distance_threshold
double To check if the point is too far from the previous one; if so, classify it again min_x
double The parameter to set vehicle footprint manually max_x
double The parameter to set vehicle footprint manually min_y
double The parameter to set vehicle footprint manually max_y
double The parameter to set vehicle footprint manually"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The input_frame is set as parameter but it must be fixed as base_link for the current algorithm.
"},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/ground_segmentation/docs/ray-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/","title":"Scan Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#scan-ground-filter","title":"Scan Ground Filter","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#purpose","title":"Purpose","text":"The purpose of this node is that remove the ground points from the input pointcloud.
"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"This algorithm works by following steps,
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
input_frame
string \"base_link\" frame id of input pointcloud output_frame
string \"base_link\" frame id of output pointcloud global_slope_max_angle_deg
double 8.0 The global angle to classify as the ground or object [deg].A large threshold may reduce false positive of high slope road classification but it may lead to increase false negative of non-ground classification, particularly for small objects. local_slope_max_angle_deg
double 10.0 The local angle to classify as the ground or object [deg] when comparing with adjacent point.A small value enhance accuracy classification of object with inclined surface. This should be considered together with split_points_distance_tolerance
value. radial_divider_angle_deg
double 1.0 The angle which divide the whole pointcloud to sliced group [deg] split_points_distance_tolerance
double 0.2 The xy-distance threshold to distinguish far and near [m] split_height_distance
double 0.2 The height threshold to distinguish ground and non-ground points when comparing with adjacent points [m]. A small threshold improves classification of non-ground points, especially for lidars with high elevation resolution. However, it might cause false positives on small step-like road surfaces or in misaligned multi-lidar configurations. use_virtual_ground_point
bool true whether to use the ground center of front wheels as the virtual ground point. detection_range_z_max
float 2.5 Maximum height of detection range [m], applied only for elevation_grid_mode center_pcl_shift
float 0.0 The x-axis offset of additional LiDARs from the vehicle center of mass [m], recommended only for additional LiDARs in elevation_grid_mode non_ground_height_threshold
float 0.2 Height threshold of non-ground objects [m], used like split_height_distance
and applied only for elevation_grid_mode grid_mode_switch_radius
float 20.0 The distance at which the grid division mode changes from by-distance to by-vertical-angle [m], applied only for elevation_grid_mode grid_size_m
float 0.5 The first grid size [m], applied only for elevation_grid_mode. A large value enhances prediction stability for the ground surface, suitable for rough surfaces or multi-lidar configurations. gnd_grid_buffer_size
uint16 4 Number of grids used to estimate the local ground slope, applied only for elevation_grid_mode low_priority_region_x
float -20.0 The x threshold on the rear side (non-zero) beyond which small-object detection is low priority [m] elevation_grid_mode
bool true Elevation grid scan mode option use_recheck_ground_cluster
bool true Enable rechecking of the ground cluster"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The input_frame is set as a parameter, but it must be fixed to base_link for the current algorithm.
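As a rough illustration of how the slope thresholds above are used, the following minimal sketch (an assumption for illustration, not the node's actual implementation) treats a point as ground when the slope angle toward it stays below global_slope_max_angle_deg:

#include <cmath>

// A point at (radius, height) relative to the reference ground point is
// treated as ground if the slope angle toward it is below the threshold.
bool is_ground_by_global_slope(
  double radius, double height, double global_slope_max_angle_deg)
{
  const double slope_deg = std::atan2(height, radius) * 180.0 / M_PI;
  return std::fabs(slope_deg) < global_slope_max_angle_deg;
}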
"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"The elevation grid idea is referred from \"Shen Z, Liang H, Lin L, Wang Z, Huang W, Yu J. Fast Ground Segmentation for 3D LiDAR Point Cloud Based on Jump-Convolution-Process. Remote Sensing. 2021; 13(16):3239. https://doi.org/10.3390/rs13163239\"
"},{"location":"perception/ground_segmentation/docs/scan-ground-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":"heatmap_visualizer is a package for visualizing heatmap of detected 3D objects' positions on the BEV space.
This package is used for qualitative evaluation and trend analysis of a detector; for instance, the heatmap can show that the detector performs well near the ego vehicle but poorly at long range.
"},{"location":"perception/heatmap_visualizer/#how-to-run","title":"How to run","text":"ros2 launch heatmap_visualizer heatmap_visualizer.launch.xml input/objects:=<DETECTED_OBJECTS_TOPIC>\n
"},{"location":"perception/heatmap_visualizer/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"In this implementation, create heatmap of the center position of detected objects for each classes, for instance, CAR, PEDESTRIAN, etc, and publish them as occupancy grid maps.
In the above figure, pink represents areas of high detection frequency, blue represents low frequency, and black represents no detection.
As for the inner workings, the center positions of detected objects are added to the corresponding grid map cell indices in a buffer. The created heatmap is published every fixed number of frames, which can be specified with frame_count
. Note that the buffer the positions are added to is not reset on publishing. When publishing, these values are first normalized to [0, 1] using the maximum and minimum values in the buffer. Secondly, they are scaled to integers in [0, 100] because nav_msgs::msg::OccupancyGrid
only allows values in [0, 100] (see the sketch after the parameters table below).
~/input/objects
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects"},{"location":"perception/heatmap_visualizer/#output","title":"Output","text":"Name Type Description ~/output/objects/<CLASS_NAME>
nav_msgs::msg::OccupancyGrid
visualized heatmap"},{"location":"perception/heatmap_visualizer/#parameters","title":"Parameters","text":""},{"location":"perception/heatmap_visualizer/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description publish_frame_count
int 50
The number of frames after which the accumulated heatmap is published heatmap_frame_id
string base_link
The frame ID in which the heatmap is published heatmap_length
float 200.0
The side length of the map in meters heatmap_resolution
float 0.8
The resolution of the map use_confidence
bool false
Whether to use the confidence score as the heatmap value class_names
array [\"UNKNOWN\", \"CAR\", \"TRUCK\", \"BUS\", \"TRAILER\", \"BICYCLE\", \"MOTORBIKE\", \"PEDESTRIAN\"]
An array of class names to be published rename_to_car
bool true
Whether to rename car-like vehicles to car"},{"location":"perception/heatmap_visualizer/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The heatmap depends on the data used; if the objects in the data are sparse, the heatmap will also be sparse.
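Returning to the inner workings described above, the buffered values are normalized and scaled before publishing. A minimal sketch of that step (assumed names, not the package's actual code):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Normalize buffered hit counts to [0, 1], then scale to the int8 range
// [0, 100] accepted by nav_msgs::msg::OccupancyGrid::data.
std::vector<int8_t> to_occupancy_data(const std::vector<float> & buffer)
{
  if (buffer.empty()) {
    return {};
  }
  const auto [min_it, max_it] = std::minmax_element(buffer.begin(), buffer.end());
  const float min_v = *min_it;
  const float range = std::max(*max_it - min_v, 1e-6f);  // avoid division by zero
  std::vector<int8_t> data(buffer.size());
  for (std::size_t i = 0; i < buffer.size(); ++i) {
    const float normalized = (buffer[i] - min_v) / range;       // [0, 1]
    data[i] = static_cast<int8_t>(normalized * 100.0f + 0.5f);  // [0, 100]
  }
  return data;
}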
"},{"location":"perception/heatmap_visualizer/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/heatmap_visualizer/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/heatmap_visualizer/#referencesexternal-links","title":"References/External links","text":""},{"location":"perception/heatmap_visualizer/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/image_projection_based_fusion/","title":"image_projection_based_fusion","text":""},{"location":"perception/image_projection_based_fusion/#image_projection_based_fusion","title":"image_projection_based_fusion","text":""},{"location":"perception/image_projection_based_fusion/#purpose","title":"Purpose","text":"The image_projection_based_fusion
is a package that fuses detected obstacles (bounding boxes or segmentation) from images with a 3D pointcloud, or with obstacles (bounding boxes, clusters, or segmentation).
The offset between each camera and the lidar is set according to their shutter timing. After applying the offset to the timestamp, if the interval between the timestamp of the pointcloud topic and that of the roi message is less than the match threshold, the two messages are matched.
The current default values in autoware.universe for the TIER IV Robotaxi are: input_offset_ms: [61.67, 111.67, 45.0, 28.33, 78.33, 95.0] and match_threshold_ms: 30.0
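A minimal sketch of this offset-and-match rule (assumed names, not the package's actual code):

#include <cmath>

// An roi message matches a pointcloud if, after subtracting the
// camera-specific offset, the two timestamps differ by less than
// match_threshold_ms.
bool is_matched(
  double pointcloud_stamp_ms, double roi_stamp_ms,
  double input_offset_ms, double match_threshold_ms)
{
  const double compensated_roi_stamp_ms = roi_stamp_ms - input_offset_ms;
  return std::fabs(pointcloud_stamp_ms - compensated_roi_stamp_ms) < match_threshold_ms;
}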
"},{"location":"perception/image_projection_based_fusion/#fusion-and-timer","title":"fusion and timer","text":"The subscription status of the message is signed with 'O'.
1. If a pointcloud message is received under the condition below:
pointcloud roi msg 1 roi msg 2 roi msg 3 subscription status O O O If the roi msgs can be matched, fuse them and postprocess the pointcloud message. Otherwise, fuse the matched roi msgs and cache the pointcloud.
2. If a pointcloud message is received under the condition below:
pointcloud roi msg 1 roi msg 2 roi msg 3 subscription status O O If the roi msgs can be matched, fuse them and cache the pointcloud.
3. If a pointcloud message is received under the condition below:
pointcloud roi msg 1 roi msg 2 roi msg 3 subscription status O O O If roi msg 3 is received before the next pointcloud message arrives or the timeout fires, fuse it if matched; otherwise, keep waiting for roi msg 3. If roi msg 3 is not received before the next pointcloud message arrives or the timeout fires, postprocess the pointcloud message as it is.
The timeout threshold should be set according to the postprocessing time. For example, if the postprocessing time is around 50 ms, the timeout threshold should be set to less than 50 ms so that the whole processing time stays below 100 ms. The current default value in autoware.universe for the XX1 is timeout_ms: 50.0
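The bookkeeping behind cases 1-3 can be sketched as follows (assumed names and structure, not the package's actual code):

#include <bitset>
#include <cstddef>

constexpr std::size_t kRoiCount = 3;  // number of subscribed roi topics

struct FusionState {
  std::bitset<kRoiCount> fused;   // which roi msgs have been fused so far
  bool pointcloud_cached{false};  // a pointcloud is waiting for missing rois

  // Called when roi msg i matches the cached pointcloud (case 3); returns
  // true once every roi stream has been fused, i.e. the cached pointcloud
  // can be postprocessed now.
  bool on_roi_matched(std::size_t i)
  {
    fused.set(i);
    return fused.all();
  }

  // Called on timeout or when the next pointcloud arrives: postprocess the
  // cached pointcloud as it is and start over.
  void reset()
  {
    fused.reset();
    pointcloud_cached = false;
  }
};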
"},{"location":"perception/image_projection_based_fusion/#known-limits","title":"Known Limits","text":"The rclcpp::TimerBase timer could not break a for loop, therefore even if time is out when fusing a roi msg at the middle, the program will run until all msgs are fused.
"},{"location":"perception/image_projection_based_fusion/#detail-description-of-each-fusions-algorithm-is-in-the-following-links","title":"Detail description of each fusion's algorithm is in the following links","text":"Fusion Name Description Detail roi_cluster_fusion Overwrite a classification label of clusters by that of ROIs from a 2D object detector. link roi_detected_object_fusion Overwrite a classification label of detected objects by that of ROIs from a 2D object detector. link pointpainting_fusion Paint the point cloud with the ROIs from a 2D object detector and feed to a 3D object detector. link roi_pointcloud_fusion Matching pointcloud with ROIs from a 2D object detector to detect unknown-labeled objects link"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/","title":"pointpainting_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#pointpainting_fusion","title":"pointpainting_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#purpose","title":"Purpose","text":"The pointpainting_fusion
is a package for utilizing the class information detected by a 2D object detector in 3D object detection.
The lidar points are projected onto the output of an image-only 2D object detection network, and the class scores are appended to each point. The painted point cloud can then be fed to the CenterPoint network.
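A minimal sketch of the painting step (assumed names, not the package's actual code): a lidar point is projected into the image with the row-major 3x4 camera projection matrix, and the score of any roi containing it is written into the point's extra class channels.

#include <algorithm>
#include <array>
#include <vector>

struct Roi2d { double x_min, y_min, x_max, y_max; int class_id; float score; };

// P is a row-major 3x4 projection matrix (e.g., from camera_info);
// class_scores holds one painted channel per class.
bool paint_point(
  const std::array<double, 12> & P, double x, double y, double z,
  const std::vector<Roi2d> & rois, std::vector<float> & class_scores)
{
  const double u_h = P[0] * x + P[1] * y + P[2] * z + P[3];
  const double v_h = P[4] * x + P[5] * y + P[6] * z + P[7];
  const double w = P[8] * x + P[9] * y + P[10] * z + P[11];
  if (w <= 0.0) {
    return false;  // the point is behind the camera
  }
  const double u = u_h / w;
  const double v = v_h / w;
  bool painted = false;
  for (const auto & roi : rois) {
    if (u >= roi.x_min && u <= roi.x_max && v >= roi.y_min && v <= roi.y_max) {
      class_scores[roi.class_id] = std::max(class_scores[roi.class_id], roi.score);
      painted = true;
    }
  }
  return painted;
}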
"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#input","title":"Input","text":"Name Type Descriptioninput
sensor_msgs::msg::PointCloud2
pointcloud input/camera_info[0-7]
sensor_msgs::msg::CameraInfo
camera information to project 3d points onto image planes input/rois[0-7]
tier4_perception_msgs::msg::DetectedObjectsWithFeature
ROIs from each image input/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#output","title":"Output","text":"Name Type Description output
sensor_msgs::msg::PointCloud2
painted pointcloud ~/output/objects
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects ~/debug/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold
float 0.4
detected objects with score less than threshold are ignored densification_world_frame_id
string map
the world frame id to fuse multi-frame pointcloud densification_num_past_frames
int 0
the number of past frames to fuse with the current frame trt_precision
string fp16
TensorRT inference precision: fp32
or fp16
encoder_onnx_path
string \"\"
path to VoxelFeatureEncoder ONNX file encoder_engine_path
string \"\"
path to VoxelFeatureEncoder TensorRT Engine file head_onnx_path
string \"\"
path to DetectionHead ONNX file head_engine_path
string \"\"
path to DetectionHead TensorRT Engine file build_only
bool false
shutdown the node after TensorRT engine file is built"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":"[1] Vora, Sourabh, et al. \"PointPainting: Sequential fusion for 3d object detection.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
[2] CVPR'20 Workshop on Scalability in Autonomous Driving] Waymo Open Dataset Challenge: https://youtu.be/9g9GsI33ol8?t=535 Ding, Zhuangzhuang, et al. \"1st Place Solution for Waymo Open Dataset Challenge--3D Detection and Domain Adaptation.\" arXiv preprint arXiv:2006.15505 (2020).
"},{"location":"perception/image_projection_based_fusion/docs/pointpainting-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/","title":"roi_cluster_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#roi_cluster_fusion","title":"roi_cluster_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#purpose","title":"Purpose","text":"The roi_cluster_fusion
is a package for filtering clusters that are less likely to be objects and overwriting labels of clusters with that of Region Of Interests (ROIs) by a 2D object detector.
The clusters are projected onto image planes, and then if the ROIs of clusters and ROIs by a detector are overlapped, the labels of clusters are overwritten with that of ROIs by detector. Intersection over Union (IoU) is used to determine if there are overlaps between them.
"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#input","title":"Input","text":"Name Type Descriptioninput
tier4_perception_msgs::msg::DetectedObjectsWithFeature
clustered pointcloud input/camera_info[0-7]
sensor_msgs::msg::CameraInfo
camera information to project 3d points onto image planes input/rois[0-7]
tier4_perception_msgs::msg::DetectedObjectsWithFeature
ROIs from each image input/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#output","title":"Output","text":"Name Type Description output
tier4_perception_msgs::msg::DetectedObjectsWithFeature
labeled cluster pointcloud ~/debug/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#parameters","title":"Parameters","text":"The following figure is an inner pipeline overview of RoI cluster fusion node. Please refer to it for your parameter settings.
"},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Descriptionfusion_distance
double If the detected object's distance from the origin of frame_id is less than the threshold, the fusion will be performed trust_object_distance
double if the detected object's distance is less than the trust_object_distance
, trust_object_iou_mode
will be used, otherwise non_trust_object_iou_mode
will be used trust_object_iou_mode
string select a mode from 3 options {iou
, iou_x
, iou_y
} to calculate IoU in the range [0
, trust_distance
]. iou
: IoU along x-axis and y-axis iou_x
: IoU along x-axis iou_y
: IoU along y-axis non_trust_object_iou_mode
string the IoU mode used in the range [trust_distance
, fusion_distance
] if trust_distance
< fusion_distance
use_cluster_semantic_type
bool if false
, the labels of clusters are overwritten by UNKNOWN
before fusion only_allow_inside_cluster
bool if true
, only the clusters contained inside the RoIs of a detector are kept roi_scale_factor
double the scale factor for offset of detector RoIs if only_allow_inside_cluster=true
iou_threshold
double the IoU threshold to overwrite a label of clusters with a label of roi unknown_iou_threshold
double the IoU threshold to fuse a cluster with an unknown-labeled roi remove_unknown
bool if true
, remove all UNKNOWN
labeled objects from output rois_number
int the number of input rois debug_mode
bool If true
, subscribe and publish images for visualization."},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-cluster-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/","title":"roi_detected_object_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#roi_detected_object_fusion","title":"roi_detected_object_fusion","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#purpose","title":"Purpose","text":"The roi_detected_object_fusion
is a package to overwrite labels of detected objects with that of Region Of Interests (ROIs) by a 2D object detector.
In what follows, we describe the algorithm utilized by roi_detected_object_fusion
(the meaning of each parameter can be found in the Parameters
section):
existence_probability
of a detected object is greater than the threshold, it is accepted without any further processing and published in output
.output
. The Intersection over Union (IoU) is used to determine if there are overlaps between the detections from input
and the ROIs from input/rois
.The DetectedObject has three possible shape choices/implementations, where the polygon's vertices for each case are defined as follows:
BOUNDING_BOX
: The 8 corners of a bounding box.CYLINDER
: The circle is approximated by a hexagon.POLYGON
: Not implemented yet.input
autoware_auto_perception_msgs::msg::DetectedObjects
input detected objects input/camera_info[0-7]
sensor_msgs::msg::CameraInfo
camera information to project 3d points onto image planes. input/rois[0-7]
tier4_perception_msgs::msg::DetectedObjectsWithFeature
ROIs from each image. input/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization."},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#output","title":"Output","text":"Name Type Description output
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects ~/debug/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization ~/debug/fused_objects
autoware_auto_perception_msgs::msg::DetectedObjects
fused detected objects ~/debug/ignored_objects
autoware_auto_perception_msgs::msg::DetectedObjects
not fused detected objects"},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description rois_number
int the number of input rois debug_mode
bool If set to true
, the node subscribes to the image topic and publishes an image with debug drawings. passthrough_lower_bound_probability_thresholds
vector[double] If the existence_probability
of a detected object is greater than the threshold, it is published in output. trust_distances
vector[double] If the distance of a detected object from the origin of frame_id is greater than the threshold, it is published in output. min_iou_threshold
double If the iou between detected objects and rois is greater than min_iou_threshold
, the objects are classified as fused. use_roi_probability
float If set to true
, the algorithm uses existence_probability
of ROIs to match with that of detected objects. roi_probability_threshold
double If the existence_probability
of ROIs is greater than the threshold, matched detected objects are published in output
. can_assign_matrix
vector[int] association matrix between rois and detected_objects to check whether two rois on images can be matched"},{"location":"perception/image_projection_based_fusion/docs/roi-detected-object-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":"POLYGON
, which is a shape of a detected object, isn't supported yet.
The node roi_pointcloud_fusion
is to cluster the pointcloud based on Region Of Interests (ROIs) detected by a 2D object detector, specific for unknown labeled ROI.
input
sensor_msgs::msg::PointCloud2
input pointcloud input/camera_info[0-7]
sensor_msgs::msg::CameraInfo
camera information to project 3d points onto image planes input/rois[0-7]
tier4_perception_msgs::msg::DetectedObjectsWithFeature
ROIs from each image input/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization"},{"location":"perception/image_projection_based_fusion/docs/roi-pointcloud-fusion/#output","title":"Output","text":"Name Type Description output
sensor_msgs::msg::PointCloud2
output pointcloud as default of interface output_clusters
tier4_perception_msgs::msg::DetectedObjectsWithFeature
output clusters debug/clusters
sensor_msgs/msg/PointCloud2
colored cluster pointcloud for visualization"},{"location":"perception/image_projection_based_fusion/docs/roi-pointcloud-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/roi-pointcloud-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description min_cluster_size
int the minimum number of points that a cluster needs to contain in order to be considered valid cluster_2d_tolerance
double cluster tolerance measured in radial direction rois_number
int the number of input rois"},{"location":"perception/image_projection_based_fusion/docs/roi-pointcloud-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The node segmentation_pointcloud_fusion
is a package for filtering pointcloud that are belong to less interesting region which is defined by semantic or instance segmentation by 2D image segmentation model.
input
sensor_msgs::msg::PointCloud2
input pointcloud input/camera_info[0-7]
sensor_msgs::msg::CameraInfo
camera information to project 3d points onto image planes input/rois[0-7]
tier4_perception_msgs::msg::Image
semantic segmentation mask image input/image_raw[0-7]
sensor_msgs::msg::Image
images for visualization"},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#output","title":"Output","text":"Name Type Description output
sensor_msgs::msg::PointCloud2
output filtered pointcloud"},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#parameters","title":"Parameters","text":""},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#core-parameters","title":"Core Parameters","text":"Name Type Description rois_number
int the number of input rois"},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/image_projection_based_fusion/docs/segmentation-pointcloud-fusion/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/lidar_apollo_instance_segmentation/","title":"lidar_apollo_instance_segmentation","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#lidar_apollo_instance_segmentation","title":"lidar_apollo_instance_segmentation","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#purpose","title":"Purpose","text":"This node segments 3D pointcloud data from lidar sensors into obstacles, e.g., cars, trucks, bicycles, and pedestrians based on CNN based model and obstacle clustering method.
"},{"location":"perception/lidar_apollo_instance_segmentation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"See the original design by Apollo.
"},{"location":"perception/lidar_apollo_instance_segmentation/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#input","title":"Input","text":"Name Type Descriptioninput/pointcloud
sensor_msgs/PointCloud2
Pointcloud data from lidar sensors"},{"location":"perception/lidar_apollo_instance_segmentation/#output","title":"Output","text":"Name Type Description output/labeled_clusters
tier4_perception_msgs/DetectedObjectsWithFeature
Detected objects with labeled pointcloud cluster. debug/instance_pointcloud
sensor_msgs/PointCloud2
Segmented pointcloud for visualization."},{"location":"perception/lidar_apollo_instance_segmentation/#parameters","title":"Parameters","text":""},{"location":"perception/lidar_apollo_instance_segmentation/#node-parameters","title":"Node Parameters","text":"None
"},{"location":"perception/lidar_apollo_instance_segmentation/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Descriptionscore_threshold
double 0.8 If the score of a detected object is lower than this value, the object is ignored. range
int 60 Half of the length of feature map sides. [m] width
int 640 The grid width of feature map. height
int 640 The grid height of feature map. engine_file
string \"vls-128.engine\" The name of TensorRT engine file for CNN model. prototxt_file
string \"vls-128.prototxt\" The name of prototxt file for CNN model. caffemodel_file
string \"vls-128.caffemodel\" The name of caffemodel file for CNN model. use_intensity_feature
bool true The flag to use intensity feature of pointcloud. use_constant_feature
bool false The flag to use direction and distance feature of pointcloud. target_frame
string \"base_link\" Pointcloud data is transformed into this frame. z_offset
int 2 z offset from target frame. [m]"},{"location":"perception/lidar_apollo_instance_segmentation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"There is no training code for CNN model.
"},{"location":"perception/lidar_apollo_instance_segmentation/#note","title":"Note","text":"This package makes use of three external codes. The trained files are provided by apollo. The trained files are automatically downloaded when you build.
Original URL
Supported lidars are velodyne 16, 64 and 128, but you can also use velodyne 32 and other lidars with good accuracy.
apollo 3D Obstacle Perception description
/******************************************************************************\n* Copyright 2017 The Apollo Authors. All Rights Reserved.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n* http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*****************************************************************************/\n
tensorRTWrapper : It is used under the lib directory.
MIT License\n\nCopyright (c) 2018 lewes6369\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n
autoware_perception description
/*\n* Copyright 2018-2019 Autoware Foundation. All rights reserved.\n*\n* Licensed under the Apache License, Version 2.0 (the \"License\");\n* you may not use this file except in compliance with the License.\n* You may obtain a copy of the License at\n*\n* http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n
This package will not run without a neural network for its inference. The network is provided by ansible script during the installation of Autoware or can be downloaded manually according to Manual Downloading. This package uses 'get_neural_network' function from tvm_utility package to create and provide proper dependency. See its design page for more information on how to handle user-compiled networks.
"},{"location":"perception/lidar_apollo_segmentation_tvm/#backend","title":"Backend","text":"The backend used for the inference can be selected by setting the lidar_apollo_segmentation_tvm_BACKEND
cmake variable. The current available options are llvm
for a CPU backend, and vulkan
for a GPU backend. It defaults to llvm
.
See the original design by Apollo. The paragraph of interest goes up to, but excluding, the \"MinBox Builder\" paragraph. This package instead relies on further processing by a dedicated shape estimator.
Note: the parameters described in the original design have been modified and are out of date.
"},{"location":"perception/lidar_apollo_segmentation_tvm/#inputs-outputs-api","title":"Inputs / Outputs / API","text":"The package exports a boolean lidar_apollo_segmentation_tvm_BUILT
cmake variable.
Lidar segmentation is based off a core algorithm by Apollo, with modifications from [TIER IV] (https://github.com/tier4/lidar_instance_segmentation_tvm) for the TVM backend.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/","title":"Index","text":""},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#lidar_apollo_segmentation_tvm_nodes","title":"lidar_apollo_segmentation_tvm_nodes","text":""},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#purpose-use-cases","title":"Purpose / Use cases","text":"An alternative to Euclidean clustering. This node detects and labels foreground obstacles (e.g. cars, motorcycles, pedestrians) from a point cloud, using a neural network.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#design","title":"Design","text":"See the design of the algorithm in the core (lidar_apollo_segmentation_tvm) package's design documents.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#usage","title":"Usage","text":"lidar_apollo_segmentation_tvm
and lidar_apollo_segmentation_tvm_nodes
will not work without a neural network. See the lidar_apollo_segmentation_tvm usage for more information.
The original node from Apollo has a Region Of Interest (ROI) filter. This has the benefit of working with a filtered point cloud that includes only the points inside the ROI (i.e., the drivable road and junction areas) with most of the background obstacles removed (such as buildings and trees around the road region). Not having this filter may negatively impact performance.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#inputs","title":"Inputs","text":"The input are non-ground points as a PointCloud2 message from the sensor_msgs package.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#outputs","title":"Outputs","text":"The output is a DetectedObjectsWithFeature.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#parameters","title":"Parameters","text":"Name Type Description Default Range range integer The range of the 2D grid with respect to the origin. 90 >0 score_threshold float The detection confidence score threshold for filtering out the candidate clusters in the post-processing step. 0.1 \u22650.0\u22641.0 use_intensity_feature boolean Enable input channel intensity feature. false N/A use_constant_feature boolean Enable input channel constant feature. false N/A z_offset float Vertical translation of the pointcloud before inference. 0.0 N/A min_height float The minimum height with respect to the origin -5.0 N/A max_height float The maximum height with respect to the origin. 5.0 N/A objectness_thresh float The threshold of objectness for filtering out non-object cells in the obstacle clustering step. 0.5 \u22650.0\u22641.0 min_pts_num integer In the post-processing step, the candidate clusters with less than min_pts_num points are removed. 3 \u22650 height_thresh float If it is non-negative, the points that are higher than the predicted object height by height_thresh are filtered out in the post-processing step. 0.5 N/A data_path string Packages data and artifacts directory path. $(env HOME)/autoware_data N/A"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#error-detection-and-handling","title":"Error detection and handling","text":"Abort and warn when the input frame can't be converted to base_link
.
Both the input and output are controlled by the same actor, so the following security concerns are out-of-scope:
Leaking data to another actor would require a flaw in TVM or the host operating system that allows arbitrary memory to be read, a significant security flaw in itself. This is also true for an external actor operating the pipeline early: only the object that initiated the pipeline can run the methods to receive its output.
A Denial-of-Service attack could make the target hardware unusable for other pipelines but would require being able to run code on the CPU, which would already allow a more severe Denial-of-Service attack.
No elevation of privilege is required for this package.
"},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#related-issues","title":"Related issues","text":""},{"location":"perception/lidar_apollo_segmentation_tvm_nodes/#226-autowareauto-neural-networks-inference-architecture-design","title":"226: Autoware.Auto Neural Networks Inference Architecture Design","text":""},{"location":"perception/lidar_centerpoint/","title":"lidar_centerpoint","text":""},{"location":"perception/lidar_centerpoint/#lidar_centerpoint","title":"lidar_centerpoint","text":""},{"location":"perception/lidar_centerpoint/#purpose","title":"Purpose","text":"lidar_centerpoint is a package for detecting dynamic 3D objects.
"},{"location":"perception/lidar_centerpoint/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"In this implementation, CenterPoint [1] uses a PointPillars-based [2] network to inference with TensorRT.
We trained the models using https://github.com/open-mmlab/mmdetection3d.
"},{"location":"perception/lidar_centerpoint/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/lidar_centerpoint/#input","title":"Input","text":"Name Type Description~/input/pointcloud
sensor_msgs::msg::PointCloud2
input pointcloud"},{"location":"perception/lidar_centerpoint/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects debug/cyclic_time_ms
tier4_debug_msgs::msg::Float64Stamped
cyclic time (msg) debug/processing_time_ms
tier4_debug_msgs::msg::Float64Stamped
processing time (ms)"},{"location":"perception/lidar_centerpoint/#parameters","title":"Parameters","text":""},{"location":"perception/lidar_centerpoint/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold
float 0.4
detected objects with score less than threshold are ignored densification_world_frame_id
string map
the world frame id to fuse multi-frame pointcloud densification_num_past_frames
int 1
the number of past frames to fuse with the current frame trt_precision
string fp16
TensorRT inference precision: fp32
or fp16
encoder_onnx_path
string \"\"
path to VoxelFeatureEncoder ONNX file encoder_engine_path
string \"\"
path to VoxelFeatureEncoder TensorRT Engine file head_onnx_path
string \"\"
path to DetectionHead ONNX file head_engine_path
string \"\"
path to DetectionHead TensorRT Engine file nms_iou_target_class_names
list[string] - target classes for IoU-based Non Maximum Suppression nms_iou_search_distance_2d
double - If two objects are farther than the value, NMS isn't applied. nms_iou_threshold
double - IoU threshold for the IoU-based Non Maximum Suppression build_only
bool false
shutdown the node after TensorRT engine file is built"},{"location":"perception/lidar_centerpoint/#assumptions-known-limits","title":"Assumptions / Known limits","text":"object.existence_probability
is stored the value of classification confidence of a DNN, not probability.You can download the onnx format of trained models by clicking on the links below.
Centerpoint
was trained in nuScenes
(~28k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs. Centerpoint tiny
was trained in Argoverse 2
(~110k lidar frames) [9] and TIER IV's internal database (~11k lidar frames) for 20 epochs.
In addition to its use as a standard ROS node, lidar_centerpoint
can also be used to perform inferences in an isolated manner. To do so, execute the following launcher, where pcd_path
is the path of the pointcloud to be used for inference.
ros2 launch lidar_centerpoint single_inference_lidar_centerpoint.launch.xml pcd_path:=test_pointcloud.pcd detections_path:=test_detections.ply\n
lidar_centerpoint
generates a ply
file in the provided detections_path
, which contains the detections as triangle meshes. These detections can be visualized by most 3D tools, but we also integrate a visualization UI using Open3D
which is launched alongside lidar_centerpoint
.
centerpoint
pts_voxel_encoder pts_backbone_neck_head There is a single change due to the limitation in the implementation of this package. num_filters=[32, 32]
of PillarFeatureNet
centerpoint_tiny
pts_voxel_encoder pts_backbone_neck_head The same model as default
of v0
. These changes are compared with this configuration.
"},{"location":"perception/lidar_centerpoint/#v0-20211203","title":"v0 (2021/12/03)","text":"Name URLs Descriptiondefault
pts_voxel_encoder pts_backbone_neck_head There are two changes from the original CenterPoint architecture. num_filters=[32]
of PillarFeatureNet
and ds_layer_strides=[2, 2, 2]
of RPN
"},{"location":"perception/lidar_centerpoint/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/lidar_centerpoint/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/lidar_centerpoint/#referencesexternal-links","title":"References/External links","text":"[1] Yin, Tianwei, Xingyi Zhou, and Philipp Kr\u00e4henb\u00fchl. \"Center-based 3d object detection and tracking.\" arXiv preprint arXiv:2006.11275 (2020).
[2] Lang, Alex H., et al. \"PointPillars: Fast encoders for object detection from point clouds.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[3] https://github.com/tianweiy/CenterPoint
[4] https://github.com/open-mmlab/mmdetection3d
[5] https://github.com/open-mmlab/OpenPCDet
[6] https://github.com/yukkysaito/autoware_perception
[7] https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars
[8] https://www.nuscenes.org/nuscenes
[9] https://www.argoverse.org/av2.html
"},{"location":"perception/lidar_centerpoint/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/","title":"Run lidar_centerpoint and lidar_centerpoint-tiny simultaneously","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#run-lidar_centerpoint-and-lidar_centerpoint-tiny-simultaneously","title":"Run lidar_centerpoint and lidar_centerpoint-tiny simultaneously","text":"This tutorial is for showing centerpoint
and centerpoint_tiny
models\u2019 results simultaneously, making it easier to visualize and compare the performance.
Follow the steps in the Source Installation (link) in Autoware doc.
If you fail to build autoware environment according to lack of memory, then it is recommended to build autoware sequentially.
Source the ROS 2 Galactic setup script.
source /opt/ros/galactic/setup.bash\n
Build the entire autoware repository.
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --parallel-workers=1\n
Or you can use a constrained number of CPU to build only one package.
export MAKEFLAGS=\"-j 4\" && MAKE_JOBS=4 colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --parallel-workers 1 --packages-select PACKAGE_NAME\n
Source the package.
source install/setup.bash\n
"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#data-preparation","title":"Data Preparation","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#using-rosbag-dataset","title":"Using rosbag dataset","text":"ros2 bag play /YOUR/ROSBAG/PATH/ --clock 100\n
Don't forget to add clock
in order to sync between two rviz display.
You can also use the sample rosbag provided by autoware here.
If you want to merge several rosbags into one, you can refer to this tool.
"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#using-realtime-lidar-dataset","title":"Using realtime LiDAR dataset","text":"Set up your Ethernet connection according to 1.1 - 1.3 in this website.
Download Velodyne ROS driver
git clone -b ros2 https://github.com/ros-drivers/velodyne.git\n
Source the ROS 2 Galactic setup script.
source /opt/ros/galactic/setup.bash\n
Compile Velodyne driver
cd velodyne\nrosdep install -y --from-paths . --ignore-src --rosdistro $ROS_DISTRO\ncolcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release\n
Edit the configuration file. Specify the LiDAR device IP address in ./velodyne_driver/config/VLP32C-velodyne_driver_node-params.yaml
velodyne_driver_node:\nros__parameters:\ndevice_ip: 192.168.1.201 //change to your LiDAR device IP address\ngps_time: false\ntime_offset: 0.0\nenabled: true\nread_once: false\nread_fast: false\nrepeat_delay: 0.0\nframe_id: velodyne\nmodel: 32C\nrpm: 600.0\nport: 2368\n
Launch the velodyne driver.
# Terminal 1\nros2 launch velodyne_driver velodyne_driver_node-VLP32C-launch.py\n
Launch the velodyne_pointcloud.
# Terminal 2\nros2 launch velodyne_pointcloud velodyne_convert_node-VLP32C-launch.py\n
Point Cloud data will be available on topic /velodyne_points
. You can check with ros2 topic echo /velodyne_points
.
Check this website if there is any unexpected issue.
"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#launch-file-setting","title":"Launch file setting","text":"Several fields to check in centerpoint_vs_centerpoint-tiny.launch.xml
before running lidar centerpoint.
input/pointcloud
: set to the topic with input data you want to subscribe.model_path
: set to the path of the model.model_param_path
: set to the path of model's config file.Run
ros2 launch lidar_centerpoint centerpoint_vs_centerpoint-tiny.launch.xml\n
Then you will see two rviz window show immediately. On the left is the result for lidar centerpoint tiny, and on the right is the result for lidar centerpoint.
"},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#troubleshooting","title":"Troubleshooting","text":""},{"location":"perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/#bounding-box-blink-on-rviz","title":"Bounding Box blink on rviz","text":"To avoid Bounding Boxes blinking on rviz, you can extend bbox marker lifetime.
Set marker_ptr->lifetime
and marker.lifetime
to a longer lifetime.
marker_ptr->lifetime
are in PATH/autoware/src/universe/autoware.universe/common/autoware_auto_perception_rviz_plugin/src/object_detection/object_polygon_detail.cpp
marker.lifetime
are in PATH/autoware/src/universe/autoware.universe/common/tier4_autoware_utils/include/tier4_autoware_utils/ros/marker_helper.hpp
Make sure to rebuild packages after any change.
"},{"location":"perception/lidar_centerpoint_tvm/","title":"lidar_centerpoint_tvm","text":""},{"location":"perception/lidar_centerpoint_tvm/#lidar_centerpoint_tvm","title":"lidar_centerpoint_tvm","text":""},{"location":"perception/lidar_centerpoint_tvm/#design","title":"Design","text":""},{"location":"perception/lidar_centerpoint_tvm/#usage","title":"Usage","text":"lidar_centerpoint_tvm is a package for detecting dynamic 3D objects using TVM compiled centerpoint module for different backends. To use this package, replace lidar_centerpoint
with lidar_centerpoint_tvm
in perception launch files(for example, lidar_based_detection.launch.xml
is lidar based detection is chosen.).
This package will not build without a neural network for its inference. The network is provided by the tvm_utility
package. See its design page for more information on how to enable downloading pre-compiled networks (by setting the DOWNLOAD_ARTIFACTS
cmake variable), or how to handle user-compiled networks.
The backend used for the inference can be selected by setting the lidar_centerpoint_tvm_BACKEND
cmake variable. The current available options are llvm
for a CPU backend, and vulkan
or opencl
for a GPU backend. It defaults to llvm
.
~/input/pointcloud
sensor_msgs::msg::PointCloud2
input pointcloud"},{"location":"perception/lidar_centerpoint_tvm/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects debug/cyclic_time_ms
tier4_debug_msgs::msg::Float64Stamped
cyclic time (msg) debug/processing_time_ms
tier4_debug_msgs::msg::Float64Stamped
processing time (ms)"},{"location":"perception/lidar_centerpoint_tvm/#parameters","title":"Parameters","text":""},{"location":"perception/lidar_centerpoint_tvm/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold
float 0.1
detected objects with score less than threshold are ignored densification_world_frame_id
string map
the world frame id to fuse multi-frame pointcloud densification_num_past_frames
int 1
the number of past frames to fuse with the current frame"},{"location":"perception/lidar_centerpoint_tvm/#bounding-box","title":"Bounding Box","text":"The lidar segmentation node establishes a bounding box for the detected obstacles. The L-fit
method of fitting a bounding box to a cluster is used for that.
Due to an accuracy issue of centerpoint
model, vulkan
cannot be used at the moment. As for 'llvm' backend, real-time performance cannot be achieved.
Scatter function can be implemented using either TVMScript or C++. For C++ implementation, please refer to https://github.com/angry-crab/autoware.universe/blob/c020419fe52e359287eccb1b77e93bdc1a681e24/perception/lidar_centerpoint_tvm/lib/network/scatter.cpp#L65
"},{"location":"perception/lidar_centerpoint_tvm/#reference","title":"Reference","text":"[1] Yin, Tianwei, Xingyi Zhou, and Philipp Kr\u00e4henb\u00fchl. \"Center-based 3d object detection and tracking.\" arXiv preprint arXiv:2006.11275 (2020).
[2] Lang, Alex H., et al. \"PointPillars: Fast encoders for object detection from point clouds.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[3] https://github.com/tianweiy/CenterPoint
[4] https://github.com/Abraham423/CenterPoint
[5] https://github.com/open-mmlab/OpenPCDet
"},{"location":"perception/lidar_centerpoint_tvm/#related-issues","title":"Related issues","text":""},{"location":"perception/lidar_centerpoint_tvm/#908-run-lidar-centerpoint-with-tvm","title":"908: Run Lidar Centerpoint with TVM","text":""},{"location":"perception/map_based_prediction/","title":"map_based_prediction","text":""},{"location":"perception/map_based_prediction/#map_based_prediction","title":"map_based_prediction","text":""},{"location":"perception/map_based_prediction/#role","title":"Role","text":"map_based_prediction
is a module to predict the future paths (and their probabilities) of other vehicles and pedestrians according to the shape of the map and the surrounding environment.
Store time-series data of objects to determine the vehicle's route and to detect lane change for several duration. Object Data contains the object's position, speed, and time information.
"},{"location":"perception/map_based_prediction/#get-current-lanelet-and-update-object-history","title":"Get current lanelet and update Object history","text":"Search one or more lanelets satisfying the following conditions for each target object and store them in the ObjectData.
diff_yaw < threshold or diff_yaw > pi - threshold
.Lane Follow
, Left Lane Change
, and Right Lane Change
based on the object history and the reference path obtained in the first step.The conditions for left lane change detection are:
dist_threshold_to_bound_
.time_threshold_to_bound_
.Lane change logics is illustrated in the figure below.An example of how to tune the parameters is described later.
Currently we provide three parameters to tune lane change detection:
dist_threshold_to_bound_
: maximum distance from lane boundary allowed for lane changing vehicletime_threshold_to_bound_
: maximum time allowed for lane change vehicle to reach the boundarycutoff_freq_of_velocity_lpf_
: cutoff frequency of low pass filter for lateral velocityYou can change these parameters in rosparam in the table below.
param name default valuedist_threshold_for_lane_change_detection
1.0
[m] time_threshold_for_lane_change_detection
5.0
[s] cutoff_freq_of_velocity_for_lane_change_detection
0.1
[Hz]"},{"location":"perception/map_based_prediction/#tuning-threshold-parameters","title":"Tuning threshold parameters","text":"Increasing these two parameters will slow down and stabilize the lane change estimation.
Normally, we recommend tuning only time_threshold_for_lane_change_detection
because it is the more important factor for lane change decision.
Lateral velocity calculation is also a very important factor for lane change decision because it is used in the time domain decision.
The predicted time to reach the lane boundary is calculated by
\\[ t_{predicted} = \\dfrac{d_{lat}}{v_{lat}} \\]where \\(d_{lat}\\) and \\(v_{lat}\\) represent the lateral distance to the lane boundary and the lateral velocity, respectively.
Lowering the cutoff frequency of the low-pass filter for lateral velocity will make the lane change decision more stable but slower. Our setting is very conservative, so you may increase this parameter if you want to make the lane change decision faster.
For the additional information, here we show how we calculate lateral velocity.
lateral velocity calculation method equation description [applied] time derivative of lateral distance \\(\\dfrac{\\Delta d_{lat}}{\\Delta t}\\) Currently, we use this method to deal with winding roads. Since this time differentiation easily becomes noisy, we also use a low-pass filter to get smoothed velocity. [not applied] Object Velocity Projection to Lateral Direction \\(v_{obj} \\sin(\\theta)\\) Normally, object velocities are less noisy than the time derivative of lateral distance. But the yaw difference \\(\\theta\\) between the lane and object directions sometimes becomes discontinuous, so we did not adopt this method.Currently, we use the upper method with a low-pass filter to calculate lateral velocity.
"},{"location":"perception/map_based_prediction/#path-generation","title":"Path generation","text":"Path generation is generated on the frenet frame. The path is generated by the following steps:
See paper [2] for more details.
"},{"location":"perception/map_based_prediction/#tuning-lateral-path-shape","title":"Tuning lateral path shape","text":"lateral_control_time_horizon
parameter supports the tuning of the lateral path shape. This parameter is used to calculate the time to reach the reference path. The smaller the value, the more the path will be generated to reach the reference path quickly. (Mostly the center of the lane.)
It is possible to apply a maximum lateral acceleration constraint to generated vehicle paths. This check verifies if it is possible for the vehicle to perform the predicted path without surpassing a lateral acceleration threshold max_lateral_accel
when taking a curve. If it is not possible, it checks if the vehicle can slow down on time to take the curve with a deceleration of min_acceleration_before_curve
and comply with the constraint. If that is also not possible, the path is eliminated.
Currently we provide three parameters to tune the lateral acceleration constraint:
check_lateral_acceleration_constraints_
: to enable the constraint check.max_lateral_accel_
: max acceptable lateral acceleration for predicted paths (absolute value).min_acceleration_before_curve_
: the minimum acceleration the vehicle would theoretically use to slow down before a curve is taken (must be negative).You can change these parameters in rosparam in the table below.
param name default valuecheck_lateral_acceleration_constraints
false
[bool] max_lateral_accel
2.0
[m/s^2] min_acceleration_before_curve
-2.0
[m/s^2]"},{"location":"perception/map_based_prediction/#using-vehicle-acceleration-for-path-prediction-for-vehicle-obstacles","title":"Using Vehicle Acceleration for Path Prediction (for Vehicle Obstacles)","text":"By default, the map_based_prediction
module uses the current obstacle's velocity to compute its predicted path length. However, it is possible to use the obstacle's current acceleration to calculate its predicted path's length.
Since this module tries to predict the vehicle's path several seconds into the future, it is not practical to consider the current vehicle's acceleration as constant (it is not assumed the vehicle will be accelerating for prediction_time_horizon
seconds after detection). Instead, a decaying acceleration model is used. With the decaying acceleration model, a vehicle's acceleration is modeled as:
$\\ a(t) = a_{t0} \\cdot e^{-\\lambda \\cdot t} $
where $\\ a_{t0} $ is the vehicle acceleration at the time of detection $\\ t0 $, and $\\ \\lambda $ is the decay constant $\\ \\lambda = \\ln(2) / hl $ and $\\ hl $ is the exponential's half life.
Furthermore, the integration of $\\ a(t) $ over time gives us equations for velocity, $\\ v(t) $ and distance $\\ x(t) $ as:
$\\ v(t) = v{t0} + a * (1/\\lambda) \\cdot (1 - e^{-\\lambda \\cdot t}) $
and
$\\ x(t) = x{t0} + (v + a{t0} * (1/\\lambda)) \\cdot t + a(1/\u03bb^2)(e^{-\\lambda \\cdot t} - 1) $
With this model, the influence of the vehicle's detected instantaneous acceleration on the predicted path's length is diminished but still considered. This feature also considers that the obstacle might not accelerate past its road's speed limit (multiplied by a tunable factor).
Currently, we provide three parameters to tune the use of obstacle acceleration for path prediction:
use_vehicle_acceleration
: to enable the feature.acceleration_exponential_half_life
: The decaying acceleration model considers that the current vehicle acceleration will be halved after this many seconds.speed_limit_multiplier
: Set the vehicle type obstacle's maximum predicted speed to the legal speed limit of its lanelet multiplied by this value. This value should be greater than or equal to 1.0. You can change these parameters via rosparam in the table below.
| param name | default value |
| ---- | ---- |
| use_vehicle_acceleration | false [bool] |
| acceleration_exponential_half_life | 2.5 [s] |
| speed_limit_multiplier | 1.5 [] |

"},{"location":"perception/map_based_prediction/#path-prediction-for-crosswalk-users","title":"Path prediction for crosswalk users","text":"This module treats pedestrians and bicycles as objects using the crosswalk, and outputs predicted paths based on the map and the estimated object velocity, assuming the object intends to cross the crosswalk, if the object satisfies at least one of the following conditions:
If there is a reachable crosswalk entry point within the prediction_time_horizon
and the object satisfies the above conditions, this module outputs an additional predicted path that crosses to the opposite side via the crosswalk entry point.
If the target object is inside the road or crosswalk, this module outputs one or two additional predicted path(s) to reach the exit point of the crosswalk. The number of predicted paths depends on whether the object is moving or not. If the object is moving, this module outputs one predicted path toward the exit point that lies in the direction of the object's movement. On the other hand, if the object has stopped, it is impossible to infer which exit point the object wants to reach, so this module outputs two predicted paths, one toward each exit point.
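A minimal sketch of this exit point selection (the types and names are illustrative, not the module's actual API):

```cpp
#include <vector>

// Minimal sketch of the crosswalk exit point selection described above.
struct Point2d { double x; double y; };

std::vector<Point2d> selectExitPoints(
  const Point2d & exit_a, const Point2d & exit_b, bool object_is_moving,
  const Point2d & exit_in_movement_direction)
{
  if (object_is_moving) {
    // Moving object: one path, toward the exit point in the movement direction.
    return {exit_in_movement_direction};
  }
  // Stopped object: the intended exit cannot be inferred, so predict paths
  // toward both exit points.
  return {exit_a, exit_b};
}
```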
"},{"location":"perception/map_based_prediction/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/map_based_prediction/#input","title":"Input","text":"Name Type Description~/perception/object_recognition/tracking/objects
autoware_auto_perception_msgs::msg::TrackedObjects
tracking objects without predicted path. ~/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
binary data of Lanelet2 Map."},{"location":"perception/map_based_prediction/#output","title":"Output","text":"Name Type Description ~/input/objects
autoware_auto_perception_msgs::msg::TrackedObjects
tracking objects. Default is set to /perception/object_recognition/tracking/objects
~/output/objects
autoware_auto_perception_msgs::msg::PredictedObjects
tracking objects with predicted path. ~/objects_path_markers
visualization_msgs::msg::MarkerArray
marker for visualization."},{"location":"perception/map_based_prediction/#parameters","title":"Parameters","text":"Parameter Unit Type Description enable_delay_compensation
[-] bool flag to enable the time delay compensation for the position of the object prediction_time_horizon
[s] double predict time duration for predicted path lateral_control_time_horizon
[s] double time duration for predicted path will reach the reference path (mostly center of the lane) prediction_sampling_delta_time
[s] double sampling time for points in predicted path min_velocity_for_map_based_prediction
[m/s] double apply map-based prediction to the objects with higher velocity than this value min_crosswalk_user_velocity
[m/s] double minimum velocity used when crosswalk user's velocity is calculated max_crosswalk_user_delta_yaw_threshold_for_lanelet
[rad] double maximum yaw difference between crosswalk user and lanelet to use in path prediction for crosswalk users dist_threshold_for_searching_lanelet
[m] double The threshold of the angle used when searching for the lane to which the object belongs delta_yaw_threshold_for_searching_lanelet
[rad] double The threshold of the angle used when searching for the lane to which the object belongs sigma_lateral_offset
[m] double Standard deviation for lateral position of objects sigma_yaw_angle_deg
[deg] double Standard deviation yaw angle of objects object_buffer_time_length
[s] double Time span of object history to store the information history_time_length
[s] double Time span of object information used for prediction prediction_time_horizon_rate_for_validate_shoulder_lane_length
[-] double prediction path will disabled when the estimated path length exceeds lanelet length. This parameter control the estimated path length"},{"location":"perception/map_based_prediction/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The results of the detection are processed by a time series. The main purpose is to give ID and estimate velocity.
"},{"location":"perception/multi_object_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"This multi object tracker consists of data association and EKF.
"},{"location":"perception/multi_object_tracker/#data-association","title":"Data association","text":"The data association performs maximum score matching, called min cost max flow problem. In this package, mussp[1] is used as solver. In addition, when associating observations to tracers, data association have gates such as the area of the object from the BEV, Mahalanobis distance, and maximum distance, depending on the class label.
"},{"location":"perception/multi_object_tracker/#ekf-tracker","title":"EKF Tracker","text":"Models for pedestrians, bicycles (motorcycles), cars and unknown are available. The pedestrian or bicycle tracker is running at the same time as the respective EKF model in order to enable the transition between pedestrian and bicycle tracking. For big vehicles such as trucks and buses, we have separate models for passenger cars and large vehicles because they are difficult to distinguish from passenger cars and are not stable. Therefore, separate models are prepared for passenger cars and big vehicles, and these models are run at the same time as the respective EKF models to ensure stability.
"},{"location":"perception/multi_object_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/multi_object_tracker/#input","title":"Input","text":"Name Type Description~/input
autoware_auto_perception_msgs::msg::DetectedObjects
obstacles"},{"location":"perception/multi_object_tracker/#output","title":"Output","text":"Name Type Description ~/output
autoware_auto_perception_msgs::msg::TrackedObjects
modified obstacles"},{"location":"perception/multi_object_tracker/#parameters","title":"Parameters","text":""},{"location":"perception/multi_object_tracker/#core-parameters","title":"Core Parameters","text":"Node parameters are defined in multi_object_tracker.param.yaml and association parameters are defined in data_association.param.yaml.
"},{"location":"perception/multi_object_tracker/#node-parameters","title":"Node parameters","text":"Name Type Description***_tracker
string EKF tracker name for each class world_frame_id
double object kinematics definition frame enable_delay_compensation
bool if True, tracker use timers to schedule publishers and use prediction step to extrapolate object state at desired timestamp publish_rate
double Timer frequency to output with delay compensation"},{"location":"perception/multi_object_tracker/#association-parameters","title":"Association parameters","text":"Name Type Description can_assign_matrix
double Assignment table for data association max_dist_matrix
double Maximum distance table for data association max_area_matrix
double Maximum area table for data association min_area_matrix
double Minimum area table for data association max_rad_matrix
double Maximum angle table for data association"},{"location":"perception/multi_object_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"See the model explanations.
"},{"location":"perception/multi_object_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/multi_object_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/multi_object_tracker/#evaluation-of-mussp","title":"Evaluation of muSSP","text":"According to our evaluation, muSSP is faster than normal SSP when the matrix size is more than 100.
Execution time for varying matrix size at 95% sparsity. In real data, the sparsity was often around 95%.
Execution time for varying the sparsity with matrix size 100.
"},{"location":"perception/multi_object_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"This package makes use of external code.
| Name | License | Original Repository |
| ---- | ---- | ---- |
| muSSP | Apache-2.0 | https://github.com/yu-lab-vt/muSSP |

[1] C. Wang, Y. Wang, Y. Wang, C.-t. Wu, and G. Yu, “muSSP: Efficient Min-cost Flow Algorithm for Multi-object Tracking,” NeurIPS, 2019
"},{"location":"perception/multi_object_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/multi_object_tracker/models/","title":"Models used in this module","text":""},{"location":"perception/multi_object_tracker/models/#models-used-in-this-module","title":"Models used in this module","text":""},{"location":"perception/multi_object_tracker/models/#tracking-model","title":"Tracking model","text":""},{"location":"perception/multi_object_tracker/models/#ctrv-model-1","title":"CTRV model [1]","text":"CTRV model is a model that assumes constant turn rate and velocity magnitude.
Kinematic bicycle model uses slip angle \\(\\beta\\) and velocity \\(v\\) to calculate yaw update. The merit of using this model is that it can prevent unintended yaw rotation when the vehicle is stopped.
Note that the velocity \\(v_{k}\\) is the norm of the vehicle's velocity, not its longitudinal velocity, so the output twist in the object coordinate \\((x,y)\\) is calculated as follows.
\\[ \\begin{aligned} v_{x} &= v_{k} \\cos \\left(\\beta_{k}\\right) \\\\ v_{y} &= v_{k} \\sin \\left(\\beta_{k}\\right) \\end{aligned} \\]"},{"location":"perception/multi_object_tracker/models/#anchor-point-based-estimation","title":"Anchor point based estimation","text":"To separate the estimation of the position and the shape, we use anchor point based position estimation.
"},{"location":"perception/multi_object_tracker/models/#anchor-point-and-tracking-relationships","title":"Anchor point and tracking relationships","text":"Anchor point is set when the tracking is initialized. Its position is equal to the center of the bounding box of the first tracking bounding box.
The following shows how the anchor point is used in tracking.
Raw detections are converted to the anchor point coordinate, and tracking is performed in that coordinate.
"},{"location":"perception/multi_object_tracker/models/#manage-anchor-point-offset","title":"Manage anchor point offset","text":"Anchor point should be kept in the same position of the object. In other words, the offset value must be adjusted so that the input BBOX and the output BBOX's closest plane to the ego vehicle are at the same position.
"},{"location":"perception/multi_object_tracker/models/#known-limits-drawbacks","title":"Known limits, drawbacks","text":"[1] Schubert, Robin & Richter, Eric & Wanielik, Gerd. (2008). Comparison and evaluation of advanced motion models for vehicle tracking. 1 - 6. 10.1109/ICIF.2008.4632283.
[2] Kong, Jason & Pfeiffer, Mark & Schildbach, Georg & Borrelli, Francesco. (2015). Kinematic and dynamic vehicle models for autonomous driving control design. 1094-1099. 10.1109/IVS.2015.7225830.
"},{"location":"perception/object_merger/","title":"object_merger","text":""},{"location":"perception/object_merger/#object_merger","title":"object_merger","text":""},{"location":"perception/object_merger/#purpose","title":"Purpose","text":"object_merger is a package for merging detected objects from two methods by data association.
"},{"location":"perception/object_merger/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The successive shortest path algorithm is used to solve the data association problem (the minimum-cost flow problem). The cost is calculated by the distance between two objects and gate functions are applied to reset cost, s.t. the maximum distance, the maximum area and the minimum area.
"},{"location":"perception/object_merger/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/object_merger/#input","title":"Input","text":"Name Type Descriptioninput/object0
autoware_auto_perception_msgs::msg::DetectedObjects
detection objects input/object1
autoware_auto_perception_msgs::msg::DetectedObjects
detection objects"},{"location":"perception/object_merger/#output","title":"Output","text":"Name Type Description output/object
autoware_auto_perception_msgs::msg::DetectedObjects
modified Objects"},{"location":"perception/object_merger/#parameters","title":"Parameters","text":"Name Type Description can_assign_matrix
double Assignment table for data association max_dist_matrix
double Maximum distance table for data association max_area_matrix
double Maximum area table for data association min_area_matrix
double Minimum area table for data association max_rad_matrix
double Maximum angle table for data association base_link_frame_id
double association frame distance_threshold_list
std::vector<double>
Distance threshold for each class used in judging overlap. The class order depends on ObjectClassification. generalized_iou_threshold
std::vector<double>
Generalized IoU threshold for each class"},{"location":"perception/object_merger/#tips","title":"Tips","text":"distance_threshold_list
precision_threshold_to_judge_overlapped
generalized_iou_threshold
Data association algorithm was the same as that of multi_object_tracker, but the algorithm of multi_object_tracker was already updated.
"},{"location":"perception/object_range_splitter/","title":"object_range_splitter","text":""},{"location":"perception/object_range_splitter/#object_range_splitter","title":"object_range_splitter","text":""},{"location":"perception/object_range_splitter/#purpose","title":"Purpose","text":"object_range_splitter is a package to divide detected objects into two messages by the distance from the origin.
"},{"location":"perception/object_range_splitter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/object_range_splitter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/object_range_splitter/#input","title":"Input","text":"Name Type Descriptioninput/object
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects"},{"location":"perception/object_range_splitter/#output","title":"Output","text":"Name Type Description output/long_range_object
autoware_auto_perception_msgs::msg::DetectedObjects
long range detected objects output/short_range_object
autoware_auto_perception_msgs::msg::DetectedObjects
short range detected objects"},{"location":"perception/object_range_splitter/#parameters","title":"Parameters","text":"Name Type Description split_range
float the distance boundary to divide detected objects [m]"},{"location":"perception/object_range_splitter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/object_range_splitter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/object_range_splitter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/object_range_splitter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/object_range_splitter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/object_velocity_splitter/","title":"object_velocity_splitter","text":""},{"location":"perception/object_velocity_splitter/#object_velocity_splitter","title":"object_velocity_splitter","text":"This package contains a object filter module for autoware_auto_perception_msgs/msg/DetectedObject. This package can split DetectedObjects into two messages by object's speed.
"},{"location":"perception/object_velocity_splitter/#input","title":"Input","text":"Name Type Description~/input/objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected objects."},{"location":"perception/object_velocity_splitter/#output","title":"Output","text":"Name Type Description ~/output/low_speed_objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Objects with low speed ~/output/high_speed_objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Objects with high speed"},{"location":"perception/object_velocity_splitter/#parameters","title":"Parameters","text":"Name Type Description Default value velocity_threshold
double Velocity threshold parameter to split objects [m/s] 3.0"},{"location":"perception/occupancy_grid_map_outlier_filter/","title":"occupancy_grid_map_outlier_filter","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#occupancy_grid_map_outlier_filter","title":"occupancy_grid_map_outlier_filter","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#purpose","title":"Purpose","text":"This node is an outlier filter based on a occupancy grid map. Depending on the implementation of occupancy grid map, it can be called an outlier filter in time series, since the occupancy grid map expresses the occupancy probabilities in time series.
"},{"location":"perception/occupancy_grid_map_outlier_filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Use the occupancy grid map to separate point clouds into those with low occupancy probability and those with high occupancy probability.
The point clouds that belong to the low occupancy probability are not necessarily outliers. In particular, the top of the moving object tends to belong to the low occupancy probability. Therefore, if use_radius_search_2d_filter
is true, then apply an radius search 2d outlier filter to the point cloud that is determined to have a low occupancy probability.
radius_search_2d_filter/search_radius
) and the number of point clouds. In this case, the point cloud to be referenced is not only low occupancy probability points, but all point cloud including high occupancy probability points.radius_search_2d_filter/min_points_and_distance_ratio
and distance from base link. However, the minimum and maximum number of point clouds is limited.The following video is a sample. Yellow points are high occupancy probability, green points are low occupancy probability which is not an outlier, and red points are outliers. At around 0:15 and 1:16 in the first video, a bird crosses the road, but it is considered as an outlier.
~/input/pointcloud
sensor_msgs/PointCloud2
Obstacle point cloud with ground removed. ~/input/occupancy_grid_map
nav_msgs/OccupancyGrid
A map in which the probability of the presence of an obstacle is occupancy probability map"},{"location":"perception/occupancy_grid_map_outlier_filter/#output","title":"Output","text":"Name Type Description ~/output/pointcloud
sensor_msgs/PointCloud2
Point cloud with outliers removed. trajectory ~/output/debug/outlier/pointcloud
sensor_msgs/PointCloud2
Point clouds removed as outliers. ~/output/debug/low_confidence/pointcloud
sensor_msgs/PointCloud2
Point clouds that had a low probability of occupancy in the occupancy grid map. However, it is not considered as an outlier. ~/output/debug/high_confidence/pointcloud
sensor_msgs/PointCloud2
Point clouds that had a high probability of occupancy in the occupancy grid map. trajectory"},{"location":"perception/occupancy_grid_map_outlier_filter/#parameters","title":"Parameters","text":"Name Type Description map_frame
string map frame id base_link_frame
string base link frame id cost_threshold
int Cost threshold of occupancy grid map (0~100). 100 means 100% probability that there is an obstacle, close to 50 means that it is indistinguishable whether it is an obstacle or free space, 0 means that there is no obstacle. enable_debugger
bool Whether to output the point cloud for debugging. use_radius_search_2d_filter
bool Whether or not to apply density-based outlier filters to objects that are judged to have low probability of occupancy on the occupancy grid map. radius_search_2d_filter/search_radius
float Radius when calculating the density radius_search_2d_filter/min_points_and_distance_ratio
float Threshold value of the number of point clouds per radius when the distance from baselink is 1m, because the number of point clouds varies with the distance from baselink. radius_search_2d_filter/min_points
int Minimum number of point clouds per radius radius_search_2d_filter/max_points
int Maximum number of point clouds per radius"},{"location":"perception/occupancy_grid_map_outlier_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/occupancy_grid_map_outlier_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/probabilistic_occupancy_grid_map/","title":"probabilistic_occupancy_grid_map","text":""},{"location":"perception/probabilistic_occupancy_grid_map/#probabilistic_occupancy_grid_map","title":"probabilistic_occupancy_grid_map","text":""},{"location":"perception/probabilistic_occupancy_grid_map/#purpose","title":"Purpose","text":"This package outputs the probability of having an obstacle as occupancy grid map.
"},{"location":"perception/probabilistic_occupancy_grid_map/#referencesexternal-links","title":"References/External links","text":"Occupancy grid map is generated on map_frame
, and grid orientation is fixed.
You may need to choose scan_origin_frame
and gridmap_origin_frame
which means sensor origin and gridmap origin respectively. Especially, set your main LiDAR sensor frame (e.g. velodyne_top
in sample_vehicle) as a scan_origin_frame
would result in better performance.
Config parameters are managed in config/*.yaml
and here shows its outline.
Additional argument is shown below:
Name Default Descriptionuse_multithread
false
whether to use multithread use_intra_process
false
map_origin
`` parameter to override map_origin_frame
which means grid map origin scan_origin
`` parameter to override scan_origin_frame
which means scanning center output
occupancy_grid
output name use_pointcloud_container
false
container_name
occupancy_grid_map_container
input_obstacle_pointcloud
false
only for laserscan based method. If true, the node subscribe obstacle pointcloud input_obstacle_and_raw_pointcloud
true
only for laserscan based method. If true, the node subscribe both obstacle and raw pointcloud"},{"location":"perception/probabilistic_occupancy_grid_map/#test","title":"Test","text":"This package provides unit tests using gtest
. You can run the test by the following command.
colcon test --packages-select probabilistic_occupancy_grid_map --event-handlers console_direct+\n
Test contains the following.
The basic idea is to take a 2D laserscan and ray trace it to create a time-series processed occupancy grid map.
Optionally, obstacle point clouds and raw point clouds can be received and reflected in the occupancy grid map. The reason is that laserscan only uses the most foreground point in the polar coordinate system, so it throws away a lot of information. As a result, the occupancy grid map is almost an UNKNOWN cell. Therefore, the obstacle point cloud and the raw point cloud are used to reflect what is judged to be the ground and what is judged to be an obstacle in the occupancy grid map. The black and red dots represent raw point clouds, and the red dots represent obstacle point clouds. In other words, the black points are determined as the ground, and the red point cloud is the points determined as obstacles. The gray cells are represented as UNKNOWN cells.
Using the previous occupancy grid map, update the existence probability using a binary Bayesian filter (1). Also, the unobserved cells are time-decayed like the system noise of the Kalman filter (2).
~/input/laserscan
sensor_msgs::LaserScan
laserscan ~/input/obstacle_pointcloud
sensor_msgs::PointCloud2
obstacle pointcloud ~/input/raw_pointcloud
sensor_msgs::PointCloud2
The overall point cloud used to input the obstacle point cloud"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#output","title":"Output","text":"Name Type Description ~/output/occupancy_grid_map
nav_msgs::OccupancyGrid
occupancy grid map"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#parameters","title":"Parameters","text":""},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#node-parameters","title":"Node Parameters","text":"Name Type Description map_frame
string map frame base_link_frame
string base_link frame input_obstacle_pointcloud
bool whether to use the optional obstacle point cloud? If this is true, ~/input/obstacle_pointcloud
topics will be received. input_obstacle_and_raw_pointcloud
bool whether to use the optional obstacle and raw point cloud? If this is true, ~/input/obstacle_pointcloud
and ~/input/raw_pointcloud
topics will be received. use_height_filter
bool whether to height filter for ~/input/obstacle_pointcloud
and ~/input/raw_pointcloud
? By default, the height is set to -1~2m. map_length
double The length of the map. -100 if it is 50~50[m] map_resolution
double The map cell resolution [m]"},{"location":"perception/probabilistic_occupancy_grid_map/laserscan-based-occupancy-grid-map/#assumptions-known-limits","title":"Assumptions / Known limits","text":"In several places we have modified the external code written in BSD3 license.
Bresenham's_line_algorithm
First of all, input obstacle/raw pointcloud are transformed into the polar coordinate centered around scan_origin
and divided int circular bins per angle_increment respectively. At this time, each point belonging to each bin is stored as range data. In addition, the x,y information in the map coordinate is also stored for ray-tracing on the map coordinate. The bin contains the following information for each point
The following figure shows each of the bins from side view.
"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#2nd-step","title":"2nd step","text":"The ray trace is performed in three steps for each cell. The ray trace is done by Bresenham's line algorithm.
Initialize freespace to the farthest point of each bin.
Fill in the unknown cells. Based on the assumption that UNKNOWN
is behind the obstacle, the cells that are more than a distance margin from each obstacle point are filled with UNKNOWN
There are three reasons for setting a distance margin.
When the parameter grid_map_type
is \"OccupancyGridMapProjectiveBlindSpot\" and the scan_origin
is a sensor frame like velodyne_top
for instance, for each obstacle pointcloud, if there are no visible raw pointclouds that are located above the projected ray from the scan_origin
to that obstacle pointcloud, the cells between the obstacle pointcloud and the projected point
are filled with UNKNOWN
. Note that the scan_origin
should not be base_link
if this flag is true because otherwise all the cells behind the obstacle point clouds would be filled with UNKNOWN
.
Fill in the occupied cells. Fill in the point where the obstacle point is located with occupied. In addition, If the distance between obstacle points is less than or equal to the distance margin, that interval is filled with OCCUPIED
because the input may be inaccurate and obstacle points may not be determined as obstacles.
Using the previous occupancy grid map, update the existence probability using a binary Bayesian filter (1). Also, the unobserved cells are time-decayed like the system noise of the Kalman filter (2).
\\[ \\hat{P_{o}} = \\frac{(P_{o} *P_{z})}{(P_{o}* P_{z} + (1 - P_{o}) * \\bar{P_{z}})} \\tag{1} \\] \\[ \\hat{P_{o}} = \\frac{(P_{o} + 0.5 * \\frac{1}{ratio})}{(\\frac{1}{ratio} + 1)} \\tag{2} \\]"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#input","title":"Input","text":"Name Type Description~/input/obstacle_pointcloud
sensor_msgs::PointCloud2
obstacle pointcloud ~/input/raw_pointcloud
sensor_msgs::PointCloud2
The overall point cloud used to input the obstacle point cloud"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#output","title":"Output","text":"Name Type Description ~/output/occupancy_grid_map
nav_msgs::OccupancyGrid
occupancy grid map"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#parameters","title":"Parameters","text":""},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#node-parameters","title":"Node Parameters","text":"Name Type Description map_frame
string map frame base_link_frame
string base_link frame use_height_filter
bool whether to height filter for ~/input/obstacle_pointcloud
and ~/input/raw_pointcloud
? By default, the height is set to -1~2m. map_length
double The length of the map. -100 if it is 50~50[m] map_resolution
double The map cell resolution [m] grid_map_type
string The type of grid map for estimating UNKNOWN
region behind obstacle point clouds"},{"location":"perception/probabilistic_occupancy_grid_map/pointcloud-based-occupancy-grid-map/#assumptions-known-limits","title":"Assumptions / Known limits","text":"In several places we have modified the external code written in BSD3 license.
If grid_map_type
is \"OccupancyGridMapProjectiveBlindSpot\" and pub_debug_grid
is true
, it is possible to check the each process of grid map generation by running
ros2 launch probabilistic_occupancy_grid_map debug.launch.xml\n
and visualizing the following occupancy grid map topics (which are listed in config/grid_map_param.yaml):
/perception/occupancy_grid_map/grid_1st_step
: FREE
cells are filled/perception/occupancy_grid_map/grid_2nd_step
: UNKNOWN
cells are filled/perception/occupancy_grid_map/grid_3rd_step
: OCCUPIED
cells are filledFor simplicity, we use OGM as the meaning of the occupancy grid map.
This package is used to fuse the OGMs from synchronized sensors. Especially for the lidar.
Here shows the example OGM for the this synchronized OGM fusion.
left lidar OGM right lidar OGM top lidar OGMOGM fusion with asynchronous sensor outputs is not suitable for this package. Asynchronous OGM fusion is under construction.
"},{"location":"perception/probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#processing-flow","title":"Processing flow","text":"The processing flow of this package is shown in the following figure.
input_ogm_topics
list of nav_msgs::msg::OccupancyGrid List of input topics for Occupancy Grid Maps. This parameter is given in list, so Output topic name Type Description ~/output/occupancy_grid_map
nav_msgs::msg::OccupancyGrid Output topic name of the fused Occupancy Grid Map. ~/debug/single_frame_map
nav_msgs::msg::OccupancyGrid (debug topic) Output topic name of the single frame fused Occupancy Grid Map."},{"location":"perception/probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#parameters","title":"Parameters","text":"Synchronized OGM fusion node parameters are shown in the following table. Main parameters to be considered in the fusion node is shown as bold.
Ros param name Sample value Description input_ogm_topics [\"topic1\", \"topic2\"] List of input topics for Occupancy Grid Maps input_ogm_reliabilities [0.8, 0.2] Weights for the reliability of each input topic fusion_method \"overwrite\" Method of fusion (\"overwrite\", \"log-odds\", \"dempster-shafer\") match_threshold_sec 0.01 Matching threshold in milliseconds timeout_sec 0.1 Timeout duration in seconds input_offset_sec [0.0, 0.0] Offset time in seconds for each input topic mapframe \"map\" Frame name for the fused map baselink_frame \"base_link\" Frame name for the base link gridmap_origin_frame \"base_link\" Frame name for the origin of the grid map fusion_map_length_x 100.0 Length of the fused map along the X-axis fusion_map_length_y 100.0 Length of the fused map along the Y-axis fusion_map_resolution 0.5 Resolution of the fused mapSince this node assumes that the OGMs from synchronized sensors are generated in the same time, we need to tune the match_threshold_sec
, timeout_sec
and input_offset_sec
parameters to successfully fuse the OGMs.
For the single frame fusion, the following fusion methods are supported.
Fusion Method in parameter Descriptionoverwrite
The value of the cell in the fused OGM is overwritten by the value of the cell in the OGM with the highest priority. We set priority as Occupied
> Free
> Unknown
. log-odds
The value of the cell in the fused OGM is calculated by the log-odds ratio method, which is known as a Bayesian fusion method. The log-odds of a probability \\(p\\) can be written as \\(l_p = \\log(\\frac{p}{1-p})\\). And the fused log-odds is calculated by the sum of log-odds. \\(l_f = \\Sigma l_p\\) dempster-shafer
The value of the cell in the fused OGM is calculated by the Dempster-Shafer theory[1]. This is also popular method to handle multiple evidences. This package applied conflict escape logic in [2] for the performance. See references for the algorithm details. For the multi frame fusion, currently only supporting log-odds
fusion method.
The minimum node launch will be like the following.
<?xml version=\"1.0\"?>\n<launch>\n<arg name=\"output_topic\" default=\"~/output/occupancy_grid_map\"/>\n<arg name=\"fusion_node_param_path\" default=\"$(find-pkg-share probabilistic_occupancy_grid_map)/config/synchronized_grid_map_fusion_node.param.yaml\"/>\n\n<node name=\"synchronized_grid_map_fusion_node\" exec=\"synchronized_grid_map_fusion_node\" pkg=\"probabilistic_occupancy_grid_map\" output=\"screen\">\n<remap from=\"~/output/occupancy_grid_map\" to=\"$(var output_topic)\"/>\n<param from=\"$(var fusion_node_param_path)\"/>\n</node>\n</launch>\n
"},{"location":"perception/probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#optional-generate-ogms-in-each-sensor-frame","title":"(Optional) Generate OGMs in each sensor frame","text":"You need to generate OGMs in each sensor frame before achieving grid map fusion.
probabilistic_occupancy_grid_map
package supports to generate OGMs for the each from the point cloud data.
<include file=\"$(find-pkg-share tier4_perception_launch)/launch/occupancy_grid_map/probabilistic_occupancy_grid_map.launch.xml\">\n<arg name=\"input/obstacle_pointcloud\" value=\"/perception/obstacle_segmentation/single_frame/pointcloud_raw\"/>\n<arg name=\"input/raw_pointcloud\" value=\"/sensing/lidar/right/outlier_filtered/pointcloud_synchronized\"/>\n<arg name=\"output\" value=\"/perception/occupancy_grid_map/right_lidar/map\"/>\n<arg name=\"map_frame\" value=\"base_link\"/>\n<arg name=\"scan_origin\" value=\"velodyne_right\"/>\n<arg name=\"use_intra_process\" value=\"true\"/>\n<arg name=\"use_multithread\" value=\"true\"/>\n<arg name=\"use_pointcloud_container\" value=\"$(var use_pointcloud_container)\"/>\n<arg name=\"pointcloud_container_name\" value=\"$(var pointcloud_container_name)\"/>\n<arg name=\"method\" value=\"pointcloud_based_occupancy_grid_map\"/>\n<arg name=\"param_file\" value=\"$(find-pkg-share probabilistic_occupancy_grid_map)/config/pointcloud_based_occupancy_grid_map_fusion.param.yaml\"/>\n</include>\n\n\nThe minimum parameter for the OGM generation in each frame is shown in the following table.\n\n|Parameter|Description|\n|--|--|\n|`input/obstacle_pointcloud`| The input point cloud data for the OGM generation. This point cloud data should be the point cloud data which is segmented as the obstacle.|\n|`input/raw_pointcloud`| The input point cloud data for the OGM generation. This point cloud data should be the point cloud data which is not segmented as the obstacle. |\n|`output`| The output topic of the OGM. |\n|`map_frame`| The tf frame for the OGM center origin. |\n|`scan_origin`| The tf frame for the sensor origin. |\n|`method`| The method for the OGM generation. Currently we support `pointcloud_based_occupancy_grid_map` and `laser_scan_based_occupancy_grid_map`. The pointcloud based method is recommended. |\n|`param_file`| The parameter file for the OGM generation. See [example parameter file](config/pointcloud_based_occupancy_grid_map_for_fusion.param.yaml) |\n
We recommend to use same map_frame
, size and resolutions for the OGMs from synchronized sensors. Also, remember to set enable_single_frame_mode
and filter_obstacle_pointcloud_by_raw_pointcloud
to true
in the probabilistic_occupancy_grid_map
package (you do not need to set these parameters if you use the above example config file).
We prepared the launch file to run both OGM generation node and fusion node in grid_map_fusion_with_synchronized_pointclouds.launch.py
You can include this launch file like the following.
<include file=\"$(find-pkg-share probabilistic_occupancy_grid_map)/launch/grid_map_fusion_with_synchronized_pointclouds.launch.py\">\n<arg name=\"output\" value=\"/perception/occupancy_grid_map/fusion/map\"/>\n<arg name=\"use_intra_process\" value=\"true\"/>\n<arg name=\"use_multithread\" value=\"true\"/>\n<arg name=\"use_pointcloud_container\" value=\"$(var use_pointcloud_container)\"/>\n<arg name=\"pointcloud_container_name\" value=\"$(var pointcloud_container_name)\"/>\n<arg name=\"method\" value=\"pointcloud_based_occupancy_grid_map\"/>\n<arg name=\"fusion_config_file\" value=\"$(var fusion_config_file)\"/>\n<arg name=\"ogm_config_file\" value=\"$(var ogm_config_file)\"/>\n</include>\n
The minimum parameter for the launch file is shown in the following table.
Parameter Descriptionoutput
The output topic of the finally fused OGM. method
The method for the OGM generation. Currently we support pointcloud_based_occupancy_grid_map
and laser_scan_based_occupancy_grid_map
. The pointcloud based method is recommended. fusion_config_file
The parameter file for the grid map fusion. See example parameter file ogm_config_file
The parameter file for the OGM generation. See example parameter file"},{"location":"perception/probabilistic_occupancy_grid_map/synchronized_grid_map_fusion/#references","title":"References","text":"This package contains a radar noise filter module for autoware_auto_perception_msgs/msg/DetectedObject. This package can filter the noise objects which cross to the ego vehicle.
"},{"location":"perception/radar_crossing_objects_noise_filter/#algorithm","title":"Algorithm","text":""},{"location":"perception/radar_crossing_objects_noise_filter/#background","title":"Background","text":"This package aim to filter the noise objects which cross from the ego vehicle. The reason why these objects are noise is as below.
Radars can get velocity information of objects as doppler velocity, but cannot get vertical velocity to doppler velocity directory. Some radars can output the objects with not only doppler velocity but also vertical velocity by estimation. If the vertical velocity estimation is poor, it leads to output noise objects. In other words, the above situation is that the objects which has vertical twist viewed from ego vehicle can tend to be noise objects.
The example is below figure. Velocity estimation fails on static objects, resulting in ghost objects crossing in front of ego vehicles.
When the ego vehicle turns around, the radars outputting at the object level sometimes fail to estimate the twist of objects correctly even if radar_tracks_msgs_converter compensates by the ego vehicle twist. So if an object detected by radars has circular motion viewing from base_link, it is likely that the speed is estimated incorrectly and that the object is a static object.
The example is below figure. When the ego vehicle turn right, the surrounding objects have left circular motion.
"},{"location":"perception/radar_crossing_objects_noise_filter/#detail-algorithm","title":"Detail Algorithm","text":"To filter the objects crossing to ego vehicle, this package filter the objects as below algorithm.
// If velocity of an object is rather than the velocity_threshold,\n// and crossing_yaw is near to vertical\n// angle_threshold < crossing_yaw < pi - angle_threshold\nif (\nvelocity > node_param_.velocity_threshold &&\nabs(std::cos(crossing_yaw)) < abs(std::cos(node_param_.angle_threshold))) {\n// Object is noise object;\n} else {\n// Object is not noise object;\n}\n
"},{"location":"perception/radar_crossing_objects_noise_filter/#input","title":"Input","text":"Name Type Description ~/input/objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Radar objects."},{"location":"perception/radar_crossing_objects_noise_filter/#output","title":"Output","text":"Name Type Description ~/output/noise_objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Noise objects ~/output/filtered_objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Filtered objects"},{"location":"perception/radar_crossing_objects_noise_filter/#parameters","title":"Parameters","text":"Name Type Description Default value angle_threshold
double The angle threshold parameter to filter [rad]. This parameter has condition that 0 < angle_threshold
< pi / 2. See algorithm chapter for details. 1.0472 velocity_threshold
double The velocity threshold parameter to filter [m/s]. See algorithm chapter for details. 3.0"},{"location":"perception/radar_fusion_to_detected_object/","title":"radar_fusion_to_detected_object","text":""},{"location":"perception/radar_fusion_to_detected_object/#radar_fusion_to_detected_object","title":"radar_fusion_to_detected_object","text":"This package contains a sensor fusion module for radar-detected objects and 3D detected objects. The fusion node can:
The document of core algorithm is here
"},{"location":"perception/radar_fusion_to_detected_object/#parameters-for-sensor-fusion","title":"Parameters for sensor fusion","text":"Name Type Description Default value bounding_box_margin double The distance to extend the 2D bird's-eye view Bounding Box on each side. This distance is used as a threshold to find radar centroids falling inside the extended box. [m] 2.0 split_threshold_velocity double The object's velocity threshold to decide to split for two objects from radar information (currently not implemented) [m/s] 5.0 threshold_yaw_diff double The yaw orientation threshold. If \u2223 \u03b8_ob \u2212 \u03b8_ra \u2223 < threshold \u00d7 yaw_diff attached to radar information include estimated velocity, where\u03b8obis yaw angle from 3d detected object,*\u03b8_ra is yaw angle from radar object. [rad] 0.35"},{"location":"perception/radar_fusion_to_detected_object/#weight-parameters-for-velocity-estimation","title":"Weight parameters for velocity estimation","text":"To tune these weight parameters, please see document in detail.
Name Type Description Default value velocity_weight_average double The twist coefficient of average twist of radar data in velocity estimation. 0.0 velocity_weight_median double The twist coefficient of median twist of radar data in velocity estimation. 0.0 velocity_weight_min_distance double The twist coefficient of radar data nearest to the center of bounding box in velocity estimation. 1.0 velocity_weight_target_value_average double The twist coefficient of target value weighted average in velocity estimation. Target value is amplitude if using radar pointcloud. Target value is probability if using radar objects. 0.0 velocity_weight_target_value_top double The twist coefficient of top target value radar data in velocity estimation. Target value is amplitude if using radar pointcloud. Target value is probability if using radar objects. 0.0"},{"location":"perception/radar_fusion_to_detected_object/#parameters-for-fixed-object-information","title":"Parameters for fixed object information","text":"Name Type Description Default value convert_doppler_to_twist bool Convert doppler velocity to twist using the yaw information of a detected object. false threshold_probability float If the probability of an output object is lower than this parameter, and the output object does not have radar points/objects, then delete the object. 0.4 compensate_probability bool If this parameter is true, compensate probability of objects to threshold probability. false"},{"location":"perception/radar_fusion_to_detected_object/#radar_object_fusion_to_detected_object","title":"radar_object_fusion_to_detected_object","text":"Sensor fusion with radar objects and a detected object.
ros2 launch radar_fusion_to_detected_object radar_object_to_detected_object.launch.xml\n
"},{"location":"perception/radar_fusion_to_detected_object/#input","title":"Input","text":"Name Type Description ~/input/objects
autoware_auto_perception_msgs/msg/DetectedObject.msg 3D detected objects. ~/input/radar_objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Radar objects. Note that frame_id need to be same as ~/input/objects
"},{"location":"perception/radar_fusion_to_detected_object/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected object with twist. ~/debug/low_confidence_objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg 3D detected object that doesn't output as ~/output/objects
because of low confidence"},{"location":"perception/radar_fusion_to_detected_object/#parameters","title":"Parameters","text":"Name Type Description Default value update_rate_hz double The update rate [hz]. 20.0"},{"location":"perception/radar_fusion_to_detected_object/#radar_scan_fusion_to_detected_object-tbd","title":"radar_scan_fusion_to_detected_object (TBD)","text":"TBD
"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/","title":"Algorithm","text":""},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#common-algorithm","title":"Common Algorithm","text":""},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#1-link-between-3d-bounding-box-and-radar-data","title":"1. Link between 3d bounding box and radar data","text":"Choose radar pointcloud/objects within 3D bounding box from lidar-base detection with margin space from bird's-eye view.
"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#2-feature-support-split-the-object-going-in-a-different-direction","title":"2. [Feature support] Split the object going in a different direction","text":"Estimate twist from chosen radar pointcloud/objects using twist and target value (Target value is amplitude if using radar pointcloud. Target value is probability if using radar objects). First, the estimation function calculate
Second, the estimation function calculate weighted average of these list. Third, twist information of estimated twist is attached to an object.
"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#4-feature-support-option-convert-doppler-velocity-to-twist","title":"4. [Feature support] [Option] Convert doppler velocity to twist","text":"If the twist information of radars is doppler velocity, convert from doppler velocity to twist using yaw angle of DetectedObject. Because radar pointcloud has only doppler velocity information, radar pointcloud fusion should use this feature. On the other hand, because radar objects have twist information, radar object fusion should not use this feature.
"},{"location":"perception/radar_fusion_to_detected_object/docs/algorithm/#5-delete-objects-with-low-probability","title":"5. Delete objects with low probability","text":"This package contains a radar object clustering for autoware_auto_perception_msgs/msg/DetectedObject input.
This package can make clustered objects from radar DetectedObjects, the objects which is converted from RadarTracks by radar_tracks_msgs_converter and is processed by noise filter. In other word, this package can combine multiple radar detections from one object into one and adjust class and size.
"},{"location":"perception/radar_object_clustering/#algorithm","title":"Algorithm","text":""},{"location":"perception/radar_object_clustering/#background","title":"Background","text":"In radars with object output, there are cases that multiple detection results are obtained from one object, especially for large vehicles such as trucks and trailers. Its multiple detection results cause separation of objects in tracking module. Therefore, by this package the multiple detection results are clustered into one object in advance.
"},{"location":"perception/radar_object_clustering/#detail-algorithm","title":"Detail Algorithm","text":"base_link
At first, to prevent changing the result from depending on the order of objects in DetectedObjects, input objects are sorted by distance from base_link
. In addition, to apply matching in closeness order considering occlusion, objects are sorted in order of short distance in advance.
If two radar objects are near, and yaw angle direction and velocity between two radar objects is similar (the degree of these is defined by parameters), then these are clustered. Note that radar characteristic affect parameters for this matching. For example, if resolution of range distance or angle is low and accuracy of velocity is high, then distance_threshold
parameter should be bigger and should set matching that strongly looks at velocity similarity.
After grouping for all radar objects, if multiple radar objects are grouping, the kinematics of the new clustered object is calculated from average of that and label and shape of the new clustered object is calculated from top confidence in radar objects.
When the label information from radar outputs lack accuracy, is_fixed_label
parameter is recommended to set true
. If the parameter is true, the label of a clustered object is overwritten by the label set by fixed_label
parameter. If this package use for faraway dynamic object detection with radar, the parameter is recommended to set to VEHICLE
.
When the size information from radar outputs lack accuracy, is_fixed_size
parameter is recommended to set true
. If the parameter is true, the size of a clustered object is overwritten by the label set by size_x
, size_y
, and size_z
parameters. If this package use for faraway dynamic object detection with radar, the parameter is recommended to set to size_x
, size_y
, size_z
, as average of vehicle size. Note that to use for multi_objects_tracker, the size parameters need to exceed min_area_matrix
parameters of it.
For now, size estimation for clustered object is not implemented. So is_fixed_size
parameter is recommended to set true
, and size parameters is recommended to set to value near to average size of vehicles.
~/input/objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Radar objects."},{"location":"perception/radar_object_clustering/#output","title":"Output","text":"Name Type Description ~/output/objects
autoware_auto_perception_msgs/msg/DetectedObjects.msg Output objects"},{"location":"perception/radar_object_clustering/#parameters","title":"Parameters","text":"Name Type Description Default value angle_threshold
double Angle threshold to judge whether radar detections come from one object. [rad] 0.174 distance_threshold
double Distance threshold to judge whether radar detections come from one object. [m] 4.0 velocity_threshold
double Velocity threshold to judge whether radar detections come from one object. [m/s] 2.0 is_fixed_label
bool If this parameter is true, the label of a clustered object is overwritten by the label set by fixed_label
parameter. false fixed_label
string If is_fixed_label
is true, the label of a clustered object is overwritten by this parameter. \"UNKNOWN\" is_fixed_size
bool If this parameter is true, the size of a clustered object is overwritten by the label set by size_x
, size_y
, and size_z
parameters. false size_x
double If is_fixed_size
is true, the x-axis size of a clustered object is overwritten by this parameter. [m] 4.0 size_y
double If is_fixed_size
is true, the y-axis size of a clustered object is overwritten by this parameter. [m] 1.5 size_z
double If is_fixed_size
is true, the z-axis size of a clustered object is overwritten by this parameter. [m] 1.5"},{"location":"perception/radar_object_tracker/","title":"Radar Object Tracker","text":""},{"location":"perception/radar_object_tracker/#radar-object-tracker","title":"Radar Object Tracker","text":""},{"location":"perception/radar_object_tracker/#purpose","title":"Purpose","text":"This package provides a radar object tracking node that processes sequences of detected objects to assign consistent identities to them and estimate their velocities.
"},{"location":"perception/radar_object_tracker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"This radar object tracker is a combination of data association and tracking algorithms.
"},{"location":"perception/radar_object_tracker/#data-association","title":"Data Association","text":"The data association algorithm matches detected objects to existing tracks.
"},{"location":"perception/radar_object_tracker/#tracker-models","title":"Tracker Models","text":"The tracker models used in this package vary based on the class of the detected object. See more details in the models.md.
"},{"location":"perception/radar_object_tracker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/radar_object_tracker/#input","title":"Input","text":"Name Type Description~/input
autoware_auto_perception_msgs::msg::DetectedObjects
Detected objects /vector/map
autoware_auto_msgs::msg::HADMapBin
Map data"},{"location":"perception/radar_object_tracker/#output","title":"Output","text":"Name Type Description ~/output
autoware_auto_perception_msgs::msg::TrackedObjects
Tracked objects"},{"location":"perception/radar_object_tracker/#parameters","title":"Parameters","text":""},{"location":"perception/radar_object_tracker/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description publish_rate
double 10.0 The rate at which to publish the output messages world_frame_id
string \"map\" The frame ID of the world coordinate system enable_delay_compensation
bool false Whether to enable delay compensation. If set to true
, output topic is published by timer with publish_rate
. tracking_config_directory
string \"./config/tracking/\" The directory containing the tracking configuration files enable_logging
bool false Whether to enable logging logging_file_path
string \"/tmp/association_log.json\" The path to the file where logs should be written tracker_lifetime
double 1.0 The lifetime of the tracker in seconds use_distance_based_noise_filtering
bool true Whether to use distance based filtering minimum_range_threshold
double 70.0 Minimum distance threshold for filtering in meters use_map_based_noise_filtering
bool true Whether to use map based filtering max_distance_from_lane
double 5.0 Maximum distance from lane for filtering in meters max_angle_diff_from_lane
double 0.785398 Maximum angle difference from lane for filtering in radians max_lateral_velocity
double 5.0 Maximum lateral velocity for filtering in m/s can_assign_matrix
array An array of integers used in the data association algorithm max_dist_matrix
array An array of doubles used in the data association algorithm max_area_matrix
array An array of doubles used in the data association algorithm min_area_matrix
array An array of doubles used in the data association algorithm max_rad_matrix
array An array of doubles used in the data association algorithm min_iou_matrix
array An array of doubles used in the data association algorithm See more details in the models.md.
"},{"location":"perception/radar_object_tracker/#tracker-parameters","title":"Tracker parameters","text":"Currently, this package supports the following trackers:
linear_motion_tracker
constant_turn_rate_motion_tracker
Default settings for each tracker are defined in ./config/tracking/ and described in models.md.
"},{"location":"perception/radar_object_tracker/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/radar_object_tracker/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/radar_object_tracker/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/radar_object_tracker/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/radar_object_tracker/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/radar_object_tracker/models/","title":"models","text":""},{"location":"perception/radar_object_tracker/models/#models","title":"models","text":"Tracking models can be chosen from the ros parameter ~tracking_model
:
Each model has its own parameters, which can be set via the ROS parameter server.
noise model
This is just an idea at present; it has not been implemented yet.
\\[ \\begin{align} x_{k+1} &= x_k + \\frac{v_k}{\\omega_k} (\\sin(\\theta_k + \\omega_k dt) - \\sin(\\theta_k)) \\\\ y_{k+1} &= y_k + \\frac{v_k}{\\omega_k} (\\cos(\\theta_k) - \\cos(\\theta_k + \\omega_k dt)) \\\\ v_{k+1} &= v_k \\\\ \\theta_{k+1} &= \\theta_k + \\omega_k dt \\\\ \\omega_{k+1} &= \\omega_k \\end{align} \\]"},{"location":"perception/radar_object_tracker/models/#noise-filtering","title":"Noise filtering","text":"Radar sensors often produce noisy measurements, so the following filters are used to reduce false positive objects.
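As a minimal sketch (not the node's implementation), the prediction step of the constant-turn-rate motion model above can be written as follows, with a guard for the near-zero yaw rate case where the model degenerates to straight-line motion:

#include <cmath>

struct CtrvState { double x, y, v, theta, omega; };

// One prediction step of the constant turn rate and velocity (CTRV) model.
CtrvState predict(const CtrvState & s, double dt)
{
  CtrvState n = s;
  if (std::abs(s.omega) > 1e-6) {
    n.x = s.x + s.v / s.omega * (std::sin(s.theta + s.omega * dt) - std::sin(s.theta));
    n.y = s.y + s.v / s.omega * (std::cos(s.theta) - std::cos(s.theta + s.omega * dt));
  } else {
    // straight-line limit as the yaw rate approaches zero
    n.x = s.x + s.v * std::cos(s.theta) * dt;
    n.y = s.y + s.v * std::sin(s.theta) * dt;
  }
  n.theta = s.theta + s.omega * dt;  // v and omega stay constant
  return n;
}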
The figure below shows the current noise filtering process.
"},{"location":"perception/radar_object_tracker/models/#minimum-range-filter","title":"minimum range filter","text":"In most cases, Radar sensors are used with other sensors such as LiDAR and Camera, and Radar sensors are used to detect objects far away. So we can filter out objects that are too close to the sensor.
use_distance_based_noise_filtering
parameter is used to enable/disable this filter, and minimum_range_threshold
parameter is used to set the threshold.
With lanelet map information, we can filter out false positive objects that are unlikely to be important obstacles.
We filter out objects that satisfy the following conditions:
Each condition can be set by the following parameters (a filtering sketch follows this list):
max_distance_from_lane
max_angle_diff_from_lane
max_lateral_velocity
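A minimal sketch of this map-based check, assuming the distance to the lane, the lane yaw, and the lateral velocity have already been computed from the lanelet map (the struct and function names are illustrative):

#include <cmath>

struct ObjectOnMap { double dist_from_lane, yaw, lane_yaw, lateral_velocity; };

// True if the object should be treated as map-based noise.
bool is_map_noise(
  const ObjectOnMap & o, double max_distance_from_lane,
  double max_angle_diff_from_lane, double max_lateral_velocity)
{
  if (o.dist_from_lane > max_distance_from_lane) return true;  // too far from any lane
  const double dyaw = std::abs(std::remainder(o.yaw - o.lane_yaw, 2.0 * M_PI));
  if (dyaw > max_angle_diff_from_lane) return true;            // misaligned with the lane direction
  if (std::abs(o.lateral_velocity) > max_lateral_velocity) return true;  // implausible lateral speed
  return false;
}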
This package converts from radar_msgs/msg/RadarTracks into autoware_auto_perception_msgs/msg/DetectedObject and autoware_auto_perception_msgs/msg/TrackedObject.
Autoware uses radar_msgs/msg/RadarTracks.msg as the input data for radar objects. To make radar object data easy to use in the Autoware perception module, radar_tracks_msgs_converter
converts the message type from radar_msgs/msg/RadarTracks.msg
to autoware_auto_perception_msgs/msg/DetectedObject
. In addition, because many detection modules assume input in the base_link frame, radar_tracks_msgs_converter
provides a frame_id transformation function.
Radar_tracks_msgs_converter
converts the label from radar_msgs/msg/RadarTrack.msg
to the Autoware label. Label IDs are defined as below.
Additional vendor-specific classifications are permitted starting from 32000 in radar_msgs/msg/RadarTrack.msg. The Autoware object labels are defined in ObjectClassification.idl
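A minimal sketch of such a label conversion is shown below; the numeric radar classification values are hypothetical placeholders in the vendor-specific range (32000 and above), not the node's actual ID assignments:

#include <cstdint>

enum class AutowareLabel : uint8_t { UNKNOWN, CAR, TRUCK, BUS, BICYCLE, PEDESTRIAN };

AutowareLabel convert_label(uint16_t radar_classification)
{
  switch (radar_classification) {
    case 32001: return AutowareLabel::CAR;         // hypothetical vendor-specific ID
    case 32002: return AutowareLabel::TRUCK;       // hypothetical vendor-specific ID
    case 32005: return AutowareLabel::BICYCLE;     // hypothetical vendor-specific ID
    case 32006: return AutowareLabel::PEDESTRIAN;  // hypothetical vendor-specific ID
    default: return AutowareLabel::UNKNOWN;        // unmapped IDs fall back to UNKNOWN
  }
}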
"},{"location":"perception/radar_tracks_msgs_converter/#interface","title":"Interface","text":""},{"location":"perception/radar_tracks_msgs_converter/#input","title":"Input","text":"~/input/radar_objects
(radar_msgs/msg/RadarTracks.msg
)~/input/odometry
(nav_msgs/msg/Odometry.msg
)~/output/radar_detected_objects
(autoware_auto_perception_msgs/msg/DetectedObject.idl
)~/output/radar_tracked_objects
(autoware_auto_perception_msgs/msg/TrackedObject.idl
)update_rate_hz
(double) [hz]This parameter is the update rate for the onTimer
function. It should match the frame rate of the input topics.
new_frame_id
(string)This parameter is the header frame_id of the output topic.
use_twist_compensation
(bool)This parameter is the flag to compensate for the linear motion of the ego vehicle's twist. If the parameter is true, then the twist in the output objects' topic is compensated by the ego vehicle's linear motion.
use_twist_yaw_compensation
(bool)This parameter is the flag to compensate for the yaw rotation of the ego vehicle's twist. If the parameter is true, then the ego motion compensation also considers the yaw motion of the ego vehicle (see the sketch after this parameter list).
static_object_speed_threshold
(float) [m/s]This parameter is the threshold to determine the flag is_stationary
. If the velocity is lower than this parameter, the flag is_stationary
of DetectedObject is set to true
and the object is treated as static.
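The following minimal sketch illustrates the ego-motion compensation controlled by use_twist_compensation and use_twist_yaw_compensation, assuming the radar reports object velocity relative to the moving sensor in the base_link frame; it is an illustration of the idea, not the node's exact code:

struct Vec2 { double x, y; };

// Compensate a relative object velocity by the ego vehicle's own motion.
Vec2 compensate_twist(
  Vec2 object_vel_rel, Vec2 object_pos, double ego_vx, double ego_wz,
  bool use_linear, bool use_yaw)
{
  Vec2 v = object_vel_rel;
  if (use_linear) v.x += ego_vx;    // ego moves along its own x axis
  if (use_yaw) {
    v.x += -ego_wz * object_pos.y;  // (omega x r).x, velocity induced by ego yaw rate
    v.y += ego_wz * object_pos.x;   // (omega x r).y
  }
  return v;
}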
This node calculates a refined object shape (bounding box, cylinder, convex hull) in which a pointcloud cluster fits according to a label.
"},{"location":"perception/shape_estimation/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/shape_estimation/#fitting-algorithms","title":"Fitting algorithms","text":"bounding box
L-shape fitting; see the reference below for details and the sketch after this list.
cylinder
cv::minEnclosingCircle
convex hull
cv::convexHull
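A minimal sketch of search-based rectangle fitting is shown below: sweep candidate headings, project the cluster into each rotated frame, and keep the heading that yields the smallest bounding rectangle. Note that the cited paper evaluates richer criteria (closeness and variance) than the simple area criterion used here:

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Pt { double x, y; };

// Return the heading (radians) of the best-fitting rectangle for a 2D cluster.
double fit_rectangle_yaw(const std::vector<Pt> & pts, double step = M_PI / 180.0)
{
  double best_yaw = 0.0;
  double best_area = std::numeric_limits<double>::max();
  for (double yaw = 0.0; yaw < M_PI / 2.0; yaw += step) {  // rectangle symmetry: 90 deg suffices
    const double c = std::cos(yaw), s = std::sin(yaw);
    double min_u = std::numeric_limits<double>::max(), max_u = -min_u;
    double min_v = min_u, max_v = -min_u;
    for (const auto & p : pts) {
      const double u = c * p.x + s * p.y;   // projection onto one edge direction
      const double v = -s * p.x + c * p.y;  // projection onto the orthogonal edge
      min_u = std::min(min_u, u); max_u = std::max(max_u, u);
      min_v = std::min(min_v, v); max_v = std::max(max_v, v);
    }
    const double area = (max_u - min_u) * (max_v - min_v);
    if (area < best_area) { best_area = area; best_yaw = yaw; }
  }
  return best_yaw;
}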
input
tier4_perception_msgs::msg::DetectedObjectsWithFeature
detected objects with labeled cluster"},{"location":"perception/shape_estimation/#output","title":"Output","text":"Name Type Description output/objects
autoware_auto_perception_msgs::msg::DetectedObjects
detected objects with refined shape"},{"location":"perception/shape_estimation/#parameters","title":"Parameters","text":"Name Type Description Default Range use_corrector boolean The flag to apply rule-based corrector. true N/A use_filter boolean The flag to apply rule-based filter true N/A use_vehicle_reference_yaw boolean The flag to use vehicle reference yaw for corrector false N/A use_vehicle_reference_shape_size boolean The flag to use vehicle reference shape size false N/A use_boost_bbox_optimizer boolean The flag to use boost bbox optimizer false N/A"},{"location":"perception/shape_estimation/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD
"},{"location":"perception/shape_estimation/#referencesexternal-links","title":"References/External links","text":"L-shape fitting implementation of the paper:
@conference{Zhang-2017-26536,\nauthor = {Xiao Zhang and Wenda Xu and Chiyu Dong and John M. Dolan},\ntitle = {Efficient L-Shape Fitting for Vehicle Detection Using Laser Scanners},\nbooktitle = {2017 IEEE Intelligent Vehicles Symposium},\nyear = {2017},\nmonth = {June},\nkeywords = {autonomous driving, laser scanner, perception, segmentation},\n}\n
"},{"location":"perception/simple_object_merger/","title":"simple_object_merger","text":""},{"location":"perception/simple_object_merger/#simple_object_merger","title":"simple_object_merger","text":"This package can merge multiple topics of autoware_auto_perception_msgs/msg/DetectedObject with low calculation cost.
"},{"location":"perception/simple_object_merger/#design","title":"Design","text":""},{"location":"perception/simple_object_merger/#background","title":"Background","text":"Object_merger is mainly used for merge process with DetectedObjects. There are 2 characteristics in Object_merger
. First, object_merger
solve data association algorithm like Hungarian algorithm for matching problem, but it needs computational cost. Second, object_merger
can handle only 2 DetectedObjects topics and cannot handle more than 2 topics in one node. To merge 6 DetectedObjects topics, 6 object_merger
nodes need to stand for now.
Therefore, simple_object_merger
aims to merge multiple DetectedObjects with low calculation cost. The package does not use a data association algorithm, which keeps the computational cost low, and it can handle more than 2 topics in one node to avoid launching a large number of nodes.
Simple_object_merger
can be used for detection with multiple radars. By combining multiple radar topics into one topic, the pipeline for faraway detection with radar becomes simpler.
During initialization, merged objects are not published until data has been received from all topics. In addition, to handle dropped or delayed sensor data, this package has a parameter for timeout judgement. When the latest data of a topic is older than the timeout parameter, that topic is not merged into the output objects. Under the current specification, if data from all topics is received at first and a topic's data subsequently drops, the merged objects are published without the objects of the topic judged as timed out. The timeout parameter should be determined from the sensor cycle time.
Because this package does not perform matching, the output may contain overlapping objects depending on the input objects. Therefore, the output objects should be used only with post-processing; for now, clustering can be used as post-processing. A sketch of the merging idea follows.
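A minimal sketch of the merging idea, with message types simplified: because no data association is performed, merging is essentially concatenating the object arrays of all non-timed-out topics into one output array:

#include <cmath>
#include <vector>

struct DetectedObject { /* pose, twist, classification, ... */ };
struct DetectedObjects { double stamp_sec; std::vector<DetectedObject> objects; };

DetectedObjects merge(const std::vector<DetectedObjects> & inputs, double timeout_threshold)
{
  DetectedObjects out;
  out.stamp_sec = inputs.front().stamp_sec;  // the first topic defines the output stamp
  for (const auto & in : inputs) {
    // skip topics whose latest data is too old relative to the first topic
    if (std::abs(in.stamp_sec - out.stamp_sec) >= timeout_threshold) continue;
    out.objects.insert(out.objects.end(), in.objects.begin(), in.objects.end());
  }
  return out;
}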
"},{"location":"perception/simple_object_merger/#interface","title":"Interface","text":""},{"location":"perception/simple_object_merger/#input","title":"Input","text":"Input topics is defined by the parameter of input_topics
(List[string]). The type of input topics is std::vector<autoware_auto_perception_msgs/msg/DetectedObjects.msg>
.
~/output/objects
(autoware_auto_perception_msgs/msg/DetectedObjects.msg
)update_rate_hz
(double) [hz]This parameter is the update rate for the onTimer
function. It should match the frame rate of the input topics.
new_frame_id
(string)This parameter is the header frame_id of the output topic. If the output topics are used by the perception module, it should be set to "base_link"
timeout_threshold
(double) [s]This parameter is the threshold for timeout judgement. If the time difference between the first topic of input_topics
and an input topic exceeds this parameter, then the objects of that topic are not merged into the output objects.
for (size_t i = 0; i < input_topic_size; i++) {\n  double time_diff = rclcpp::Time(objects_data_.at(i)->header.stamp).seconds() -\n    rclcpp::Time(objects_data_.at(0)->header.stamp).seconds();\n  if (std::abs(time_diff) < node_param_.timeout_threshold) {\n    // merge objects\n  }\n}\n
input_topics
(List[string])This parameter is the names of the input topics. For example, when this package is used for radar objects, \"[/sensing/radar/front_center/detected_objects, /sensing/radar/front_left/detected_objects, /sensing/radar/rear_left/detected_objects, /sensing/radar/rear_center/detected_objects, /sensing/radar/rear_right/detected_objects, /sensing/radar/front_right/detected_objects]\"
can be set. For now, the time difference is calculated from the header time between the first topic of input_topics
and the other input topics, so the topic with the most important objects to detect should be set first in the input_topics
list.
This package classifies arbitrary categories using TensorRT for efficient and faster inference. Specifically, it optimizes preprocessing for efficient inference on embedded platforms. Moreover, dynamic batched inference on GPUs and DLAs is supported.
"},{"location":"perception/tensorrt_yolo/","title":"tensorrt_yolo","text":""},{"location":"perception/tensorrt_yolo/#tensorrt_yolo","title":"tensorrt_yolo","text":""},{"location":"perception/tensorrt_yolo/#purpose","title":"Purpose","text":"This package detects 2D bounding boxes for target objects e.g., cars, trucks, bicycles, and pedestrians on a image based on YOLO(You only look once) model.
"},{"location":"perception/tensorrt_yolo/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/tensorrt_yolo/#cite","title":"Cite","text":"yolov3
Redmon, J., & Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.
yolov4
Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
yolov5
Jocher, G., et al. (2021). ultralytics/yolov5: v6.0 - YOLOv5n 'Nano' models, Roboflow integration, TensorFlow export, OpenCV DNN support (v6.0). Zenodo. https://doi.org/10.5281/zenodo.5563715
"},{"location":"perception/tensorrt_yolo/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/tensorrt_yolo/#input","title":"Input","text":"Name Type Descriptionin/image
sensor_msgs/Image
The input image"},{"location":"perception/tensorrt_yolo/#output","title":"Output","text":"Name Type Description out/objects
tier4_perception_msgs/DetectedObjectsWithFeature
The detected objects with 2D bounding boxes out/image
sensor_msgs/Image
The image with 2D bounding boxes for visualization"},{"location":"perception/tensorrt_yolo/#parameters","title":"Parameters","text":""},{"location":"perception/tensorrt_yolo/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description anchors
double array [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0] The anchors to create bounding box candidates scale_x_y
double array [1.0, 1.0, 1.0] The scale parameter to eliminate grid sensitivity score_thresh
double 0.1 If the objectness score is less than this value, the object is ignored in yolo layer. iou_thresh
double 0.45 The iou threshold for NMS method detections_per_im
int 100 The maximum detection number for one frame use_darknet_layer
bool true The flag to use yolo layer in darknet ignore_thresh
double 0.5 If the output score is less than this value, the object is ignored."},{"location":"perception/tensorrt_yolo/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description data_path
string \"\" Packages data and artifacts directory path onnx_file
string \"\" The onnx file name for yolo model engine_file
string \"\" The tensorrt engine file name for yolo model label_file
string \"\" The label file with label names for detected objects written on it calib_image_directory
string \"\" The directory name including calibration images for int8 inference calib_cache_file
string \"\" The calibration cache file for int8 inference mode
string \"FP32\" The inference mode: \"FP32\", \"FP16\", \"INT8\" gpu_id
int 0 GPU device ID that runs the model"},{"location":"perception/tensorrt_yolo/#assumptions-known-limits","title":"Assumptions / Known limits","text":"This package includes multiple licenses.
"},{"location":"perception/tensorrt_yolo/#onnx-model","title":"Onnx model","text":"All YOLO ONNX models are converted from the officially trained model. If you need information about training datasets and conditions, please refer to the official repositories.
All models are downloaded during env preparation by ansible (as mentioned in the installation). It is also possible to download them manually; see Manual downloading of artifacts. When launching the node with a model for the first time, the model is automatically converted to TensorRT, although this may take some time.
"},{"location":"perception/tensorrt_yolo/#yolov3","title":"YOLOv3","text":"YOLOv3: Converted from darknet weight file and conf file.
YOLOv4: Converted from darknet weight file and conf file.
YOLOv4-tiny: Converted from darknet weight file and conf file.
Refer to this guide
This package detects target objects e.g., cars, trucks, bicycles, and pedestrians on a image based on YOLOX model.
"},{"location":"perception/tensorrt_yolox/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"perception/tensorrt_yolox/#cite","title":"Cite","text":"Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun, \"YOLOX: Exceeding YOLO Series in 2021\", arXiv preprint arXiv:2107.08430, 2021 [ref]
"},{"location":"perception/tensorrt_yolox/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/tensorrt_yolox/#input","title":"Input","text":"Name Type Descriptionin/image
sensor_msgs/Image
The input image"},{"location":"perception/tensorrt_yolox/#output","title":"Output","text":"Name Type Description out/objects
tier4_perception_msgs/DetectedObjectsWithFeature
The detected objects with 2D bounding boxes out/image
sensor_msgs/Image
The image with 2D bounding boxes for visualization"},{"location":"perception/tensorrt_yolox/#parameters","title":"Parameters","text":""},{"location":"perception/tensorrt_yolox/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_threshold
float 0.3 If the objectness score is less than this value, the object is ignored in yolox layer. nms_threshold
float 0.7 The IoU threshold for the NMS method NOTE: These two parameters are only valid for "plain" models (described later). A sketch of the NMS this threshold controls follows.
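For reference, a minimal sketch of the greedy IoU-based NMS that nms_threshold controls is shown below (in models with the EfficientNMS_TRT module described later, this step runs inside the TensorRT engine instead):

#include <algorithm>
#include <vector>

struct Box { float x1, y1, x2, y2, score; };

// Intersection-over-union of two axis-aligned boxes.
static float iou(const Box & a, const Box & b)
{
  const float ix = std::max(0.f, std::min(a.x2, b.x2) - std::max(a.x1, b.x1));
  const float iy = std::max(0.f, std::min(a.y2, b.y2) - std::max(a.y1, b.y1));
  const float inter = ix * iy;
  const float uni = (a.x2 - a.x1) * (a.y2 - a.y1) + (b.x2 - b.x1) * (b.y2 - b.y1) - inter;
  return uni > 0.f ? inter / uni : 0.f;
}

std::vector<Box> nms(std::vector<Box> boxes, float iou_thresh)
{
  // keep the highest-scoring box, drop boxes overlapping it above the threshold, repeat
  std::sort(boxes.begin(), boxes.end(),
    [](const Box & a, const Box & b) { return a.score > b.score; });
  std::vector<Box> kept;
  for (const auto & b : boxes) {
    bool suppressed = false;
    for (const auto & k : kept) {
      if (iou(b, k) > iou_thresh) { suppressed = true; break; }
    }
    if (!suppressed) kept.push_back(b);
  }
  return kept;
}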
"},{"location":"perception/tensorrt_yolox/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Descriptionmodel_path
string \"\" The onnx file name for yolox model label_path
string \"\" The label file with label names for detected objects written on it precision
string \"fp16\" The inference mode: \"fp32\", \"fp16\", \"int8\" build_only
bool false shutdown node after TensorRT engine file is built calibration_algorithm
string \"MinMax\" Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: Entropy\",(\"Legacy\" | \"Percentile\"), \"MinMax\"] dla_core_id
int -1 If positive ID value is specified, the node assign inference task to the DLA core quantize_first_layer
bool false If true, set the operating precision for the first (input) layer to be fp16. This option is valid only when precision==int8 quantize_last_layer
bool false If true, set the operating precision for the last (output) layer to be fp16. This option is valid only when precision==int8 profile_per_layer
bool false If true, profiler function will be enabled. Since the profile function may affect execution speed, it is recommended to set this flag true only for development purpose. clip_value
double 0.0 If positive value is specified, the value of each layer output will be clipped between [0.0, clip_value]. This option is valid only when precision==int8 and used to manually specify the dynamic range instead of using any calibration preprocess_on_gpu
bool true If true, pre-processing is performed on GPU calibration_image_list_path
string \"\" Path to a file which contains path to images. Those images will be used for int8 quantization."},{"location":"perception/tensorrt_yolox/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The label contained in detected 2D bounding boxes (i.e., out/objects
) will be either one of the followings:
If other labels (case insensitive) are contained in the file specified via the label_file
parameter, those are labeled as UNKNOWN
, while detected rectangles are drawn in the visualization result (out/image
).
A sample model (named yolox-tiny.onnx
) is downloaded by ansible script on env preparation stage, if not, please, follow Manual downloading of artifacts. To accelerate Non-maximum-suppression (NMS), which is one of the common post-process after object detection inference, EfficientNMS_TRT
module is attached after the ordinal YOLOX (tiny) network. The EfficientNMS_TRT
module contains fixed values for score_threshold
and nms_threshold
in it, hence these parameters are ignored when users specify ONNX models including this module.
This package accepts both EfficientNMS_TRT
attached ONNXs and models published from the official YOLOX repository (we referred to them as \"plain\" models).
In addition to yolox-tiny.onnx
, a custom model named yolox-sPlus-opt.onnx
is either available. This model is based on YOLOX-s and tuned to perform more accurate detection with almost comparable execution speed with yolox-tiny
. To get better results with this model, users are recommended to use some specific running arguments such as precision:=int8
, calibration_algorithm:=Entropy
, clip_value:=6.0
. Users can refer launch/yolox_sPlus_opt.launch.xml
to see how this model can be used.
All models are automatically converted to TensorRT format. These converted files will be saved in the same directory as specified ONNX files with .engine
filename extension and reused from the next run. The conversion process may take a while (typically 10 to 20 minutes) and the inference process is blocked until complete the conversion, so it will take some time until detection results are published (even until appearing in the topic list) on the first run
To convert users' own model that saved in PyTorch's pth
format into ONNX, users can exploit the converter offered by the official repository. For the convenience, only procedures are described below. Please refer the official document for more detail.
Install dependency
git clone git@github.com:Megvii-BaseDetection/YOLOX.git\ncd YOLOX\npython3 setup.py develop --user\n
Convert pth into ONNX
python3 tools/export_onnx.py \\\n--output-name YOUR_YOLOX.onnx \\\n-f YOUR_YOLOX.py \\\n-c YOUR_YOLOX.pth\n
Install dependency
git clone git@github.com:Megvii-BaseDetection/YOLOX.git\ncd YOLOX\npython3 setup.py develop --user\npip3 install git+ssh://git@github.com/wep21/yolox_onnx_modifier.git --user\n
Convert pth into ONNX
python3 tools/export_onnx.py \\\n--output-name YOUR_YOLOX.onnx \\\n-f YOUR_YOLOX.py \\\n-c YOUR_YOLOX.pth\n --decode_in_inference\n
Embed EfficientNMS_TRT
to the end of YOLOX
yolox_onnx_modifier YOUR_YOLOX.onnx -o YOUR_YOLOX_WITH_NMS.onnx\n
A sample label file (named label.txt
)is also downloaded automatically during env preparation process (NOTE: This file is incompatible with models that output labels for the COCO dataset (e.g., models from the official YOLOX repository)).
This file represents the correspondence between class index (integer outputted from YOLOX network) and class label (strings making understanding easier). This package maps class IDs (incremented from 0) with labels according to the order in this file.
"},{"location":"perception/tensorrt_yolox/#reference-repositories","title":"Reference repositories","text":"This package try to merge two tracking objects from different sensor.
"},{"location":"perception/tracking_object_merger/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Merging tracking objects from different sensor is a combination of data association and state fusion algorithms.
Detailed process depends on the merger policy.
"},{"location":"perception/tracking_object_merger/#decorative_tracker_merger","title":"decorative_tracker_merger","text":"In decorative_tracker_merger, we assume there are dominant tracking objects and sub tracking objects. The name decorative
means that sub tracking objects are used to complement the main objects.
Usually the dominant tracking objects are from LiDAR and sub tracking objects are from Radar or Camera.
Here show the processing pipeline.
"},{"location":"perception/tracking_object_merger/#time-sync","title":"time sync","text":"Sub object(Radar or Camera) often has higher frequency than dominant object(LiDAR). So we need to sync the time of sub object to dominant object.
"},{"location":"perception/tracking_object_merger/#data-association","title":"data association","text":"In the data association, we use the following rules to determine whether two tracking objects are the same object.
distance gate
: distance between two tracking objectsangle gate
: angle between two tracking objectsmahalanobis_distance_gate
: Mahalanobis distance between two tracking objectsmin_iou_gate
: minimum IoU between two tracking objectsmax_velocity_gate
: maximum velocity difference between two tracking objectsSub tracking objects are merged into dominant tracking objects.
Depends on the tracklet input sensor state, we update the tracklet state with different rules.
state\\priority 1st 2nd 3rd Kinematics except velocity LiDAR Radar Camera Forward velocity Radar LiDAR Camera Object classification Camera LiDAR Radar"},{"location":"perception/tracking_object_merger/#tracklet-management","title":"tracklet management","text":"We use the existence_probability
to manage tracklet.
existence_probability
to \\(p_{sensor}\\) value.existence_probability
to \\(p_{sensor}\\) value.existence_probability
by decay_rate
existence_probability
is larger than publish_probability_threshold
existence_probability
is smaller than remove_probability_threshold
These parameter can be set in config/decorative_tracker_merger.param.yaml
.
tracker_state_parameter:\nremove_probability_threshold: 0.3\npublish_probability_threshold: 0.6\ndefault_lidar_existence_probability: 0.7\ndefault_radar_existence_probability: 0.6\ndefault_camera_existence_probability: 0.6\ndecay_rate: 0.1\nmax_dt: 1.0\n
"},{"location":"perception/tracking_object_merger/#inputparameters","title":"input/parameters","text":"topic name message type description ~input/main_object
autoware_auto_perception_msgs::TrackedObjects
Dominant tracking objects. Output will be published with this dominant object stamps. ~input/sub_object
autoware_auto_perception_msgs::TrackedObjects
Sub tracking objects. output/object
autoware_auto_perception_msgs::TrackedObjects
Merged tracking objects. debug/interpolated_sub_object
autoware_auto_perception_msgs::TrackedObjects
Interpolated sub tracking objects. Default parameters are set in config/decorative_tracker_merger.param.yaml.
parameter name description default valuebase_link_frame_id
base link frame id. This is used to transform the tracking object. \"base_link\" time_sync_threshold
time sync threshold. If the time difference between two tracking objects is smaller than this value, we consider these two tracking objects are the same object. 0.05 sub_object_timeout_sec
sub object timeout. If the sub object is not updated for this time, we consider this object is not exist. 0.5 main_sensor_type
main sensor type. This is used to determine the dominant tracking object. \"lidar\" sub_sensor_type
sub sensor type. This is used to determine the sub tracking object. \"radar\" tracker_state_parameter
tracker state parameter. This is used to manage the tracklet. tracker_state_parameter
is described in tracklet managementAs explained in tracklet management, this tracker merger tend to maintain the both input tracking objects.
If there are many false positive tracking objects,
default_<sensor>_existence_probability
of that sensordecay_rate
publish_probability_threshold
to publish only reliable tracking objectsThis is future work.
"},{"location":"perception/traffic_light_arbiter/","title":"traffic_light_arbiter","text":""},{"location":"perception/traffic_light_arbiter/#traffic_light_arbiter","title":"traffic_light_arbiter","text":""},{"location":"perception/traffic_light_arbiter/#purpose","title":"Purpose","text":"This package receives traffic signals from perception and external (e.g., V2X) components and combines them using either a confidence-based or a external-preference based approach.
"},{"location":"perception/traffic_light_arbiter/#trafficlightarbiter","title":"TrafficLightArbiter","text":"A node that merges traffic light/signal state from image recognition and external (e.g., V2X) systems to provide to a planning component.
"},{"location":"perception/traffic_light_arbiter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_arbiter/#input","title":"Input","text":"Name Type Description ~/sub/vector_map autoware_auto_mapping_msgs::msg::HADMapBin The vector map to get valid traffic signal ids. ~/sub/perception_traffic_signals autoware_perception_msgs::msg::TrafficSignalArray The traffic signals from the image recognition pipeline. ~/sub/external_traffic_signals autoware_perception_msgs::msg::TrafficSignalArray The traffic signals from an external system."},{"location":"perception/traffic_light_arbiter/#output","title":"Output","text":"Name Type Description ~/pub/traffic_signals autoware_perception_msgs::msg::TrafficSignalArray The merged traffic signal state."},{"location":"perception/traffic_light_arbiter/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_arbiter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Descriptionexternal_time_tolerance
double 5.0 The duration in seconds an external message is considered valid for merging perception_time_tolerance
double 1.0 The duration in seconds a perception message is considered valid for merging external_priority
bool false Whether or not externals signals take precedence over perception-based ones. If false, the merging uses confidence as a criteria"},{"location":"perception/traffic_light_classifier/","title":"traffic_light_classifier","text":""},{"location":"perception/traffic_light_classifier/#traffic_light_classifier","title":"traffic_light_classifier","text":""},{"location":"perception/traffic_light_classifier/#purpose","title":"Purpose","text":"traffic_light_classifier is a package for classifying traffic light labels using cropped image around a traffic light. This package has two classifier models: cnn_classifier
and hsv_classifier
.
Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. Totally 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here:
Name Input Size Test Accuracy EfficientNet-b1 128 x 128 99.76% MobileNet-v2 224 x 224 99.81%"},{"location":"perception/traffic_light_classifier/#hsv_classifier","title":"hsv_classifier","text":"Traffic light colors (green, yellow and red) are classified in HSV model.
"},{"location":"perception/traffic_light_classifier/#about-label","title":"About Label","text":"The message type is designed to comply with the unified road signs proposed at the Vienna Convention. This idea has been also proposed in Autoware.Auto.
There are rules for naming labels that nodes receive. One traffic light is represented by the following character string separated by commas. color1-shape1, color2-shape2
.
For example, the simple red and red cross traffic light label must be expressed as \"red-circle, red-cross\".
These colors and shapes are assigned to the message as follows:
"},{"location":"perception/traffic_light_classifier/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"perception/traffic_light_classifier/#input","title":"Input","text":"Name Type Description~/input/image
sensor_msgs::msg::Image
input image ~/input/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
rois of traffic lights"},{"location":"perception/traffic_light_classifier/#output","title":"Output","text":"Name Type Description ~/output/traffic_signals
tier4_perception_msgs::msg::TrafficSignalArray
classified signals ~/output/debug/image
sensor_msgs::msg::Image
image for debugging"},{"location":"perception/traffic_light_classifier/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_classifier/#node-parameters","title":"Node Parameters","text":"Name Type Description classifier_type
int if the value is 1
, cnn_classifier is used data_path
str packages data and artifacts directory path backlight_threshold
float If the intensity get grater than this overwrite with UNKNOWN in corresponding RoI. Note that, if the value is much higher, the node only overwrites in the harsher backlight situations. Therefore, If you wouldn't like to use this feature set this value to 1.0
. The value can be [0.0, 1.0]
. The confidence of overwritten signal is set to 0.0
."},{"location":"perception/traffic_light_classifier/#core-parameters","title":"Core Parameters","text":""},{"location":"perception/traffic_light_classifier/#cnn_classifier_1","title":"cnn_classifier","text":"Name Type Description classifier_label_path
str path to the model file classifier_model_path
str path to the label file classifier_precision
str TensorRT precision, fp16
or int8
classifier_mean
vector\\ 3-channel input image mean classifier_std
vector\\ 3-channel input image std apply_softmax
bool whether or not to apply softmax"},{"location":"perception/traffic_light_classifier/#hsv_classifier_1","title":"hsv_classifier","text":"Name Type Description green_min_h
int the minimum hue of green color green_min_s
int the minimum saturation of green color green_min_v
int the minimum value (brightness) of green color green_max_h
int the maximum hue of green color green_max_s
int the maximum saturation of green color green_max_v
int the maximum value (brightness) of green color yellow_min_h
int the minimum hue of yellow color yellow_min_s
int the minimum saturation of yellow color yellow_min_v
int the minimum value (brightness) of yellow color yellow_max_h
int the maximum hue of yellow color yellow_max_s
int the maximum saturation of yellow color yellow_max_v
int the maximum value (brightness) of yellow color red_min_h
int the minimum hue of red color red_min_s
int the minimum saturation of red color red_min_v
int the minimum value (brightness) of red color red_max_h
int the maximum hue of red color red_max_s
int the maximum saturation of red color red_max_v
int the maximum value (brightness) of red color"},{"location":"perception/traffic_light_classifier/#training-traffic-light-classifier-model","title":"Training Traffic Light Classifier Model","text":""},{"location":"perception/traffic_light_classifier/#overview","title":"Overview","text":"This guide provides detailed instructions on training a traffic light classifier model using the mmlab/mmpretrain repository and deploying it using mmlab/mmdeploy. If you wish to create a custom traffic light classifier model with your own dataset, please follow the steps outlined below.
"},{"location":"perception/traffic_light_classifier/#data-preparation","title":"Data Preparation","text":""},{"location":"perception/traffic_light_classifier/#use-sample-dataset","title":"Use Sample Dataset","text":"Autoware offers a sample dataset that illustrates the training procedures for traffic light classification. This dataset comprises 1045 images categorized into red, green, and yellow labels. To utilize this sample dataset, please download it from link and extract it to a designated folder of your choice.
"},{"location":"perception/traffic_light_classifier/#use-your-custom-dataset","title":"Use Your Custom Dataset","text":"To train a traffic light classifier, adopt a structured subfolder format where each subfolder represents a distinct class. Below is an illustrative dataset structure example;
DATASET_ROOT\n \u251c\u2500\u2500 TRAIN\n \u2502 \u251c\u2500\u2500 RED\n \u2502 \u2502 \u251c\u2500\u2500 001.png\n \u2502 \u2502 \u251c\u2500\u2500 002.png\n \u2502 \u2502 \u2514\u2500\u2500 ...\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500 GREEN\n \u2502 \u2502 \u251c\u2500\u2500 001.png\n \u2502 \u2502 \u251c\u2500\u2500 002.png\n \u2502 \u2502 \u2514\u2500\u2500...\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500 YELLOW\n \u2502 \u2502 \u251c\u2500\u2500 001.png\n \u2502 \u2502 \u251c\u2500\u2500 002.png\n \u2502 \u2502 \u2514\u2500\u2500...\n \u2502 \u2514\u2500\u2500 ...\n \u2502\n \u251c\u2500\u2500 VAL\n \u2502 \u2514\u2500\u2500...\n \u2502\n \u2502\n \u2514\u2500\u2500 TEST\n \u2514\u2500\u2500 ...\n
"},{"location":"perception/traffic_light_classifier/#installation","title":"Installation","text":""},{"location":"perception/traffic_light_classifier/#prerequisites","title":"Prerequisites","text":"Step 1. Download and install Miniconda from the official website.
Step 2. Create a conda virtual environment and activate it
conda create --name tl-classifier python=3.8 -y\nconda activate tl-classifier\n
Step 3. Install PyTorch
Please ensure you have PyTorch installed, compatible with CUDA 11.6, as it is a requirement for current Autoware
conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia\n
"},{"location":"perception/traffic_light_classifier/#install-mmlabmmpretrain","title":"Install mmlab/mmpretrain","text":"Step 1. Install mmpretrain from source
cd ~/\ngit clone https://github.com/open-mmlab/mmpretrain.git\ncd mmpretrain\npip install -U openmim && mim install -e .\n
"},{"location":"perception/traffic_light_classifier/#training","title":"Training","text":"MMPretrain offers a training script that is controlled through a configuration file. Leveraging an inheritance design pattern, you can effortlessly tailor the training script using Python files as configuration files.
In the example, we demonstrate the training steps on the MobileNetV2 model, but you have the flexibility to employ alternative classification models such as EfficientNetV2, EfficientNetV3, ResNet, and more.
"},{"location":"perception/traffic_light_classifier/#create-a-config-file","title":"Create a config file","text":"Generate a configuration file for your preferred model within the configs
folder
touch ~/mmpretrain/configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py\n
Open the configuration file in your preferred text editor and make a copy of the provided content. Adjust the data_root variable to match the path of your dataset. You are welcome to customize the configuration parameters for the model, dataset, and scheduler to suit your preferences
# Inherit model, schedule and default_runtime from base model\n_base_ = [\n '../_base_/models/mobilenet_v2_1x.py',\n '../_base_/schedules/imagenet_bs256_epochstep.py',\n '../_base_/default_runtime.py'\n]\n\n# Set the number of classes to the model\n# You can also change other model parameters here\n# For detailed descriptions of model parameters, please refer to link below\n# (Customize model)[https://mmpretrain.readthedocs.io/en/latest/advanced_guides/modules.html]\nmodel = dict(head=dict(num_classes=3, topk=(1, 3)))\n\n# Set max epochs and validation interval\ntrain_cfg = dict(by_epoch=True, max_epochs=50, val_interval=5)\n\n# Set optimizer and lr scheduler\noptim_wrapper = dict(\n optimizer=dict(type='SGD', lr=0.001, momentum=0.9))\nparam_scheduler = dict(type='StepLR', by_epoch=True, step_size=1, gamma=0.98)\n\ndataset_type = 'CustomDataset'\ndata_root = \"/PATH/OF/YOUR/DATASET\"\n\n# Customize data preprocessing and dataloader pipeline for training set\n# These parameters calculated for the sample dataset\ndata_preprocessor = dict(\n mean=[0.2888 * 256, 0.2570 * 256, 0.2329 * 256],\n std=[0.2106 * 256, 0.2037 * 256, 0.1864 * 256],\n num_classes=3,\n to_rgb=True,\n)\n\n# Customize data preprocessing and dataloader pipeline for train set\n# For detailed descriptions of data pipeline, please refer to link below\n# (Customize data pipeline)[https://mmpretrain.readthedocs.io/en/latest/advanced_guides/pipeline.html]\ntrain_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='Resize', scale=224),\n dict(type='RandomFlip', prob=0.5, direction='horizontal'),\n dict(type='PackInputs'),\n]\ntrain_dataloader = dict(\n dataset=dict(\n type=dataset_type,\n data_root=data_root,\n ann_file='',\n data_prefix='train',\n with_label=True,\n pipeline=train_pipeline,\n ),\n num_workers=8,\n batch_size=32,\n sampler=dict(type='DefaultSampler', shuffle=True)\n)\n\n# Customize data preprocessing and dataloader pipeline for test set\ntest_pipeline = [\n dict(type='LoadImageFromFile'),\n dict(type='Resize', scale=224),\n dict(type='PackInputs'),\n]\n\n# Customize data preprocessing and dataloader pipeline for validation set\nval_cfg = dict()\nval_dataloader = dict(\n dataset=dict(\n type=dataset_type,\n data_root=data_root,\n ann_file='',\n data_prefix='val',\n with_label=True,\n pipeline=test_pipeline,\n ),\n num_workers=8,\n batch_size=32,\n sampler=dict(type='DefaultSampler', shuffle=True)\n)\n\nval_evaluator = dict(topk=(1, 3,), type='Accuracy')\n\ntest_dataloader = val_dataloader\ntest_evaluator = val_evaluator\n
"},{"location":"perception/traffic_light_classifier/#start-training","title":"Start training","text":"cd ~/mmpretrain\npython tools/train.py configs/mobilenet_v2/mobilenet-v2_8xb32_custom.py\n
Training logs and weights will be saved in the work_dirs/mobilenet-v2_8xb32_custom
folder.
The 'mmdeploy' toolset is designed for deploying your trained model onto various target devices. With its capabilities, you can seamlessly convert PyTorch models into the ONNX format.
# Activate your conda environment\nconda activate tl-classifier\n\n# Install mmenigne and mmcv\nmim install mmengine\nmim install \"mmcv>=2.0.0rc2\"\n\n# Install mmdeploy\npip install mmdeploy==1.2.0\n\n# Support onnxruntime\npip install mmdeploy-runtime==1.2.0\npip install mmdeploy-runtime-gpu==1.2.0\npip install onnxruntime-gpu==1.8.1\n\n#Clone mmdeploy repository\ncd ~/\ngit clone -b main https://github.com/open-mmlab/mmdeploy.git\n
"},{"location":"perception/traffic_light_classifier/#convert-pytorch-model-to-onnx-model_1","title":"Convert PyTorch model to ONNX model","text":"cd ~/mmdeploy\n\n# Run deploy.py script\n# deploy.py script takes 5 main arguments with these order; config file path, train config file path,\n# checkpoint file path, demo image path, and work directory path\npython tools/deploy.py \\\n~/mmdeploy/configs/mmpretrain/classification_onnxruntime_static.py\\\n~/mmpretrain/configs/mobilenet_v2/train_mobilenet_v2.py \\\n~/mmpretrain/work_dirs/train_mobilenet_v2/epoch_300.pth \\\n/SAMPLE/IAMGE/DIRECTORY \\\n--work-dir mmdeploy_model/mobilenet_v2\n
Converted ONNX model will be saved in the mmdeploy/mmdeploy_model/mobilenet_v2
folder.
After obtaining your onnx model, update parameters defined in the launch file (e.g. model_file_path
, label_file_path
, input_h
, input_w
...). Note that, we only support labels defined in tier4_perception_msgs::msg::TrafficLightElement.
[1] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, \"MobileNetV2: Inverted Residuals and Linear Bottlenecks,\" 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.
[2] Tan, Mingxing, and Quoc Le. \"EfficientNet: Rethinking model scaling for convolutional neural networks.\" International conference on machine learning. PMLR, 2019.
"},{"location":"perception/traffic_light_classifier/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"perception/traffic_light_fine_detector/","title":"traffic_light_fine_detector","text":""},{"location":"perception/traffic_light_fine_detector/#traffic_light_fine_detector","title":"traffic_light_fine_detector","text":""},{"location":"perception/traffic_light_fine_detector/#purpose","title":"Purpose","text":"It is a package for traffic light detection using YoloX-s.
"},{"location":"perception/traffic_light_fine_detector/#training-information","title":"Training Information","text":""},{"location":"perception/traffic_light_fine_detector/#pretrained-model","title":"Pretrained Model","text":"The model is based on YOLOX and the pretrained model could be downloaded from here.
"},{"location":"perception/traffic_light_fine_detector/#training-data","title":"Training Data","text":"The model was fine-tuned on around 17,000 TIER IV internal images of Japanese traffic lights.
"},{"location":"perception/traffic_light_fine_detector/#trained-onnx-model","title":"Trained Onnx model","text":"You can download the ONNX file using these instructions. Please visit autoware-documentation for more information.
"},{"location":"perception/traffic_light_fine_detector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Based on the camera image and the global ROI array detected by map_based_detection
node, a CNN-based detection method enables highly accurate traffic light detection.
~/input/image
sensor_msgs/Image
The full size camera image ~/input/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
The array of ROIs detected by map_based_detector ~/expect/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
The array of ROIs detected by map_based_detector without any offset"},{"location":"perception/traffic_light_fine_detector/#output","title":"Output","text":"Name Type Description ~/output/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
The detected accurate rois ~/debug/exe_time_ms
tier4_debug_msgs::msg::Float32Stamped
The time taken for inference"},{"location":"perception/traffic_light_fine_detector/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_fine_detector/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description fine_detector_score_thresh
double 0.3 If the objectness score is less than this value, the object is ignored fine_detector_nms_thresh
double 0.65 IoU threshold to perform Non-Maximum Suppression"},{"location":"perception/traffic_light_fine_detector/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description data_path
string \"$(env HOME)/autoware_data\" packages data and artifacts directory path fine_detector_model_path
string \"\" The onnx file name for yolo model fine_detector_label_path
string \"\" The label file with label names for detected objects written on it fine_detector_precision
string \"fp32\" The inference mode: \"fp32\", \"fp16\" approximate_sync
bool false Flag for whether to use approximate sync policy"},{"location":"perception/traffic_light_fine_detector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/traffic_light_fine_detector/#reference-repositories","title":"Reference repositories","text":"
traffic_light_map_based_detector
Package","text":""},{"location":"perception/traffic_light_map_based_detector/#overview","title":"Overview","text":"traffic_light_map_based_detector
calculates where the traffic lights will appear in the image based on the HD map.
Calibration and vibration errors can be entered as parameters, and the size of the detected RegionOfInterest will change according to the error.
If the node receives route information, it only looks at traffic lights on that route. If the node receives no route information, it looks at a radius of 200 meters and the angle between the traffic light and the camera is less than 40 degrees.
"},{"location":"perception/traffic_light_map_based_detector/#input-topics","title":"Input topics","text":"Name Type Description~input/vector_map
autoware_auto_mapping_msgs::HADMapBin vector map ~input/camera_info
sensor_msgs::CameraInfo target camera parameter ~input/route
autoware_planning_msgs::LaneletRoute optional: route"},{"location":"perception/traffic_light_map_based_detector/#output-topics","title":"Output topics","text":"Name Type Description ~output/rois
tier4_perception_msgs::TrafficLightRoiArray location of traffic lights in image corresponding to the camera info ~expect/rois
tier4_perception_msgs::TrafficLightRoiArray location of traffic lights in image without any offset ~debug/markers
visualization_msgs::MarkerArray visualization to debug"},{"location":"perception/traffic_light_map_based_detector/#node-parameters","title":"Node parameters","text":"Parameter Type Description max_vibration_pitch
double Maximum error in pitch direction. If -5~+5, it will be 10. max_vibration_yaw
double Maximum error in yaw direction. If -5~+5, it will be 10. max_vibration_height
double Maximum error in height direction. If -5~+5, it will be 10. max_vibration_width
double Maximum error in width direction. If -5~+5, it will be 10. max_vibration_depth
double Maximum error in depth direction. If -5~+5, it will be 10. max_detection_range
double Maximum detection range in meters. Must be positive min_timestamp_offset
double Minimum timestamp offset when searching for corresponding tf max_timestamp_offset
double Maximum timestamp offset when searching for corresponding tf timestamp_sample_len
double sampling length between min_timestamp_offset and max_timestamp_offset"},{"location":"perception/traffic_light_multi_camera_fusion/","title":"The `traffic_light_multi_camera_fusion` Package","text":""},{"location":"perception/traffic_light_multi_camera_fusion/#the-traffic_light_multi_camera_fusion-package","title":"The traffic_light_multi_camera_fusion
Package","text":""},{"location":"perception/traffic_light_multi_camera_fusion/#overview","title":"Overview","text":"traffic_light_multi_camera_fusion
performs traffic light signal fusion which can be summarized as the following two tasks:
For every camera, the following three topics are subscribed:
Name Type Description~/<camera_namespace>/camera_info
sensor_msgs::CameraInfo camera info from traffic_light_map_based_detector ~/<camera_namespace>/rois
tier4_perception_msgs::TrafficLightRoiArray detection roi from traffic_light_fine_detector ~/<camera_namespace>/traffic_signals
tier4_perception_msgs::TrafficLightSignalArray classification result from traffic_light_classifier You don't need to configure these topics manually. Just provide the camera_namespaces
parameter and the node will automatically extract the <camera_namespace>
and create the subscribers.
~/output/traffic_signals
autoware_perception_msgs::TrafficLightSignalArray traffic light signal fusion result"},{"location":"perception/traffic_light_multi_camera_fusion/#node-parameters","title":"Node parameters","text":"Parameter Type Description camera_namespaces
vector\\ Camera Namespaces to be fused message_lifespan
double The maximum timestamp span to be fused approximate_sync
bool Whether work in Approximate Synchronization Mode perform_group_fusion
bool Whether perform Group Fusion"},{"location":"perception/traffic_light_occlusion_predictor/","title":"The `traffic_light_occlusion_predictor` Package","text":""},{"location":"perception/traffic_light_occlusion_predictor/#the-traffic_light_occlusion_predictor-package","title":"The traffic_light_occlusion_predictor
Package","text":""},{"location":"perception/traffic_light_occlusion_predictor/#overview","title":"Overview","text":"traffic_light_occlusion_predictor
receives the detected traffic lights rois and calculates the occlusion ratios of each roi with point cloud.
For each traffic light roi, hundreds of pixels would be selected and projected into the 3D space. Then from the camera point of view, the number of projected pixels that are occluded by the point cloud is counted and used for calculating the occlusion ratio for the roi. As shown in follow image, the red pixels are occluded and the occlusion ratio is the number of red pixels divided by the total pixel numbers.
If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0.
"},{"location":"perception/traffic_light_occlusion_predictor/#input-topics","title":"Input topics","text":"Name Type Description~input/vector_map
autoware_auto_mapping_msgs::HADMapBin vector map ~/input/rois
autoware_auto_perception_msgs::TrafficLightRoiArray traffic light detections ~input/camera_info
sensor_msgs::CameraInfo target camera parameter ~/input/cloud
sensor_msgs::PointCloud2 LiDAR point cloud"},{"location":"perception/traffic_light_occlusion_predictor/#output-topics","title":"Output topics","text":"Name Type Description ~/output/occlusion
autoware_auto_perception_msgs::TrafficLightOcclusionArray occlusion ratios of each roi"},{"location":"perception/traffic_light_occlusion_predictor/#node-parameters","title":"Node parameters","text":"Parameter Type Description azimuth_occlusion_resolution_deg
double azimuth resolution of LiDAR point cloud (degree) elevation_occlusion_resolution_deg
double elevation resolution of LiDAR point cloud (degree) max_valid_pt_dist
double The points within this distance would be used for calculation max_image_cloud_delay
double The maximum delay between LiDAR point cloud and camera image max_wait_t
double The maximum time waiting for the LiDAR point cloud"},{"location":"perception/traffic_light_ssd_fine_detector/","title":"traffic_light_ssd_fine_detector","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#traffic_light_ssd_fine_detector","title":"traffic_light_ssd_fine_detector","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#purpose","title":"Purpose","text":"It is a package for traffic light detection using MobileNetV2 and SSDLite.
"},{"location":"perception/traffic_light_ssd_fine_detector/#training-information","title":"Training Information","text":"NOTE:
pytorch
or mmdetection
in dnn_header_type
.input
boxes
.scores
.The model is based on pytorch-ssd and the pretrained model could be downloaded from here.
"},{"location":"perception/traffic_light_ssd_fine_detector/#training-data","title":"Training Data","text":"The model was fine-tuned on 1750 TIER IV internal images of Japanese traffic lights.
"},{"location":"perception/traffic_light_ssd_fine_detector/#trained-onnx-model","title":"Trained Onnx model","text":"In order to train models and export onnx model, we recommend open-mmlab/mmdetection. Please follow the official document to install and experiment with mmdetection. If you get into troubles, FAQ page would help you.
The following steps are example of a quick-start.
"},{"location":"perception/traffic_light_ssd_fine_detector/#step-0-install-mmcv-and-mim","title":"step 0. Install MMCV and MIM","text":"NOTE :
In order to install mmcv suitable for your CUDA version, install it specifying a url.
# Install mim\n$ pip install -U openmim\n\n# Install mmcv on a machine with CUDA11.6 and PyTorch1.13.0\n$ pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu116/torch1.13/index.html\n
"},{"location":"perception/traffic_light_ssd_fine_detector/#step-1-install-mmdetection","title":"step 1. Install MMDetection","text":"You can install mmdetection as a Python package or from source.
# As a Python package\n$ pip install mmdet\n\n# From source\n$ git clone https://github.com/open-mmlab/mmdetection.git\n$ cd mmdetection\n$ pip install -v -e .\n
"},{"location":"perception/traffic_light_ssd_fine_detector/#step-2-train-your-model","title":"step 2. Train your model","text":"Train model with your experiment configuration file. For the details of config file, see here.
# [] is optional, you can start training from pre-trained checkpoint\n$ mim train mmdet YOUR_CONFIG.py [--resume-from YOUR_CHECKPOINT.pth]\n
"},{"location":"perception/traffic_light_ssd_fine_detector/#step-3-export-onnx-model","title":"step 3. Export onnx model","text":"In exporting onnx, use mmdetection/tools/deployment/pytorch2onnx.py
or open-mmlab/mmdeploy. NOTE:
cd ~/mmdetection/tools/deployment\npython3 pytorch2onnx.py YOUR_CONFIG.py ...\n
"},{"location":"perception/traffic_light_ssd_fine_detector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Based on the camera image and the global ROI array detected by map_based_detection
node, a CNN-based detection method enables highly accurate traffic light detection.
~/input/image
sensor_msgs/Image
The full size camera image ~/input/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
The array of ROIs detected by map_based_detector"},{"location":"perception/traffic_light_ssd_fine_detector/#output","title":"Output","text":"Name Type Description ~/output/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
The detected accurate rois ~/debug/exe_time_ms
tier4_debug_msgs::msg::Float32Stamped
The time taken for inference"},{"location":"perception/traffic_light_ssd_fine_detector/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description score_thresh
double 0.7 If the objectness score is less than this value, the object is ignored mean
std::vector [0.5,0.5,0.5] Average value of the normalized values of the image data used for training std
std::vector [0.5,0.5,0.5] Standard deviation of the normalized values of the image data used for training"},{"location":"perception/traffic_light_ssd_fine_detector/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description data_path
string \"$(env HOME)/autoware_data\" packages data and artifacts directory path onnx_file
string \"$(var data_path)/traffic_light_ssd_fine_detector/mb2-ssd-lite-tlr.onnx\" The onnx file name for yolo model label_file
string \"$(var data_path)/traffic_light_ssd_fine_detector/voc_labels_tl.txt\" The label file with label names for detected objects written on it dnn_header_type
string \"pytorch\" Name of DNN trained toolbox: \"pytorch\" or \"mmdetection\" mode
string \"FP32\" The inference mode: \"FP32\", \"FP16\", \"INT8\" max_batch_size
int 8 The size of the batch processed at one time by inference by TensorRT approximate_sync
bool false Flag for whether to use approximate sync policy build_only
bool false shutdown node after TensorRT engine file is built"},{"location":"perception/traffic_light_ssd_fine_detector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/traffic_light_ssd_fine_detector/#reference-repositories","title":"Reference repositories","text":"pytorch-ssd github repository
MobileNetV2
The traffic_light_visualization
is a package that includes two visualizing nodes:
~/input/tl_state
tier4_perception_msgs::msg::TrafficSignalArray
status of traffic lights ~/input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map"},{"location":"perception/traffic_light_visualization/#output","title":"Output","text":"Name Type Description ~/output/traffic_light
visualization_msgs::msg::MarkerArray
marker array that indicates status of traffic lights"},{"location":"perception/traffic_light_visualization/#traffic_light_roi_visualizer","title":"traffic_light_roi_visualizer","text":""},{"location":"perception/traffic_light_visualization/#input_1","title":"Input","text":"Name Type Description ~/input/tl_state
tier4_perception_msgs::msg::TrafficSignalArray
status of traffic lights ~/input/image
sensor_msgs::msg::Image
the image captured by perception cameras ~/input/rois
tier4_perception_msgs::msg::TrafficLightRoiArray
the ROIs detected by traffic_light_ssd_fine_detector
~/input/rough/rois
(option) tier4_perception_msgs::msg::TrafficLightRoiArray
the ROIs detected by traffic_light_map_based_detector
"},{"location":"perception/traffic_light_visualization/#output_1","title":"Output","text":"Name Type Description ~/output/image
sensor_msgs::msg::Image
output image with ROIs"},{"location":"perception/traffic_light_visualization/#parameters","title":"Parameters","text":""},{"location":"perception/traffic_light_visualization/#traffic_light_map_visualizer_1","title":"traffic_light_map_visualizer","text":"None
"},{"location":"perception/traffic_light_visualization/#traffic_light_roi_visualizer_1","title":"traffic_light_roi_visualizer","text":""},{"location":"perception/traffic_light_visualization/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Descriptionenable_fine_detection
bool false whether to visualize result of the traffic light fine detection"},{"location":"perception/traffic_light_visualization/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"perception/traffic_light_visualization/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"perception/traffic_light_visualization/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"perception/traffic_light_visualization/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"perception/traffic_light_visualization/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/","title":"Planning Components","text":""},{"location":"planning/#planning-components","title":"Planning Components","text":""},{"location":"planning/#getting-started","title":"Getting Started","text":"The Autoware.Universe Planning Modules represent a cutting-edge component within the broader open-source autonomous driving software stack. These modules play a pivotal role in autonomous vehicle navigation, skillfully handling route planning, dynamic obstacle avoidance, and real-time adaptation to varied traffic conditions.
The Module in the Planning Component refers to the various components that collectively form the planning system of the software. These modules cover a range of functionalities necessary for autonomous vehicle planning. Autoware's planning modules are modularized, meaning users can customize which functions are enabled by changing the configuration. This modular design allows for flexibility and adaptability to different scenarios and requirements in autonomous vehicle operations.
"},{"location":"planning/#how-to-enable-or-disable-planning-module","title":"How to Enable or Disable Planning Module","text":"Enabling and disabling modules involves managing settings in key configuration and launch files.
"},{"location":"planning/#key-files-for-configuration","title":"Key Files for Configuration","text":"The default_preset.yaml
file acts as the primary configuration file, where planning modules can be disabled or enabled. Furthermore, users can also select the motion planner type from among the available motion planners. For example:
launch_avoidance_module
: Set to true
to enable the avoidance module, or false
to disable it.motion_stop_planner_type
: Set default
to either obstacle_stop_planner
or obstacle_cruise_planner
.Note
Click here to view the default_preset.yaml
.
The launch files reference the settings defined in default_preset.yaml
to apply the configurations when the behavior path planner's node is running. For instance, the parameter avoidance.enable_module
in
<param name=\"avoidance.enable_module\" value=\"$(var launch_avoidance_module)\"/>\n
corresponds to launch_avoidance_module from default_preset.yaml
.
There are multiple parameters available for configuration, and users have the option to modify them in here. It's important to note that not all parameters are adjustable via rqt_reconfigure
. To ensure the changes are effective, modify the parameters and then restart Autoware. Additionally, detailed information about each parameter is available in the corresponding documents under the planning tab.
This guide outlines the steps for integrating your custom module into Autoware:
default_preset.yaml
file. For example- arg:\nname: launch_intersection_module\ndefault: \"true\"\n
<arg name=\"launch_intersection_module\" default=\"true\"/>\n\n<let\nname=\"behavior_velocity_planner_launch_modules\"\nvalue=\"$(eval "'$(var behavior_velocity_planner_launch_modules)' + 'behavior_velocity_planner::IntersectionModulePlugin, '")\"\nif=\"$(var launch_intersection_module)\"\n/>\n
behavior_velocity_planner_intersection_module_param_path
is used.<arg name=\"behavior_velocity_planner_intersection_module_param_path\" value=\"$(var behavior_velocity_config_path)/intersection.param.yaml\"/>\n
<param from=\"$(var behavior_velocity_planner_intersection_module_param_path)\"/>\n
Note
Depending on the specific module you wish to add, the relevant files and steps may vary. This guide provides a general overview and serves as a starting point. It's important to adapt these instructions to the specifics of your module.
"},{"location":"planning/#join-our-community-driven-effort","title":"Join Our Community-Driven Effort","text":"Autoware thrives on community collaboration. Every contribution, big or small, is invaluable to us. Whether it's reporting bugs, suggesting improvements, offering new ideas, or anything else you can think of \u2013 we welcome it all with open arms.
"},{"location":"planning/#how-to-contribute","title":"How to Contribute?","text":"Ready to contribute? Great! To get started, simply visit our Contributing Guidelines where you'll find all the information you need to jump in. This includes instructions on submitting bug reports, proposing feature enhancements, and even contributing to the codebase.
"},{"location":"planning/#join-our-planning-control-working-group-meetings","title":"Join Our Planning & Control Working Group Meetings","text":"The Planning & Control working group is an integral part of our community. We meet bi-weekly to discuss our current progress, upcoming challenges, and brainstorm new ideas. These meetings are a fantastic opportunity to directly contribute to our discussions and decision-making processes.
Meeting Details:
Interested in joining our meetings? We'd love to have you! For more information on how to participate, visit the following link: How to participate in the working group.
"},{"location":"planning/#citations","title":"Citations","text":"Occasionally, we publish papers specific to the Planning Component in Autoware. We encourage you to explore these publications and find valuable insights for your work. If you find them useful and incorporate any of our methodologies or algorithms in your projects, citing our papers would be immensely helpful. This support allows us to reach a broader audience and continue contributing to the field.
If you use the Jerk Constrained Velocity Planning algorithm in Motion Velocity Smoother module in the Planning Component, we kindly request you to cite the relevant paper.
Y. Shimizu, T. Horibe, F. Watanabe and S. Kato, \"Jerk Constrained Velocity Planning for an Autonomous Vehicle: Linear Programming Approach,\" 2022 International Conference on Robotics and Automation (ICRA)
@inproceedings{shimizu2022,\n author={Shimizu, Yutaka and Horibe, Takamasa and Watanabe, Fumiya and Kato, Shinpei},\n booktitle={2022 International Conference on Robotics and Automation (ICRA)},\n title={Jerk Constrained Velocity Planning for an Autonomous Vehicle: Linear Programming Approach},\n year={2022},\n pages={5814-5820},\n doi={10.1109/ICRA46639.2022.9812155}}\n
"},{"location":"planning/behavior_path_avoidance_by_lane_change_module/","title":"Avoidance by lane change design","text":""},{"location":"planning/behavior_path_avoidance_by_lane_change_module/#avoidance-by-lane-change-design","title":"Avoidance by lane change design","text":"This is a sub-module to avoid obstacles by lane change maneuver.
"},{"location":"planning/behavior_path_avoidance_by_lane_change_module/#purpose-role","title":"Purpose / Role","text":"This module is designed as one of the obstacle avoidance features and generates a lane change path if the following conditions are satisfied.
Basically, this module is implemented by reusing the avoidance target filtering logic of the existing Normal Avoidance Module and the path generation logic of the Normal Lane Change Module. On the other hand, the conditions under which the module is activated differ from those of a normal avoidance module.
Check that the following conditions are satisfied after the filtering process for the avoidance target.
"},{"location":"planning/behavior_path_avoidance_by_lane_change_module/#number-of-the-avoidance-target-objects","title":"Number of the avoidance target objects","text":"This module is launched when the number of avoidance target objects on EGO DRIVING LANE is greater than execute_object_num
. If there are no avoidance targets in the ego driving lane or their number is less than the parameter, the obstacle is avoided by normal avoidance behavior (if the normal avoidance module is registered).
Unlike the normal avoidance module, which specifies the shift line end point, this module does not specify its end point when generating a lane change path. On the other hand, setting execute_only_when_lane_change_finish_before_object
to true
will activate this module only if the lane change can be completed before the avoidance target object.
Although setting the parameter to false
would increase the scene of avoidance by lane change, it is assumed that sufficient lateral margin may not be ensured in some cases because the vehicle passes by the side of obstacles during the lane change.
true
, this module will be launched only when the lane change end point is NOT behind the avoidance target object. true"},{"location":"planning/behavior_path_avoidance_module/","title":"Avoidance design","text":""},{"location":"planning/behavior_path_avoidance_module/#avoidance-design","title":"Avoidance design","text":"This is a rule-based path planning module designed for obstacle avoidance.
"},{"location":"planning/behavior_path_avoidance_module/#purpose-role","title":"Purpose / Role","text":"This module is designed for rule-based avoidance that is easy for developers to design its behavior. It generates avoidance path parameterized by intuitive parameters such as lateral jerk and avoidance distance margin. This makes it possible to pre-define avoidance behavior.
In addition, the approval interface of behavior_path_planner allows external users / modules (e.g. remote operation) to intervene the decision of the vehicle behavior.\u3000 This function is expected to be used, for example, for remote intervention in emergency situations or gathering information on operator decisions during development.
"},{"location":"planning/behavior_path_avoidance_module/#limitations","title":"Limitations","text":"This module allows developers to design vehicle behavior in avoidance planning using specific rules. Due to the property of rule-based planning, the algorithm can not compensate for not colliding with obstacles in complex cases. This is a trade-off between \"be intuitive and easy to design\" and \"be hard to tune but can handle many cases\". This module adopts the former policy and therefore this output should be checked more strictly in the later stage. In the .iv reference implementation, there is another avoidance module in motion planning module that uses optimization to handle the avoidance in complex cases. (Note that, the motion planner needs to be adjusted so that the behavior result will not be changed much in the simple case and this is a typical challenge for the behavior-motion hierarchical architecture.)
"},{"location":"planning/behavior_path_avoidance_module/#why-is-avoidance-in-behavior-module","title":"Why is avoidance in behavior module?","text":"This module executes avoidance over lanes, and the decision requires the lane structure information to take care of traffic rules (e.g. it needs to send an indicator signal when the vehicle crosses a lane). The difference between motion and behavior module in the planning stack is whether the planner takes traffic rules into account, which is why this avoidance module exists in the behavior module.
"},{"location":"planning/behavior_path_avoidance_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The following figure shows a simple explanation of the logic for avoidance path generation. First, target objects are picked up, and shift requests are generated for each object. These shift requests are generated by taking into account the lateral jerk required for avoidance (red lines). Then these requests are merged and the shift points are created on the reference path (blue line). Filtering operations are performed on the shift points such as removing unnecessary shift points (yellow line), and finally a smooth avoidance path is generated by combining Clothoid-like curve primitives (green line).
"},{"location":"planning/behavior_path_avoidance_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_path_avoidance_module/#overview-of-algorithm-for-target-object-filtering","title":"Overview of algorithm for target object filtering","text":""},{"location":"planning/behavior_path_avoidance_module/#how-to-decide-the-target-obstacles","title":"How to decide the target obstacles","text":"The avoidance target should be limited to stationary objects (you should not avoid a vehicle waiting at a traffic light even if it blocks your path). Therefore, target vehicles for avoidance should meet the following specific conditions.
threshold_speed_object_is_stopped
: parameter that be used for judge the object has stopped or not.threshold_time_object_is_moving
: parameter that be used for chattering prevention.2.0 m
) or too far(default: < 150.0 m
) and object is not behind the path goal.Not only the length from the centerline, but also the length from the road shoulder is calculated and used for the filtering process. It calculates the ratio of the actual length between the the object's center and the center line shift_length
and the maximum length the object can shift shiftable_length
.
The closer the object is to the shoulder, the larger the value of \\(ratio\\) (theoretical max value is 1.0), and it compares the value and object_check_shiftable_ratio
to determine whether the object is a parked-car. If the road has no road shoulders, it uses object_check_min_road_shoulder_width
as a road shoulder width virtually.
In order to prevent chattering of recognition results, once an obstacle is targeted, it is hold for a while even if it disappears. This is effective when recognition is unstable. However, since it will result in over-detection (increase a number of false-positive), it is necessary to adjust parameters according to the recognition accuracy (if object_last_seen_threshold = 0.0
, the recognition result is 100% trusted).
Since object recognition results contain noise related to position ,orientation and boundary size, if the raw object recognition results are used in path generation, the avoidance path will be directly affected by the noise.
Therefore, in order to reduce the influence of the noise, avoidance module generate a envelope polygon for the avoidance target that covers it, and the avoidance path should be generated based on that polygon. The envelope polygons are generated so that they are parallel to the reference path and the polygon size is larger than the avoidance target (define by object_envelope_buffer
). The position and size of the polygon is not updated as long as the avoidance target exists within that polygon.
# default value\nobject_envelope_buffer: 0.3 # [m]\n
"},{"location":"planning/behavior_path_avoidance_module/#computing-shift-length-and-shift-points","title":"Computing Shift Length and Shift Points","text":"The lateral shift length is affected by 4 variables, namely lateral_collision_safety_buffer
, lateral_collision_margin
, vehicle_width
and overhang_distance
. The equation is as follows
avoid_margin = lateral_collision_margin + lateral_collision_safety_buffer + 0.5 * vehicle_width\nmax_allowable_lateral_distance = to_road_shoulder_distance - road_shoulder_safety_margin - 0.5 * vehicle_width\nif(isOnRight(o))\n{\nshift_length = avoid_margin + overhang_distance\n}\nelse\n{\nshift_length = avoid_margin - overhang_distance\n}\n
The following figure illustrates these variables(This figure just shows the max value of lateral shift length).
"},{"location":"planning/behavior_path_avoidance_module/#rationale-of-having-safety-buffer-and-safety-margin","title":"Rationale of having safety buffer and safety margin","text":"To compute the shift length, additional parameters that can be tune are lateral_collision_safety_buffer
and road_shoulder_safety_margin
.
lateral_collision_safety_buffer
parameter is used to set a safety gap that will act as the final line of defense when computing avoidance path.lateral_collision_margin
may change according to the situation for various reasons. Therefore, lateral_collision_safety_buffer
will act as the final line of defense in case the usage of lateral_collision_margin
fails.road_shoulder_safety_margin
will prevent the module from generating a path that might cause the vehicle to go too near the road shoulder or adjacent lane dividing line.The shift length is set as a constant value before the feature is implemented. Setting the shift length like this will cause the module to generate an avoidance path regardless of actual environmental properties. For example, the path might exceed the actual road boundary or go towards a wall. Therefore, to address this limitation, in addition to how to decide the target obstacle, the module also takes into account the following additional element
These elements are used to compute the distance from the object to the road's shoulder (to_road_shoulder_distance
). The parameters use_adjacent_lane
and use_opposite_lane
allows further configuration of the to to_road_shoulder_distance
. The following image illustrates the configuration.
If one of the following conditions is false
, then the shift point will not be generated.
avoid_margin = lateral_collision_margin + lateral_collision_safety_buffer + 0.5 * vehicle_width\navoid_margin <= (to_road_shoulder_distance - 0.5 * vehicle_width - road_shoulder_safety_margin)\n
The obstacle intrudes into the current driving path.
when the object is on right of the path
-overhang_dist<(lateral_collision_margin + lateral_collision_safety_buffer + 0.5 * vehicle_width)\n
when the object is on left of the path
overhang_dist<(lateral_collision_margin + lateral_collision_safety_buffer + 0.5 * vehicle_width)\n
Generate shift points for obstacles with given lateral jerk. These points are integrated to generate an avoidance path. The detailed process flow for each case corresponding to the obstacle placement are described below. The actual implementation is not separated for each case, but the function corresponding to multiple obstacle case (both directions)
is always running.
The lateral shift distance to the obstacle is calculated, and then the shift point is generated from the ego vehicle speed and the given lateral jerk as shown in the figure below. A smooth avoidance path is then calculated based on the shift point.
Additionally, the following processes are executed in special cases.
"},{"location":"planning/behavior_path_avoidance_module/#lateral-jerk-relaxation-conditions","title":"Lateral jerk relaxation conditions","text":"There is a problem that we can not know the actual speed during avoidance in advance. This is especially critical when the ego vehicle speed is 0. To solve that, this module provides a parameter for the minimum avoidance speed, which is used for the lateral jerk calculation when the vehicle speed is low.
Generate shift points for multiple obstacles. All of them are merged to generate new shift points along the reference path. The new points are filtered (e.g. remove small-impact shift points), and the avoidance path is computed for the filtered shift points.
Merge process of raw shift points: check the shift length on each path points. If the shift points are overlapped, the maximum shift value is selected for the same direction.
For the details of the shift point filtering, see filtering for shift points.
"},{"location":"planning/behavior_path_avoidance_module/#multiple-obstacle-case-both-direction","title":"Multiple obstacle case (both direction)","text":"Generate shift points for multiple obstacles. All of them are merged to generate new shift points. If there are areas where the desired shifts conflict in different directions, the sum of the maximum shift amounts of these areas is used as the final shift amount. The rest of the process is the same as in the case of one direction.
"},{"location":"planning/behavior_path_avoidance_module/#filtering-for-shift-points","title":"Filtering for shift points","text":"The shift points are modified by a filtering process in order to get the expected shape of the avoidance path. It contains the following filters.
This module has following parameters that sets which areas the path may extend into when generating an avoidance path.
# drivable area setting\nuse_adjacent_lane: true\nuse_opposite_lane: true\nuse_intersection_areas: false\nuse_hatched_road_markings: false\n
"},{"location":"planning/behavior_path_avoidance_module/#adjacent-lane","title":"adjacent lane","text":""},{"location":"planning/behavior_path_avoidance_module/#opposite-lane","title":"opposite lane","text":""},{"location":"planning/behavior_path_avoidance_module/#intersection-areas","title":"intersection areas","text":"The intersection area is defined on Lanelet map. See here
"},{"location":"planning/behavior_path_avoidance_module/#hatched-road-markings","title":"hatched road markings","text":"The hatched road marking is defined on Lanelet map. See here
"},{"location":"planning/behavior_path_avoidance_module/#safety-check","title":"Safety check","text":"The avoidance module has a safety check logic. The result of safe check is used for yield maneuver. It is enable by setting enable
as true
.
# safety check configuration\nenable: true # [-]\ncheck_current_lane: false # [-]\ncheck_shift_side_lane: true # [-]\ncheck_other_side_lane: false # [-]\ncheck_unavoidable_object: false # [-]\ncheck_other_object: true # [-]\n\n# collision check parameters\ncheck_all_predicted_path: false # [-]\ntime_horizon: 10.0 # [s]\nidling_time: 1.5 # [s]\nsafety_check_backward_distance: 50.0 # [m]\nsafety_check_accel_for_rss: 2.5 # [m/ss]\n
safety_check_backward_distance
is the parameter related to the safety check area. The module checks a collision risk for all vehicle that is within shift side lane and between object object_check_forward_distance
ahead and safety_check_backward_distance
behind.
NOTE: Even if a part of an object polygon overlaps the detection area, if the center of gravity of the object does not exist on the lane, the vehicle is excluded from the safety check target.
Judge the risk of collision based on ego future position and object prediction path. The module calculates Ego's future position in the time horizon (safety_check_time_horizon
), and use object's prediction path as object future position.
After calculating the future position of Ego and object, the module calculates the lateral/longitudinal deviation of Ego and the object. The module also calculates the lateral/longitudinal margin necessary to determine that it is safe to execute avoidance maneuver, and if both the lateral and longitudinal distances are less than the margins, it determines that there is a risk of a collision at that time.
The value of the longitudinal margin is calculated based on Responsibility-Sensitive Safety theory (RSS). The safety_check_idling_time
represents \\(T_{idle}\\), and safety_check_accel_for_rss
represents \\(a_{max}\\).
The lateral margin is changeable based on ego longitudinal velocity. If the vehicle is driving at a high speed, the lateral margin should be larger, and if the vehicle is driving at a low speed, the value of the lateral margin should be set to a smaller value. Thus, the lateral margin for each vehicle speed is set as a parameter, and the module determines the lateral margin from the current vehicle speed as shown in the following figure.
target_velocity_matrix:\ncol_size: 5\nmatrix: [2.78 5.56 ... 16.7 # target velocity [m/s]\n0.50 0.75 ... 1.50] # margin [m]\n
"},{"location":"planning/behavior_path_avoidance_module/#yield-maneuver","title":"Yield maneuver","text":""},{"location":"planning/behavior_path_avoidance_module/#overview","title":"Overview","text":"If an avoidance path can be generated and it is determined that avoidance maneuver should not be executed due to surrounding traffic conditions, the module executes YIELD maneuver. In yield maneuver, the vehicle slows down to the target vehicle velocity (yield_velocity
) and keep that speed until the module judge that avoidance path is safe. If the YIELD condition goes on and the vehicle approaches the avoidance target, it stops at the avoidable position and waits until the safety is confirmed.
# For yield maneuver\nyield_velocity: 2.78 # [m/s]\n
NOTE: In yield maneuver, the vehicle decelerates target velocity under constraints.
nominal_deceleration: -1.0 # [m/ss]\nnominal_jerk: 0.5 # [m/sss]\n
If it satisfies following all of three conditions, the module inserts stop point in front of the avoidance target with an avoidable interval.
The module determines that it is NOT passable without avoidance if the object overhang is less than the threshold.
lateral_passable_collision_margin: 0.5 # [-]\n
\\[ L_{overhang} < \\frac{W}{2} + L_{margin} (not passable) \\] The \\(W\\) represents vehicle width, and \\(L_{margin}\\) represents lateral_passable_collision_margin
.
The current behavior in unsafe condition is just slow down and it is so conservative. It is difficult to achieve aggressive behavior in the current architecture because of modularity. There are many modules in autoware that change the vehicle speed, and the avoidance module cannot know what speed planning they will output, so it is forced to choose a behavior that is as independent of other modules' processing as possible.
"},{"location":"planning/behavior_path_avoidance_module/#limitation2","title":"Limitation2","text":"The YIELD maneuver is executed ONLY when the vehicle has NOT initiated avoidance maneuver. The module has a threshold parameter (avoidance_initiate_threshold
) for the amount of shifting and determines that the vehicle is initiating avoidance if the vehicle current shift exceeds the threshold.
If enable_cancel_maneuver
parameter is true, Avoidance Module takes different actions according to the situations as follows:
If enable_cancel_maneuver
parameter is false, Avoidance Module doesn't revert generated avoidance path even if path objects are gone.
WIP
"},{"location":"planning/behavior_path_avoidance_module/#parameters","title":"Parameters","text":"The avoidance specific parameter configuration file can be located at src/autoware/launcher/planning_launch/config/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/avoidance/avoidance.param.yaml
.
namespace: avoidance.
use_adjacent_lane
must be true
to take effects true use_intersection_areas [-] bool Extend drivable to intersection area. false use_hatched_road_markings [-] bool Extend drivable to hatched road marking area. false Name Unit Type Description Default value output_debug_marker [-] bool Flag to publish debug marker (set false
as default since it takes considerable cost). false output_debug_info [-] bool Flag to print debug info (set false
as default since it takes considerable cost). false"},{"location":"planning/behavior_path_avoidance_module/#avoidance-target-filtering-parameters","title":"Avoidance target filtering parameters","text":"namespace: avoidance.target_object.
This module supports all object classes, and it can set following parameters independently.
car:\nis_target: true # [-]\nmoving_speed_threshold: 1.0 # [m/s]\nmoving_time_threshold: 1.0 # [s]\nmax_expand_ratio: 0.0 # [-]\nenvelope_buffer_margin: 0.3 # [m]\navoid_margin_lateral: 1.0 # [m]\nsafety_buffer_lateral: 0.7 # [m]\nsafety_buffer_longitudinal: 0.0 # [m]\n
Name Unit Type Description Default value is_target [-] bool By setting this flag true
, this module avoid those class objects. false moving_speed_threshold [m/s] double Objects with speed greater than this will be judged as moving ones. 1.0 moving_time_threshold [s] double Objects keep moving longer duration than this will be excluded from avoidance target. 1.0 envelope_buffer_margin [m] double The buffer between raw boundary box of detected objects and enveloped polygon that is used for avoidance path generation. 0.3 avoid_margin_lateral [m] double The lateral distance between ego and avoidance targets. 1.0 safety_buffer_lateral [m] double Creates an additional lateral gap that will prevent the vehicle from getting to near to the obstacle. 0.5 safety_buffer_longitudinal [m] double Creates an additional longitudinal gap that will prevent the vehicle from getting to near to the obstacle. 0.0 Parameters for the logic to compensate perception noise of the far objects.
Name Unit Type Description Default value max_expand_ratio [-] double This value will be appliedenvelope_buffer_margin
according to the distance between the ego and object. 0.0 lower_distance_for_polygon_expansion [-] double If the distance between the ego and object is less than this, the expand ratio will be zero. 30.0 upper_distance_for_polygon_expansion [-] double If the distance between the ego and object is larger than this, the expand ratio will be max_expand_ratio
. 100.0 namespace: avoidance.target_filtering.
namespace: avoidance.safety_check.
namespace: avoidance.avoidance.lateral.
namespace: avoidance.avoidance.longitudinal.
namespace: avoidance.yield.
namespace: avoidance.stop.
namespace: avoidance.constraints.
TRUE: allow to control breaking mildness
false namespace: avoidance.constraints.lateral.
namespace: avoidance.constraints.longitudinal.
(*2) If there are multiple vehicles in a row to be avoided, no new avoidance path will be generated unless their lateral margin difference exceeds this value.
"},{"location":"planning/behavior_path_avoidance_module/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"Safety Check
Consideration of the speed of the avoidance target
Cancel avoidance when target disappears
Improved performance of avoidance target selection
5m
), but small resolution should be applied for complex paths.Developers can see what is going on in each process by visualizing all the avoidance planning process outputs. The example includes target vehicles, shift points for each object, shift points after each filtering process, etc.
To enable the debug marker, execute ros2 param set /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner avoidance.publish_debug_marker true
(no restart is needed) or simply set the publish_debug_marker
to true
in the avoidance.param.yaml
for permanent effect (restart is needed). Then add the marker /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/avoidance
in rviz2
.
If for some reason, no shift point is generated for your object, you can check for the failure reason via ros2 topic echo
.
To print the debug message, just run the following
ros2 topic echo /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/avoidance_debug_message_array\n
"},{"location":"planning/behavior_path_dynamic_avoidance_module/","title":"Dynamic avoidance design","text":""},{"location":"planning/behavior_path_dynamic_avoidance_module/#dynamic-avoidance-design","title":"Dynamic avoidance design","text":""},{"location":"planning/behavior_path_dynamic_avoidance_module/#purpose-role","title":"Purpose / Role","text":"This is a module designed for avoiding obstacles which are running. Static obstacles such as parked vehicles are dealt with by the avoidance module.
This module is under development. In the current implementation, the dynamic obstacles to avoid is extracted from the drivable area. Then the motion planner, in detail obstacle_avoidance_planner, will generate an avoiding trajectory.
"},{"location":"planning/behavior_path_dynamic_avoidance_module/#overview-of-drivable-area-modification","title":"Overview of drivable area modification","text":""},{"location":"planning/behavior_path_dynamic_avoidance_module/#filtering-obstacles-to-avoid","title":"Filtering obstacles to avoid","text":"The dynamics obstacles meeting the following condition will be avoided.
target_object.*
.target_object.min_obstacle_vel
.To realize dynamic obstacles for avoidance, the time dimension should be take into an account considering the dynamics. However, it will make the planning problem much harder to solve. Therefore, we project the time dimension to the 2D pose dimension.
Currently, the predicted paths of predicted objects are not so stable. Therefore, instead of using the predicted paths, we assume that the obstacle will run parallel to the ego's path.
First, a maximum lateral offset to avoid is calculated as follows. The polygon's width to extract from the drivable area is the obstacle width and double drivable_area_generation.lat_offset_from_obstacle
. We can limit the lateral shift offset by drivable_area_generation.max_lat_offset_to_avoid
.
Then, extracting the same directional and opposite directional obstacles from the drivable area will work as follows considering TTC (time to collision). Regarding the same directional obstacles, obstacles whose TTC is negative will be ignored (e.g. The obstacle is in front of the ego, and the obstacle's velocity is larger than the ego's velocity.).
Same directional obstacles
Opposite directional obstacles
"},{"location":"planning/behavior_path_dynamic_avoidance_module/#parameters","title":"Parameters","text":"Name Unit Type Description Default value target_object.car [-] bool The flag whether to avoid cars or not true target_object.truck [-] bool The flag whether to avoid trucks or not true ... [-] bool ... ... target_object.min_obstacle_vel [m/s] double Minimum obstacle velocity to avoid 1.0 drivable_area_generation.lat_offset_from_obstacle [m] double Lateral offset to avoid from obstacles 0.8 drivable_area_generation.max_lat_offset_to_avoid [m] double Maximum lateral offset to avoid 0.5 drivable_area_generation.overtaking_object.max_time_to_collision [s] double Maximum value when calculating time to collision 3.0 drivable_area_generation.overtaking_object.start_duration_to_avoid [s] double Duration to consider avoidance before passing by obstacles 4.0 drivable_area_generation.overtaking_object.end_duration_to_avoid [s] double Duration to consider avoidance after passing by obstacles 5.0 drivable_area_generation.overtaking_object.duration_to_hold_avoidance [s] double Duration to hold avoidance after passing by obstacles 3.0 drivable_area_generation.oncoming_object.max_time_to_collision [s] double Maximum value when calculating time to collision 3.0 drivable_area_generation.oncoming_object.start_duration_to_avoid [s] double Duration to consider avoidance before passing by obstacles 9.0 drivable_area_generation.oncoming_object.end_duration_to_avoid [s] double Duration to consider avoidance after passing by obstacles 0.0"},{"location":"planning/behavior_path_goal_planner_module/","title":"Goal Planner design","text":""},{"location":"planning/behavior_path_goal_planner_module/#goal-planner-design","title":"Goal Planner design","text":""},{"location":"planning/behavior_path_goal_planner_module/#purpose-role","title":"Purpose / Role","text":"Plan path around the goal.
If goal modification is not allowed, park at the designated fixed goal. (fixed_goal_planner
in the figure below) When allowed, park in accordance with the specified policy(e.g pull over on left/right side of the lane). (rough_goal_planner
in the figure below). Currently rough goal planner only support pull_over feature, but it would be desirable to be able to accommodate various parking policies in the future.
Either one is activated when all conditions are met.
"},{"location":"planning/behavior_path_goal_planner_module/#fixed_goal_planner","title":"fixed_goal_planner","text":"allow_goal_modification=false
by default.If the target path contains a goal, modify the points of the path so that the path and the goal are connected smoothly. This process will change the shape of the path by the distance of refine_goal_search_radius_range
from the goal. Note that this logic depends on the interpolation algorithm that will be executed in a later module (at the moment it uses spline interpolation), so it needs to be updated in the future.
pull_over_minimum_request_length
.allow_goal_modification=true
.2D Rough Goal Pose
with the key bind r
in RViz, but in the future there will be a panel of tools to manipulate various Route API from RViz.pull_over_minimum_request_length
.road_shoulder
.1m
).0.01m/s
).Generate footprints from ego-vehicle path points and determine obstacle collision from the value of occupancy_grid of the corresponding cell.
"},{"location":"planning/behavior_path_goal_planner_module/#parameters-for-occupancy-grid-based-collision-check","title":"Parameters for occupancy grid based collision check","text":"Name Unit Type Description Default value use_occupancy_grid_for_goal_search [-] bool flag whether to use occupancy grid for goal search collision check true use_occupancy_grid_for_goal_longitudinal_margin [-] bool flag whether to use occupancy grid for keeping longitudinal margin false use_occupancy_grid_for_path_collision_check [-] bool flag whether to use occupancy grid for collision check false occupancy_grid_collision_check_margin [m] double margin to calculate ego-vehicle cells from footprint. 0.0 theta_size [-] int size of theta angle to be considered. angular resolution for collision check will be 2\\(\\pi\\) / theta_size [rad]. 360 obstacle_threshold [-] int threshold of cell values to be considered as obstacles 60"},{"location":"planning/behavior_path_goal_planner_module/#object-recognition-based-collision-check","title":"object recognition based collision check","text":""},{"location":"planning/behavior_path_goal_planner_module/#parameters-for-object-recognition-based-collision-check","title":"Parameters for object recognition based collision check","text":"Name Unit Type Description Default value use_object_recognition [-] bool flag whether to use object recognition for collision check true object_recognition_collision_check_margin [m] double margin to calculate ego-vehicle cells from footprint. 0.6 object_recognition_collision_check_max_extra_stopping_margin [m] double maximum value when adding longitudinal distance margin for collision check considering stopping distance 1.0 detection_bound_offset [m] double expand pull over lane with this offset to make detection area for collision check of path generation 15.0"},{"location":"planning/behavior_path_goal_planner_module/#goal-search","title":"Goal Search","text":"If it is not possible to park safely at a given goal, /planning/scenario_planning/modified_goal
is searched for in certain range of the shoulder lane.
goal search video
"},{"location":"planning/behavior_path_goal_planner_module/#parameters-for-goal-search","title":"Parameters for goal search","text":"Name Unit Type Description Default value goal_priority [-] string In caseminimum_weighted_distance
, sort with smaller longitudinal distances taking precedence over smaller lateral distances. In case minimum_longitudinal_distance
, sort with weighted lateral distance against longitudinal distance. minimum_weighted_distance
prioritize_goals_before_objects [-] bool If there are objects that may need to be avoided, prioritize the goal in front of them true forward_goal_search_length [m] double length of forward range to be explored from the original goal 20.0 backward_goal_search_length [m] double length of backward range to be explored from the original goal 20.0 goal_search_interval [m] double distance interval for goal search 2.0 longitudinal_margin [m] double margin between ego-vehicle at the goal position and obstacles 3.0 max_lateral_offset [m] double maximum offset of goal search in the lateral direction 0.5 lateral_offset_interval [m] double distance interval of goal search in the lateral direction 0.25 ignore_distance_from_lane_start [m] double distance from start of pull over lanes for ignoring goal candidates 0.0 ignore_distance_from_lane_start [m] double distance from start of pull over lanes for ignoring goal candidates 0.0 margin_from_boundary [m] double distance margin from edge of the shoulder lane 0.5"},{"location":"planning/behavior_path_goal_planner_module/#pull-over","title":"Pull Over","text":"There are three path generation methods. The path is generated with a certain margin (default: 0.5 m
) from the boundary of shoulder lane.
efficient_path
use a goal that can generate an efficient path which is set in efficient_path_order
. In case close_goal
use the closest goal to the original one. efficient_path efficient_path_order [-] string efficient order of pull over planner along lanes\u3000excluding freespace pull over [\"SHIFT\", \"ARC_FORWARD\", \"ARC_BACKWARD\"]"},{"location":"planning/behavior_path_goal_planner_module/#shift-parking","title":"shift parking","text":"Pull over distance is calculated by the speed, lateral deviation, and the lateral jerk. The lateral jerk is searched for among the predetermined minimum and maximum values, and the one satisfies ready conditions described above is output.
shift_parking video
"},{"location":"planning/behavior_path_goal_planner_module/#parameters-for-shift-parking","title":"Parameters for shift parking","text":"Name Unit Type Description Default value enable_shift_parking [-] bool flag whether to enable shift parking true shift_sampling_num [-] int Number of samplings in the minimum to maximum range of lateral_jerk 4 maximum_lateral_jerk [m/s3] double maximum lateral jerk 2.0 minimum_lateral_jerk [m/s3] double minimum lateral jerk 0.5 deceleration_interval [m] double distance of deceleration section 15.0 after_shift_straight_distance [m] double straight line distance after pull over end point 1.0"},{"location":"planning/behavior_path_goal_planner_module/#geometric-parallel-parking","title":"geometric parallel parking","text":"Generate two arc paths with discontinuous curvature. It stops twice in the middle of the path to control the steer on the spot. There are two path generation methods: forward and backward. See also [1] for details of the algorithm. There is also a simple python implementation.
"},{"location":"planning/behavior_path_goal_planner_module/#parameters-geometric-parallel-parking","title":"Parameters geometric parallel parking","text":"Name Unit Type Description Default value arc_path_interval [m] double interval between arc path points 1.0 pull_over_max_steer_rad [rad] double maximum steer angle for path generation. it may not be possible to control steer up to max_steer_angle in vehicle_info when stopped 0.35"},{"location":"planning/behavior_path_goal_planner_module/#arc-forward-parking","title":"arc forward parking","text":"Generate two forward arc paths.
arc_forward_parking video
"},{"location":"planning/behavior_path_goal_planner_module/#parameters-arc-forward-parking","title":"Parameters arc forward parking","text":"Name Unit Type Description Default value enable_arc_forward_parking [-] bool flag whether to enable arc forward parking true after_forward_parking_straight_distance [m] double straight line distance after pull over end point 2.0 forward_parking_velocity [m/s] double velocity when forward parking 1.38 forward_parking_lane_departure_margin [m/s] double lane departure margin for front left corner of ego-vehicle when forward parking 0.0"},{"location":"planning/behavior_path_goal_planner_module/#arc-backward-parking","title":"arc backward parking","text":"Generate two backward arc paths.
.
arc_backward_parking video
"},{"location":"planning/behavior_path_goal_planner_module/#parameters-arc-backward-parking","title":"Parameters arc backward parking","text":"Name Unit Type Description Default value enable_arc_backward_parking [-] bool flag whether to enable arc backward parking true after_backward_parking_straight_distance [m] double straight line distance after pull over end point 2.0 backward_parking_velocity [m/s] double velocity when backward parking -1.38 backward_parking_lane_departure_margin [m/s] double lane departure margin for front right corner of ego-vehicle when backward 0.0"},{"location":"planning/behavior_path_goal_planner_module/#freespace-parking","title":"freespace parking","text":"If the vehicle gets stuck with lane_parking
, run freespace_parking
. To run this feature, you need to set parking_lot
to the map, activate_by_scenario
of costmap_generator to false
and enable_freespace_parking
to true
Simultaneous execution with avoidance_module
in the flowchart is under development.
See freespace_planner for other parameters.
"},{"location":"planning/behavior_path_lane_change_module/","title":"Lane Change design","text":""},{"location":"planning/behavior_path_lane_change_module/#lane-change-design","title":"Lane Change design","text":"The Lane Change module is activated when lane change is needed and can be safely executed.
"},{"location":"planning/behavior_path_lane_change_module/#lane-change-requirement","title":"Lane Change Requirement","text":"preferred_lane
.The lane change candidate path is divided into two phases: preparation and lane-changing. The following figure illustrates each phase of the lane change candidate path.
"},{"location":"planning/behavior_path_lane_change_module/#preparation-phase","title":"Preparation phase","text":"The preparation trajectory is the candidate path's first and the straight portion generated along the ego vehicle's current lane. The length of the preparation trajectory is computed as follows.
lane_change_prepare_distance = current_speed * lane_change_prepare_duration + 0.5 * deceleration * lane_change_prepare_duration^2\n
During the preparation phase, the turn signal will be activated when the remaining distance is equal to or less than lane_change_search_distance
.
The lane-changing phase consist of the shifted path that moves ego from current lane to the target lane. Total distance of lane-changing phase is as follows. Note that during the lane changing phase, the ego vehicle travels at a constant speed.
lane_change_prepare_velocity = std::max(current_speed + deceleration * lane_change_prepare_duration, minimum_lane_changing_velocity)\nlane_changing_distance = lane_change_prepare_velocity * lane_changing_duration\n
The backward_length_buffer_for_end_of_lane
is added to allow some window for any possible delay, such as control or mechanical delay during brake lag.
Lane change velocity is affected by the ego vehicle's current velocity. High velocity requires longer preparation and lane changing distance. However we also need to plan lane changing trajectories in case ego vehicle slows down. Computing candidate paths that assumes ego vehicle's slows down is performed by substituting predetermined deceleration value into prepare_length
, prepare_velocity
and lane_changing_length
equation.
The predetermined longitudinal acceleration values are a set of value that starts from longitudinal_acceleration = maximum_longitudinal_acceleration
, and decrease by longitudinal_acceleration_resolution
until it reaches longitudinal_acceleration = -maximum_longitudinal_deceleration
. Both maximum_longitudinal_acceleration
and maximum_longitudinal_deceleration
are calculated as: defined in the common.param
file as normal.min_acc
.
maximum_longitudinal_acceleration = min(common_param.max_acc, lane_change_param.max_acc)\nmaximum_longitudinal_deceleration = max(common_param.min_acc, lane_change_param.min_acc)\n
where common_param
is vehicle common parameter, which defines vehicle common maximum longitudinal acceleration and deceleration. Whereas, lane_change_param
has maximum longitudinal acceleration and deceleration for the lane change module. For example, if a user set and common_param.max_acc=1.0
and lane_change_param.max_acc=0.0
, maximum_longitudinal_acceleration
becomes 0.0
, and the lane change does not accelerate in the lane change phase.
The longitudinal_acceleration_resolution
is determine by the following
longitudinal_acceleration_resolution = (maximum_longitudinal_acceleration - minimum_longitudinal_acceleration) / longitudinal_acceleration_sampling_num\n
Note that when the current_velocity
is lower than minimum_lane_changing_velocity
, the vehicle needs to accelerate its velocity to minimum_lane_changing_velocity
. Therefore, longitudinal acceleration becomes positive value (not decelerate).
The following figure illustrates when longitudinal_acceleration_sampling_num = 4
. Assuming that maximum_deceleration = 1.0
then a0 == 0.0 == no deceleration
, a1 == 0.25
, a2 == 0.5
, a3 == 0.75
and a4 == 1.0 == maximum_deceleration
. a0
is the expected lane change trajectories should ego vehicle do not decelerate, and a1
's path is the expected lane change trajectories should ego vehicle decelerate at 0.25 m/s^2
.
Which path will be chosen will depend on validity and collision check.
"},{"location":"planning/behavior_path_lane_change_module/#multiple-candidate-path-samples-lateral-acceleration","title":"Multiple candidate path samples (lateral acceleration)","text":"In addition to sampling longitudinal acceleration, we also sample lane change paths by adjusting the value of lateral acceleration. Since lateral acceleration influences the duration of a lane change, a lower lateral acceleration value results in a longer lane change path, while a higher lateral acceleration value leads to a shorter lane change path. This allows the lane change module to generate a shorter lane change path by increasing the lateral acceleration when there is limited space for the lane change.
The maximum and minimum lateral accelerations are defined in the lane change parameter file as a map. The range of lateral acceleration is determined for each velocity by linearly interpolating the values in the map. Let's assume we have the following map
Ego Velocity Minimum lateral acceleration Maximum lateral acceleration 0.0 0.2 0.3 2.0 0.2 0.4 4.0 0.3 0.4 6.0 0.3 0.5In this case, when the current velocity of the ego vehicle is 3.0, the minimum and maximum lateral accelerations are 0.25 and 0.4 respectively. These values are obtained by linearly interpolating the second and third rows of the map, which provide the minimum and maximum lateral acceleration values.
Within this range, we sample the lateral acceleration for the ego vehicle. Similar to the method used for sampling longitudinal acceleration, the resolution of lateral acceleration (lateral_acceleration_resolution) is determined by the following:
lateral_acceleration_resolution = (maximum_lateral_acceleration - minimum_lateral_acceleration) / lateral_acceleration_sampling_num\n
"},{"location":"planning/behavior_path_lane_change_module/#candidate-paths-validity-check","title":"Candidate Path's validity check","text":"A candidate path is valid if the total lane change distance is less than
The goal must also be in the list of the preferred lane.
The following flow chart illustrates the validity check.
"},{"location":"planning/behavior_path_lane_change_module/#candidate-paths-safety-check","title":"Candidate Path's Safety check","text":"See safety check utils explanation
"},{"location":"planning/behavior_path_lane_change_module/#objects-selection-and-classification","title":"Objects selection and classification","text":"First, we divide the target objects into obstacles in the target lane, obstacles in the current lane, and obstacles in other lanes. Target lane indicates the lane that the ego vehicle is going to reach after the lane change and current lane mean the current lane where the ego vehicle is following before the lane change. Other lanes are lanes that do not belong to the target and current lanes. The following picture describes objects on each lane. Note that users can remove objects either on current and other lanes from safety check by changing the flag, which are check_objects_on_current_lanes
and check_objects_on_other_lanes
.
Furthermore, to change lanes behind a vehicle waiting at a traffic light, we skip the safety check for the stopping vehicles near the traffic light.\u3000The explanation for parked car detection is written in documentation for avoidance module.
"},{"location":"planning/behavior_path_lane_change_module/#collision-check-in-prepare-phase","title":"Collision check in prepare phase","text":"The ego vehicle may need to secure ample inter-vehicle distance ahead of the target vehicle before attempting a lane change. The flag enable_collision_check_at_prepare_phase
can be enabled to gain this behavior. The following image illustrates the differences between the false
and true
cases.
The parameter prepare_phase_ignore_target_speed_thresh
can be configured to ignore the prepare phase collision check for targets whose speeds are less than a specific threshold, such as stationary or very slow-moving objects.
When driving on the public road with other vehicles, there exist scenarios where lane changes cannot be executed. Suppose the candidate path is evaluated as unsafe, for example, due to incoming vehicles in the adjacent lane. In that case, the ego vehicle can't change lanes, and it is impossible to reach the goal. Therefore, the ego vehicle must stop earlier at a certain distance and wait for the adjacent lane to be evaluated as safe. The minimum stopping distance can be computed from shift length and minimum lane changing velocity.
lane_changing_time = f(shift_length, lat_acceleration, lat_jerk)\nminimum_lane_change_distance = minimum_prepare_length + minimum_lane_changing_velocity * lane_changing_time + lane_change_finish_judge_buffer\n
The following figure illustrates when the lane is blocked in multiple lane changes cases.
"},{"location":"planning/behavior_path_lane_change_module/#stopping-position-when-an-object-exists-ahead","title":"Stopping position when an object exists ahead","text":"When an obstacle is in front of the ego vehicle, stop with keeping a distance for lane change. The position to be stopped depends on the situation, such as when the lane change is blocked by the target lane obstacle, or when the lane change is not needed immediately.The following shows the division in that case.
"},{"location":"planning/behavior_path_lane_change_module/#when-the-ego-vehicle-is-near-the-end-of-the-lane-change","title":"When the ego vehicle is near the end of the lane change","text":"Regardless of the presence or absence of objects in the lane change target lane, stop by keeping the distance necessary for lane change to the object ahead.
"},{"location":"planning/behavior_path_lane_change_module/#when-the-ego-vehicle-is-not-near-the-end-of-the-lane-change","title":"When the ego vehicle is not near the end of the lane change","text":"If there are NO objects in the lane change section of the target lane, stop by keeping the distance necessary for lane change to the object ahead.
If there are objects in the lane change section of the target lane, stop WITHOUT keeping the distance necessary for lane change to the object ahead.
"},{"location":"planning/behavior_path_lane_change_module/#when-the-target-lane-is-far-away","title":"When the target lane is far away","text":"When the target lane for lane change is far away and not next to the current lane, do not keep the distance necessary for lane change to the object ahead.
"},{"location":"planning/behavior_path_lane_change_module/#lane-change-when-stuck","title":"Lane Change When Stuck","text":"The ego vehicle is considered stuck if it is stopped and meets any of the following conditions:
In this case, the safety check for the lane change is relaxed compared to normal times. Please refer to the 'stuck' section under 'Collision checks during lane change' for more details. The function described in the previous section, which stops while keeping a margin from the forward obstacle, is used to achieve this feature.
"},{"location":"planning/behavior_path_lane_change_module/#lane-change-regulations","title":"Lane change regulations","text":"If you want to regulate lane change on crosswalks or intersections, the lane change module finds a lane change path excluding it includes crosswalks or intersections. To regulate lane change on crosswalks or intersections, change regulation.crosswalk
or regulation.intersection
to true
. However, if the ego vehicle gets stuck, lane changes on crosswalks/intersections are enabled in order to resolve the situation. If the ego vehicle stops for more than stuck_detection.stop_time
seconds, it is regarded as stuck. If the ego vehicle velocity is smaller than stuck_detection.velocity
, it is regarded as stopping.
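A minimal sketch of the stuck decision described above, assuming the stop duration is tracked elsewhere (names are illustrative):
// Sketch: the ego vehicle is regarded as stopping when its velocity is below
// stuck_detection.velocity, and as stuck when it has been stopping for longer
// than stuck_detection.stop_time.
bool is_ego_stuck(
  const double ego_velocity, const double stop_duration_s,
  const double velocity_threshold, const double stop_time_threshold)
{
  const bool is_stopping = ego_velocity < velocity_threshold;
  return is_stopping && stop_duration_s > stop_time_threshold;
}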
The abort process may result in three different outcomes: Cancel, Abort, and Stop/Cruise.
The following depicts the flow of the abort lane change check.
"},{"location":"planning/behavior_path_lane_change_module/#cancel","title":"Cancel","text":"Suppose the lane change trajectory is evaluated as unsafe. In that case, if the ego vehicle has not departed from the current lane yet, the trajectory will be reset, and the ego vehicle will resume the lane following the maneuver.
The function can be enabled by setting enable_on_prepare_phase
to true
.
The following image illustrates the cancel process.
"},{"location":"planning/behavior_path_lane_change_module/#abort","title":"Abort","text":"Assume the ego vehicle has already departed from the current lane. In that case, it is dangerous to cancel the path, and it will cause the ego vehicle to change the heading direction abruptly. In this case, planning a trajectory that allows the ego vehicle to return to the current path while minimizing the heading changes is necessary. In this case, the lane change module will generate an abort path. The following images show an example of the abort path. Do note that the function DOESN'T GUARANTEE a safe abort process, as it didn't check the presence of the surrounding objects and/or their reactions. The function can be enable manually by setting both enable_on_prepare_phase
and enable_on_lane_changing_phase
to true
. The parameter max_lateral_jerk
needs to be set to a high value in order for it to work.
The last behavior (Stop/Cruise) will also occur if the ego vehicle has departed from the current lane. If the abort function is disabled or an abort is no longer possible, the ego vehicle will attempt to stop or transition to the obstacle cruise mode. Do note that the module DOESN'T GUARANTEE a safe maneuver, due to unexpected behavior that might occur during these critical scenarios. The following images illustrate the situation.
"},{"location":"planning/behavior_path_lane_change_module/#parameters","title":"Parameters","text":""},{"location":"planning/behavior_path_lane_change_module/#essential-lane-change-parameters","title":"Essential lane change parameters","text":"The following parameters are configurable in lane_change.param.yaml
.
backward_lane_length
[m] double The backward length to check incoming objects in lane change target lane. 200.0 prepare_duration
[s] double The preparation time for the ego vehicle to be ready to perform the lane change. 4.0 backward_length_buffer_for_end_of_lane
[m] double The end of lane buffer to ensure ego vehicle has enough distance to start lane change 3.0 backward_length_buffer_for_blocking_object
[m] double The end of lane buffer to ensure ego vehicle has enough distance to start lane change when there is an object in front 3.0 lane_change_finish_judge_buffer
[m] double The additional buffer used to confirm lane change process completion 3.0 finish_judge_lateral_threshold
[m] double Lateral distance threshold to confirm lane change process completion 0.2 lane_changing_lateral_jerk
[m/s3] double Lateral jerk value for lane change path generation 0.5 minimum_lane_changing_velocity
[m/s] double Minimum speed during lane changing process. 2.78 prediction_time_resolution
[s] double Time resolution for object's path interpolation and collision check. 0.5 longitudinal_acceleration_sampling_num
[-] int Number of possible lane-changing trajectories that are being influenced by longitudinal acceleration 5 lateral_acceleration_sampling_num
[-] int Number of possible lane-changing trajectories that are being influenced by lateral acceleration 3 object_check_min_road_shoulder_width
[m] double Width considered as a road shoulder if the lane does not have a road shoulder 0.5 object_shiftable_ratio_threshold
[-] double Vehicles around the center line within this distance ratio will be excluded from parking objects 0.6 min_length_for_turn_signal_activation
[m] double The turn signal will be activated when the ego vehicle gets within this distance of the minimum lane change length 10.0 length_ratio_for_turn_signal_deactivation
[-] double The turn signal will be deactivated when the ego vehicle passes this length ratio toward the lane change finish point 0.8 max_longitudinal_acc
[m/s^2] double Maximum longitudinal acceleration for lane change 1.0 min_longitudinal_acc
[m/s^2] double Maximum longitudinal deceleration for lane change -1.0 lateral_acceleration.velocity
[m/s] double Reference velocity for lateral acceleration calculation (look up table) [0.0, 4.0, 10.0] lateral_acceleration.min_values
[m/ss] double Min lateral acceleration values corresponding to velocity (look up table) [0.15, 0.15, 0.15] lateral_acceleration.max_values
[m/ss] double Max lateral acceleration values corresponding to velocity (look up table) [0.5, 0.5, 0.5] target_object.car
[-] boolean Include car objects for safety check true target_object.truck
[-] boolean Include truck objects for safety check true target_object.bus
[-] boolean Include bus objects for safety check true target_object.trailer
[-] boolean Include trailer objects for safety check true target_object.unknown
[-] boolean Include unknown objects for safety check true target_object.bicycle
[-] boolean Include bicycle objects for safety check true target_object.motorcycle
[-] boolean Include motorcycle objects for safety check true target_object.pedestrian
[-] boolean Include pedestrian objects for safety check true"},{"location":"planning/behavior_path_lane_change_module/#lane-change-regulations_1","title":"Lane change regulations","text":"Name Unit Type Description Default value regulation.crosswalk
[-] boolean Regulate lane change on crosswalks false regulation.intersection
[-] boolean Regulate lane change on intersections false"},{"location":"planning/behavior_path_lane_change_module/#ego-vehicle-stuck-detection","title":"Ego vehicle stuck detection","text":"Name Unit Type Description Default value stuck_detection.velocity
[m/s] double Velocity threshold for ego vehicle stuck detection 0.1 stuck_detection.stop_time
[s] double Stop time threshold for ego vehicle stuck detection 3.0"},{"location":"planning/behavior_path_lane_change_module/#collision-checks-during-lane-change","title":"Collision checks during lane change","text":"The following parameters are configurable in behavior_path_planner.param.yaml
and lane_change.param.yaml
.
safety_check.execution.lateral_distance_max_threshold
[m] double The lateral distance threshold used to determine whether the lateral distance between two objects is enough and whether the lane change is safe. 2.0 safety_check.execution.longitudinal_distance_min_threshold
[m] double The longitudinal distance threshold used to determine whether the longitudinal distance between two objects is enough and whether the lane change is safe. 3.0 safety_check.execution.expected_front_deceleration
[m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.0 safety_check.execution.expected_rear_deceleration
[m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -1.0 safety_check.execution.rear_vehicle_reaction_time
[s] double The reaction time of the rear vehicle driver, from the driver noticing the sudden braking of the front vehicle until the driver steps on the brake. 2.0 safety_check.execution.rear_vehicle_safety_time_margin
[s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 2.0 safety_check.execution.enable_collision_check_at_prepare_phase
[-] boolean Perform collision check starting from the prepare phase. If false
, the collision check is only evaluated for the lane changing phase. true safety_check.execution.prepare_phase_ignore_target_speed_thresh
[m/s] double Ignore the prepare phase collision check for objects whose speed is less than the configured value. enable_collision_check_at_prepare_phase
must be true
0.1 safety_check.execution.check_objects_on_current_lanes
[-] boolean If true, the lane change module includes objects on the current lanes. true safety_check.execution.check_objects_on_other_lanes
[-] boolean If true, the lane change module includes objects on other lanes. true safety_check.execution.use_all_predicted_path
[-] boolean If false, use only the predicted path that has the maximum confidence. true"},{"location":"planning/behavior_path_lane_change_module/#cancel_1","title":"cancel","text":"Name Unit Type Description Default value safety_check.cancel.lateral_distance_max_threshold
[m] double The lateral distance threshold used to determine whether the lateral distance between two objects is enough and whether the lane change is safe. 1.5 safety_check.cancel.longitudinal_distance_min_threshold
[m] double The longitudinal distance threshold used to determine whether the longitudinal distance between two objects is enough and whether the lane change is safe. 3.0 safety_check.cancel.expected_front_deceleration
[m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.5 safety_check.cancel.expected_rear_deceleration
[m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -2.5 safety_check.cancel.rear_vehicle_reaction_time
[s] double The reaction time of the rear vehicle driver, from the driver noticing the sudden braking of the front vehicle until the driver steps on the brake. 2.0 safety_check.cancel.rear_vehicle_safety_time_margin
[s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 2.5 safety_check.cancel.enable_collision_check_at_prepare_phase
[-] boolean Perform collision check starting from the prepare phase. If false
, the collision check is only evaluated for the lane changing phase. false safety_check.cancel.prepare_phase_ignore_target_speed_thresh
[m/s] double Ignore the prepare phase collision check for objects whose speed is less than the configured value. enable_collision_check_at_prepare_phase
must be true
0.2 safety_check.cancel.check_objects_on_current_lanes
[-] boolean If true, the lane change module includes objects on the current lanes. false safety_check.cancel.check_objects_on_other_lanes
[-] boolean If true, the lane change module includes objects on other lanes. false safety_check.cancel.use_all_predicted_path
[-] boolean If false, use only the predicted path that has the maximum confidence. false"},{"location":"planning/behavior_path_lane_change_module/#stuck","title":"stuck","text":"Name Unit Type Description Default value safety_check.stuck.lateral_distance_max_threshold
[m] double The lateral distance threshold used to determine whether the lateral distance between two objects is enough and whether the lane change is safe. 2.0 safety_check.stuck.longitudinal_distance_min_threshold
[m] double The longitudinal distance threshold used to determine whether the longitudinal distance between two objects is enough and whether the lane change is safe. 3.0 safety_check.stuck.expected_front_deceleration
[m/s^2] double The front object's maximum deceleration when the front vehicle performs sudden braking. (*1) -1.0 safety_check.stuck.expected_rear_deceleration
[m/s^2] double The rear object's maximum deceleration when the rear vehicle performs sudden braking. (*1) -1.0 safety_check.stuck.rear_vehicle_reaction_time
[s] double The reaction time of the rear vehicle driver, from the driver noticing the sudden braking of the front vehicle until the driver steps on the brake. 2.0 safety_check.stuck.rear_vehicle_safety_time_margin
[s] double The time buffer for the rear vehicle to come to a complete stop when its driver performs sudden braking. 2.0 safety_check.stuck.enable_collision_check_at_prepare_phase
[-] boolean Perform collision check starting from the prepare phase. If false
, the collision check is only evaluated for the lane changing phase. true safety_check.stuck.prepare_phase_ignore_target_speed_thresh
[m/s] double Ignore the prepare phase collision check for objects whose speed is less than the configured value. enable_collision_check_at_prepare_phase
must be true
0.1 safety_check.stuck.check_objects_on_current_lanes
[-] boolean If true, the lane change module includes objects on the current lanes. true safety_check.stuck.check_objects_on_other_lanes
[-] boolean If true, the lane change module includes objects on other lanes. true safety_check.stuck.use_all_predicted_path
[-] boolean If false, use only the predicted path that has the maximum confidence. true (*1) the value must be negative.
"},{"location":"planning/behavior_path_lane_change_module/#abort-lane-change","title":"Abort lane change","text":"The following parameters are configurable in lane_change.param.yaml
.
cancel.enable_on_prepare_phase
[-] boolean Enable cancel lane change true cancel.enable_on_lane_changing_phase
[-] boolean Enable abort lane change. false cancel.delta_time
[s] double The time taken to start steering to return to the center line. 3.0 cancel.duration
[s] double The time taken to complete returning to the center line. 3.0 cancel.max_lateral_jerk
[m/sss] double The maximum lateral jerk for abort path 1000.0 cancel.overhang_tolerance
[m] double Lane change cancel is prohibited if the vehicle head exceeds the lane boundary by more than this tolerance distance 0.0"},{"location":"planning/behavior_path_lane_change_module/#debug","title":"Debug","text":"The following parameters are configurable in lane_change.param.yaml
.
publish_debug_marker
[-] boolean Flag to publish debug marker false"},{"location":"planning/behavior_path_lane_change_module/#debug-marker-visualization","title":"Debug Marker & Visualization","text":"To enable the debug marker, execute (no restart is needed)
ros2 param set /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner lane_change.publish_debug_marker true\n
or simply set the publish_debug_marker
to true
in the lane_change.param.yaml
for permanent effect (restart is needed).
Then add the marker
/planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/debug/lane_change_left\n
in rviz2
.
Available information
The Behavior Path Planner's main objective is to significantly enhance the safety of autonomous vehicles by minimizing the risk of accidents. It improves driving efficiency through time conservation and underpins reliability with its rule-based approach. Additionally, it allows users to integrate their own custom behavior modules or use it with different types of vehicles, such as cars, buses, and delivery robots, as well as in various environments, from busy urban streets to open highways.
The module begins by thoroughly analyzing the ego vehicle's current situation, including its position, speed, and surrounding environment. This analysis leads to essential driving decisions about lane changes or stopping and subsequently generates a path that is both safe and efficient. It considers road geometry, traffic rules, and dynamic conditions while also incorporating obstacle avoidance to respond to static and dynamic obstacles such as other vehicles, pedestrians, or unexpected roadblocks, ensuring safe navigation.
Moreover, the planner actively interacts with other traffic participants, predicting their actions and accordingly adjusting the vehicle's path. This ensures not only the safety of the autonomous vehicle but also contributes to smooth traffic flow. Its adherence to traffic laws, including speed limits and traffic signals, further guarantees lawful and predictable driving behavior. The planner is also designed to minimize sudden or abrupt maneuvers, aiming for a comfortable and natural driving experience.
Note
The Planning Component Design Document outlines the foundational philosophy guiding the design and future development of the Behavior Path Planner module. We strongly encourage readers to consult this document to understand the rationale behind its current configuration and the direction of its ongoing development.
"},{"location":"planning/behavior_path_planner/#purpose-use-cases","title":"Purpose / Use Cases","text":"Essentially, the module has three primary responsibilities:
Behavior Path Planner has the following scene modules.
Name Description Details Lane Following this module generates a reference path from the lanelet centerline. LINK Avoidance this module generates an avoidance path when there are objects that should be avoided. LINK Dynamic Avoidance WIP LINK Avoidance By Lane Change this module generates a lane change path when there are objects that should be avoided. LINK Lane Change this module is performed when a lane change is necessary and a collision check with other vehicles is cleared. LINK External Lane Change WIP LINK Start Planner this module is performed when the ego vehicle is stationary and the footprint of the ego vehicle is included in the shoulder lane. This module ends when the ego vehicle merges into the road. LINK Goal Planner this module is performed when the ego vehicle is in a road lane and the goal is in the shoulder lane. The ego vehicle will stop at the goal. LINK Side Shift (for remote control) this module shifts the path to the left or right according to an external instruction. LINK
click on the following images to view the video of their execution
Note
Users can refer to Planning component design for some additional behavior.
"},{"location":"planning/behavior_path_planner/#how-to-add-or-implement-new-module","title":"How to add or implement new module?","text":"All scene modules are implemented by inheriting base class scene_module_interface.hpp
.
Warning
The remainder of this subsection is work in progress (WIP).
"},{"location":"planning/behavior_path_planner/#planner-manager","title":"Planner Manager","text":"The Planner Manager's responsibilities include:
Note
To check the scene modules' transitions, i.e., registered, approved, and candidate modules, set verbose: true
in the behavior path planner configuration file.
Note
For more in-depth information, refer to Manager design document.
"},{"location":"planning/behavior_path_planner/#inputs-outputs-api","title":"Inputs / Outputs / API","text":""},{"location":"planning/behavior_path_planner/#input","title":"Input","text":"Name Required? Type Description ~/input/odometry \u25cbnav_msgs::msg::Odometry
for ego velocity. ~/input/accel \u25cb geometry_msgs::msg::AccelWithCovarianceStamped
for ego acceleration. ~/input/objects \u25cb autoware_auto_perception_msgs::msg::PredictedObjects
dynamic objects from perception module. ~/input/occupancy_grid_map \u25cb nav_msgs::msg::OccupancyGrid
occupancy grid map from perception module. This is used for only Goal Planner module. ~/input/traffic_signals \u25cb autoware_perception_msgs::msg::TrafficSignalArray
traffic signals information from the perception module ~/input/vector_map \u25cb autoware_auto_mapping_msgs::msg::HADMapBin
vector map information. ~/input/route \u25cb autoware_planning_msgs::msg::LaneletRoute
current route from start to goal. ~/input/scenario \u25cb tier4_planning_msgs::msg::Scenario
Launches behavior path planner if current scenario == Scenario:LaneDriving
. ~/input/lateral_offset \u25b3 tier4_planning_msgs::msg::LateralOffset
lateral offset to trigger side shift ~/system/operation_mode/state \u25cb autoware_adapi_v1_msgs::msg::OperationModeState
Allows planning module to know if vehicle is in autonomous mode or can be controlled (ref)"},{"location":"planning/behavior_path_planner/#output","title":"Output","text":"Name Type Description QoS Durability ~/output/path autoware_auto_planning_msgs::msg::PathWithLaneId
the path generated by modules. volatile
~/output/turn_indicators_cmd autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand
turn indicators command. volatile
~/output/hazard_lights_cmd autoware_auto_vehicle_msgs::msg::HazardLightsCommand
hazard lights command. volatile
~/output/modified_goal autoware_planning_msgs::msg::PoseWithUuidStamped
output modified goal commands. transient_local
~/output/stop_reasons tier4_planning_msgs::msg::StopReasonArray
describes the reason for the ego vehicle stop volatile
~/output/reroute_availability tier4_planning_msgs::msg::RerouteAvailability
the path the module is about to take, to be executed as soon as external approval is obtained. volatile
"},{"location":"planning/behavior_path_planner/#debug","title":"Debug","text":"Name Type Description QoS Durability ~/debug/avoidance_debug_message_array tier4_planning_msgs::msg::AvoidanceDebugMsgArray
debug message for avoidance. Notifies users of the reasons why an avoidance path cannot be generated. volatile
~/debug/lane_change_debug_message_array tier4_planning_msgs::msg::LaneChangeDebugMsgArray
debug message for lane change. Notifies users of the unsafe reasons during the lane changing process. volatile
~/debug/maximum_drivable_area visualization_msgs::msg::MarkerArray
shows maximum static drivable area. volatile
~/debug/turn_signal_info visualization_msgs::msg::MarkerArray
TBA volatile
~/debug/bound visualization_msgs::msg::MarkerArray
debug for static drivable area volatile
~/planning/path_candidate/* autoware_auto_planning_msgs::msg::Path
the path before approval. volatile
~/planning/path_reference/* autoware_auto_planning_msgs::msg::Path
reference path generated by each module. volatile
Note
For specific information of which topics are being subscribed and published, refer to behavior_path_planner.xml.
"},{"location":"planning/behavior_path_planner/#how-to-enable-or-disable-the-modules","title":"How to enable or disable the modules","text":"Enabling and disabling the modules in the behavior path planner is primarily managed through two key files: default_preset.yaml
and behavior_path_planner.launch.xml
.
The default_preset.yaml
file acts as a configuration file for enabling or disabling specific modules within the planner. It contains a series of arguments which represent the behavior path planner's modules or features. For example:
launch_avoidance_module
: Set to true
to enable the avoidance module, or false
to disable it.Note
Click here to view the default_preset.yaml
.
The behavior_path_planner.launch.xml
file references the settings defined in default_preset.yaml
to apply the configurations when the behavior path planner's node is running. For instance, the parameter avoidance.enable_module
in
<param name=\"avoidance.enable_module\" value=\"$(var launch_avoidance_module)\"/>\n
corresponds to launch_avoidance_module from default_preset.yaml
.
Therefore, to enable or disable a module, simply set the corresponding module in default_preset.yaml
to true
or false
. These changes will be applied upon the next launch of Autoware.
A sophisticated methodology is used for path generation, particularly focusing on maneuvers like lane changes and avoidance. At the core of this design is the smooth lateral shifting of the reference path, achieved through a constant-jerk profile. This approach ensures a consistent rate of change in acceleration, facilitating smooth transitions and minimizing abrupt changes in lateral dynamics, crucial for passenger comfort and safety.
The design involves complex mathematical formulations for calculating the lateral shift of the vehicle's path over time. These calculations include determining lateral displacement, velocity, and acceleration, while considering the vehicle's lateral acceleration and velocity limits. This is essential for ensuring that the vehicle's movements remain safe and manageable.
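For intuition, a single constant-jerk segment of such a profile integrates as shown below; this is textbook kinematics used for illustration, not the planner's actual code.
// Lateral state after time t under a constant lateral jerk j, starting from
// offset d0, lateral velocity v0, and lateral acceleration a0.
struct LateralState
{
  double d;  // lateral displacement [m]
  double v;  // lateral velocity [m/s]
  double a;  // lateral acceleration [m/s^2]
};

LateralState integrate_constant_jerk(const LateralState & s0, const double j, const double t)
{
  LateralState s;
  s.a = s0.a + j * t;
  s.v = s0.v + s0.a * t + 0.5 * j * t * t;
  s.d = s0.d + s0.v * t + 0.5 * s0.a * t * t + j * t * t * t / 6.0;
  return s;
}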
The ShiftLine
struct (as seen here) is utilized to represent points along the path where the lateral shift starts and ends. It includes details like the start and end points in absolute coordinates, the relative shift lengths at these points compared to the reference path, and the associated indexes on the reference path. This struct is integral to managing the path shifts, as it allows the path planner to dynamically adjust the trajectory based on the vehicle's current position and planned maneuver.
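Conceptually, the struct bundles the following pieces of information. This is a simplified sketch; field names may differ, so refer to the linked source for the actual definition.
#include <cstddef>
#include <geometry_msgs/msg/pose.hpp>

// Simplified sketch of the ShiftLine data described above.
struct ShiftLine
{
  geometry_msgs::msg::Pose start;  // shift start point in absolute coordinates
  geometry_msgs::msg::Pose end;    // shift end point in absolute coordinates
  double start_shift_length;       // shift length relative to the reference path at the start
  double end_shift_length;         // shift length relative to the reference path at the end
  size_t start_idx;                // index of the start point on the reference path
  size_t end_idx;                  // index of the end point on the reference path
};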
Furthermore, the design and its implementation incorporate various equations and mathematical models to calculate essential parameters for the path shift. These include the total distance of the lateral shift, the maximum allowable lateral acceleration and jerk, and the total time required for the shift. Practical considerations are also noted, such as simplifying assumptions in the absence of a specific time interval for most lane change and avoidance cases.
The shifted path generation logic enables the behavior path planner to dynamically generate safe and efficient paths, precisely controlling the vehicle\u2019s lateral movements to ensure the smooth execution of lane changes and avoidance maneuvers. This careful planning and execution adhere to the vehicle's dynamic capabilities and safety constraints, maximizing efficiency and safety in autonomous vehicle navigation.
Note
If you're a math lover, refer to Path Generation Design for the nitty-gritty.
"},{"location":"planning/behavior_path_planner/#collision-assessment-safety-check","title":"Collision Assessment / Safety check","text":"The purpose of the collision assessment function in the Behavior Path Planner is to evaluate the potential for collisions with target objects across all modules. It is utilized in two scenarios:
The safety check process involves several steps. Initially, it obtains the pose of the target object at a specific time, typically through interpolation of the predicted path. It then checks for any overlap between the ego vehicle and the target object at this time. If an overlap is detected, the path is deemed unsafe. The function also identifies which vehicle is in front by using the arc length along the given path. The function operates under the assumption that accurate data on the position, velocity, and shape of both the ego vehicle (the autonomous vehicle) and any target objects are available. It also relies on the yaw angle of each point in the predicted paths of these objects, which is expected to point towards the next path point.
A critical part of the safety check is the calculation of the RSS (Responsibility-Sensitive Safety) distance-inspired algorithm. This algorithm considers factors such as reaction time, safety time margin, and the velocities and decelerations of both vehicles. Extended object polygons are created for both the ego and target vehicles. Notably, the rear object\u2019s polygon is extended by the RSS distance longitudinally and by a lateral margin. The function finally checks for overlap between this extended rear object polygon and the front object polygon. Any overlap indicates a potential unsafe situation.
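A minimal sketch of such an RSS-inspired longitudinal distance, using the reaction time, safety time margin, and braking parameters listed in the tables above; the exact formula in the implementation may differ.
#include <algorithm>
#include <cmath>

// RSS-inspired minimum longitudinal gap: the rear vehicle travels during the
// reaction time (plus safety margin), then both vehicles brake to a stop with
// the given decelerations (negative values, cf. expected_*_deceleration).
double calc_rss_distance(
  const double front_velocity, const double rear_velocity,
  const double front_deceleration, const double rear_deceleration,
  const double reaction_time, const double safety_time_margin)
{
  const double reaction_distance = rear_velocity * (reaction_time + safety_time_margin);
  const double rear_stop_distance =
    rear_velocity * rear_velocity / (2.0 * std::abs(rear_deceleration));
  const double front_stop_distance =
    front_velocity * front_velocity / (2.0 * std::abs(front_deceleration));
  return std::max(0.0, reaction_distance + rear_stop_distance - front_stop_distance);
}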
However, the module does have a limitation concerning the yaw angle of each point in the predicted paths of target objects, which may not always accurately point to the next point, leading to potential inaccuracies in some edge cases.
Note
For further reading on the collision assessment method, please refer to Safety check utils
"},{"location":"planning/behavior_path_planner/#generating-drivable-area","title":"Generating Drivable Area","text":""},{"location":"planning/behavior_path_planner/#static-drivable-area-logic","title":"Static Drivable Area logic","text":"The drivable area is used to determine the area in which the ego vehicle can travel. The primary goal of static drivable area expansion is to ensure safe travel by generating an area that encompasses only the necessary spaces for the vehicle's current behavior, while excluding non-essential areas. For example, while avoidance
module is running, the drivable area includes additional space needed for maneuvers around obstacles, and it limits the behavior by not extending the avoidance path outside of lanelet areas.
Static drivable area expansion operates under assumptions about the correct arrangement of lanes and the coverage of both the front and rear of the vehicle within the left and right boundaries. Key parameters for drivable area generation include extra footprint offsets for the ego vehicle, the handling of dynamic objects, the maximum expansion distance, and specific methods for expansion. Additionally, since each module generates its own drivable area, the system sorts the drivable lanes based on the vehicle's passage order before passing them as input to the next running module's drivable area generation, or before generating a unified drivable area. This ensures the correct definition of the lanes used in drivable area generation.
Note
Further details are provided in Drivable Area Design.
"},{"location":"planning/behavior_path_planner/#dynamic-drivable-area-logic","title":"Dynamic Drivable Area Logic","text":"Large vehicles require much more space, which sometimes causes them to veer out of their current lane. A typical example being a bus making a turn at a corner. In such cases, relying on a static drivable area is insufficient, since the static method depends on lane information provided by high-definition maps. To overcome the limitations of the static approach, the dynamic drivable area expansion algorithm adjusts the navigable space for an autonomous vehicle in real-time. It conserves computational power by reusing previously calculated path data, updating only when there is a significant change in the vehicle's position. The system evaluates the minimum lane width necessary to accommodate the vehicle's turning radius and other dynamic factors. It then calculates the optimal expansion of the drivable area's boundaries to ensure there is adequate space for safe maneuvering, taking into account the vehicle's path curvature. The rate at which these boundaries can expand or contract is moderated to maintain stability in the vehicle's navigation. The algorithm aims to maximize the drivable space while avoiding fixed obstacles and adhering to legal driving limits. Finally, it applies these boundary adjustments and smooths out the path curvature calculations to ensure a safe and legally compliant navigable path is maintained throughout the vehicle's operation.
Note
The feature can be enabled in the drivable_area_expansion.param.yaml.
"},{"location":"planning/behavior_path_planner/#generating-turn-signal","title":"Generating Turn Signal","text":"The Behavior Path Planner module uses the autoware_auto_vehicle_msgs::msg::TurnIndicatorsCommand
to output turn signal commands (see TurnIndicatorsCommand.idl). The system evaluates the driving context and determines when to activate turn signals based on its maneuver planning\u2014like turning, lane changing, or obstacle avoidance.
Within this framework, the system differentiates between desired and required blinker activations. Desired activations are those recommended by traffic laws for typical driving scenarios, such as signaling before a lane change or turn. Required activations are those that are deemed mandatory for safety reasons, like signaling an abrupt lane change to avoid an obstacle.
The TurnIndicatorsCommand
message structure has a command field that can take one of several constants: NO_COMMAND
indicates no signal is necessary, DISABLE
to deactivate signals, ENABLE_LEFT
to signal a left turn, and ENABLE_RIGHT
to signal a right turn. The Behavior Path Planner sends these commands at the appropriate times, based on its rules-based system that considers both the desired and required scenarios for blinker activation.
Note
For more in-depth information, refer to Turn Signal Design document.
"},{"location":"planning/behavior_path_planner/#rerouting","title":"Rerouting","text":"Warning
Rerouting is a feature that was still under progress. Further information will be included on a later date.
"},{"location":"planning/behavior_path_planner/#parameters-and-configuration","title":"Parameters and Configuration","text":"The configuration files are organized in a hierarchical directory structure for ease of navigation and management. Each subdirectory contains specific configuration files relevant to its module. The root directory holds general configuration files that apply to the overall behavior of the planner. The following is an overview of the directory structure with the respective configuration files.
behavior_path_planner\n\u251c\u2500\u2500 behavior_path_planner.param.yaml\n\u251c\u2500\u2500 drivable_area_expansion.param.yaml\n\u251c\u2500\u2500 scene_module_manager.param.yaml\n\u251c\u2500\u2500 avoidance\n\u2502 \u2514\u2500\u2500 avoidance.param.yaml\n\u251c\u2500\u2500 avoidance_by_lc\n\u2502 \u2514\u2500\u2500 avoidance_by_lc.param.yaml\n\u251c\u2500\u2500 dynamic_avoidance\n\u2502 \u2514\u2500\u2500 dynamic_avoidance.param.yaml\n\u251c\u2500\u2500 goal_planner\n\u2502 \u2514\u2500\u2500 goal_planner.param.yaml\n\u251c\u2500\u2500 lane_change\n\u2502 \u2514\u2500\u2500 lane_change.param.yaml\n\u251c\u2500\u2500 side_shift\n\u2502 \u2514\u2500\u2500 side_shift.param.yaml\n\u2514\u2500\u2500 start_planner\n \u2514\u2500\u2500 start_planner.param.yaml\n
Similarly, the common directory contains configuration files that are used across various modules, providing shared parameters and settings essential for the functioning of the Behavior Path Planner:
common\n\u251c\u2500\u2500 common.param.yaml\n\u251c\u2500\u2500 costmap_generator.param.yaml\n\u2514\u2500\u2500 nearest_search.param.yaml\n
The preset directory contains the configurations for managing the operational state of various modules. It includes the default_preset.yaml file, which specifically caters to enabling and disabling modules within the system.
preset\n\u2514\u2500\u2500 default_preset.yaml\n
"},{"location":"planning/behavior_path_planner/#limitations-future-work","title":"Limitations & Future Work","text":"Warning
Under Construction
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_limitations/","title":"Limitations","text":""},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_limitations/#limitations","title":"Limitations","text":"The document describes the limitations that are currently present in the behavior_path_planner
module.
The following items (but not limited to) fall in the scope of limitation:
To fully utilize the Lanelet2
's API, the design of the vector map (.osm
) needs to follow all the criteria described in Lanelet2
documentation. Specifically, in the case of 2 or more lanes, the Linestrings that divide the current lane with the opposite/adjacent lane need to have a matching Linestring ID
. Assume the following ideal case.
In the image, Linestring ID51
is shared by Lanelet A
and Lanelet B
. Hence we can directly use the available left
, adjacentLeft
, right
, adjacentRight
and findUsages
method within Lanelet2
's API to directly query the direction and opposite lane availability.
const auto right_lane = routing_graph_ptr_->right(lanelet);\nconst auto adjacent_right_lane = routing_graph_ptr_->adjacentRight(lanelet);\nconst auto opposite_right_lane = lanelet_map_ptr_->laneletLayer.findUsages(lanelet.rightBound().invert());\n
The following images show the situation where these API does not work directly. This means that we cannot use them straight away, and several assumptions and logical instruction are needed to make these APIs work.
In this example (multiple linestring issues), Lanelet C
contains Linestring ID61
and ID62
, while Lanelet D
contains Linestring ID63
and ID 64
. Although the Linestring ID62
and ID64
have identical point IDs and seem visually connected, the API will treat these Linestring as though they are separated. When it searches for any Lanelet
that is connected via Linestring ID62
, it will return NULL
, since ID62
only connects to Lanelet C
and not other Lanelet
.
Although, in this case, it is possible to forcefully search the lanelet availability by checking the lanelet that contains the points, usinggetLaneletFromPoint
method. But, the implementation requires complex rules for it to work. Take the following images as an example.
Assume Object X
is in Lanelet F
. We can forcefully search Lanelet E
via Point 7
, and it will work if Point 7
is utilized by only 2 lanelet. However, the complexity increases when we want to start searching for the direction of the opposite lane. We can infer the direction of the lanelet by using mathematical operations (dot product of vector V_ID72
(Point 6
minus Point 9
), and V_ID74
(Point 7
minus Point 8
). But, notice that we did not use Point 7 in V_ID72. This is because searching it requires an iteration, adding additional non-beneficial computation.
Suppose the points are used by more than 2 lanelets. In that case, we have to find the differences for all lanelet, and the result might be undefined. The reason is that the differences between the coordinates do not reflect the actual shape of the lanelet. The following image demonstrates this point.
There are many other available solutions to try. However, further attempt to solve this might cause issues in the future, especially for maintaining or scaling up the software.
In conclusion, the multiple Linestring issues will not be supported. Covering these scenarios might give the user an \"everything is possible\" impression. This is dangerous since any attempt to create a non-standardized vector map is not compliant with safety regulations.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_limitations/#limitation-avoidance-at-corners-and-intersections","title":"Limitation: Avoidance at Corners and Intersections","text":"Currently, the implementation doesn't cover avoidance at corners and intersections. The reason is similar to here. However, this case can still be supported in the future (assuming the vector map is defined correctly).
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_limitations/#limitation-chattering-shifts","title":"Limitation: Chattering shifts","text":"There are possibilities that the shifted path chatters as a result of various factors. For example, bounded box shape or position from the perception input. Sometimes, it is difficult for the perception to get complete information about the object's size. As the object size is updated, the object length will also be updated. This might cause shifts point to be re-calculated, therefore resulting in chattering shift points.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/","title":"Manager design","text":""},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#manager-design","title":"Manager design","text":""},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#purpose-role","title":"Purpose / Role","text":"The manager launches and executes scene modules in behavior_path_planner
depending on the use case, and has been developed to achieve following features:
Movie
Support status:
Name Simple exclusive execution Advanced simultaneous execution Avoidance Avoidance By Lane Change Lane Change External Lane Change Goal Planner (without goal modification) Goal Planner (with goal modification) Pull Out Side ShiftClick here for supported scene modules.
Warning
It is still under development and some functions may be unstable.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#overview","title":"Overview","text":"The manager is the core part of the behavior_path_planner
implementation. It outputs path based on the latest data.
The manager has sub-managers for each scene module, and its main task is
Additionally, the manager generates root reference path, and if any other modules don't request execution, the path is used as the planning result of behavior_path_planner
.
The sub-manager's main task is
registered_modules_
.registered_modules_
.sub-managers
Sub-manager is registered on the manager with the following function.
/**\n * @brief register managers.\n * @param manager pointer.\n */\nvoid registerSceneModuleManager(const SceneModuleManagerPtr & manager_ptr)\n{\nRCLCPP_INFO(logger_, \"register %s module\", manager_ptr->getModuleName().c_str());\nmanager_ptrs_.push_back(manager_ptr);\nprocessing_time_.emplace(manager_ptr->getModuleName(), 0.0);\n}\n
Code is here
Sub-manager has the following parameters that are needed by the manager to manage the launched modules, and these parameters can be set for each module.
struct ModuleConfigParameters\n{\nbool enable_module{false};\nbool enable_rtc{false};\nbool enable_simultaneous_execution_as_approved_module{false};\nbool enable_simultaneous_execution_as_candidate_module{false};\nuint8_t priority{0};\nuint8_t max_module_size{0};\n};\n
Code is here
Name Type Descriptionenable_module
bool if true, the sub-manager is registered on the manager. enable_rtc
bool if true, the scene modules should be approved by (request to cooperate)rtc function. if false, the module can be run without approval from rtc. enable_simultaneous_execution_as_candidate_module
bool if true, the manager allows its scene modules to run with other scene modules as candidate module. enable_simultaneous_execution_as_approved_module
bool if true, the manager allows its scene modules to run with other scene modules as approved module. priority
uint8_t the manager decides execution priority based on this parameter. The smaller the number is, the higher the priority is. max_module_size
uint8_t the sub-manager can run some modules simultaneously. this parameter set the maximum number of the launched modules."},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#scene-modules","title":"Scene modules","text":"Scene modules receives necessary data and RTC command, and outputs candidate path(s), reference path and RTC cooperate status. When multiple modules run in series, the output of the previous module is received as input and the information is used to generate a new modified path, as shown in the following figure. And, when one module is running alone, it receives a reference path generated from the centerline of the lane in which Ego is currently driving as previous module output.
scene module I/O Type Description IN
behavior_path_planner::BehaviorModuleOutput
previous module output. contains data necessary for path planning. IN behavior_path_planner::PlannerData
contains data necessary for path planning. IN tier4_planning_msgs::srv::CooperateCommands
contains approval data for scene module's path modification. (details) OUT behavior_path_planner::BehaviorModuleOutput
contains modified path, turn signal information, etc... OUT tier4_planning_msgs::msg::CooperateStatus
contains RTC cooperate status. (details) OUT autoware_auto_planning_msgs::msg::Path
candidate path output by a module that has not received approval for path change. when it approved, the ego's following path is switched to this path. (just for visualization) OUT autoware_auto_planning_msgs::msg::Path
reference path generated from the centerline of the lane the ego is going to follow. (just for visualization) OUT visualization_msgs::msg::MarkerArray
virtual wall, debug info, etc... Scene modules running on the manager are stored on the candidate modules stack or approved modules stack depending on the condition whether the path modification has been approved or not.
Stack Approval condition Description candidate modules Not approved The candidate modules whose modified path has not been approved by RTC is stored in vectorcandidate_module_ptrs_
in the manager. The candidate modules stack is updated in the following order. 1. The manager selects only those modules that can be executed based on the configuration of the sub-manager whose scene module requests execution. 2. Determines the execution priority. 3. Executes them as candidate module. All of these modules receive the decided (approved) path from approved modules stack and RUN in PARALLEL. approved modules Already approved When the path modification is approved via RTC commands, the manager moves the candidate module to approved modules stack. These modules are stored in approved_module_ptrs_
. In this stack, all scene modules RUN in SERIES."},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#process-flow","title":"Process flow","text":"There are 6 steps in one process:
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step1","title":"Step1","text":"At first, the manager set latest planner data, and run all approved modules and get output path. At this time, the manager checks module status and removes expired modules from approved modules stack.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step2","title":"Step2","text":"Input approved modules output and necessary data to all registered modules, and the modules judge the necessity of path modification based on it. The manager checks which module makes execution request.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step3","title":"Step3","text":"Check request module existence.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step4","title":"Step4","text":"The manager decides which module to execute as candidate modules from the modules that requested to execute path modification.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step5","title":"Step5","text":"Decides the priority order of execution among candidate modules. And, run all candidate modules. Each modules outputs reference path and RTC cooperate status.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step6","title":"Step6","text":"Move approved module to approved modules stack from candidate modules stack.
and, within a single planning cycle, these steps are repeated until the following conditions are satisfied.
while (rclcpp::ok()) {\n/**\n * STEP1: get approved modules' output\n */\nconst auto approved_modules_output = runApprovedModules(data);\n\n/**\n * STEP2: check modules that need to be launched\n */\nconst auto request_modules = getRequestModules(approved_modules_output);\n\n/**\n * STEP3: if there is no module that need to be launched, return approved modules' output\n */\nif (request_modules.empty()) {\nprocessing_time_.at(\"total_time\") = stop_watch_.toc(\"total_time\", true);\nreturn approved_modules_output;\n}\n\n/**\n * STEP4: if there is module that should be launched, execute the module\n */\nconst auto [highest_priority_module, candidate_modules_output] =\nrunRequestModules(request_modules, data, approved_modules_output);\nif (!highest_priority_module) {\nprocessing_time_.at(\"total_time\") = stop_watch_.toc(\"total_time\", true);\nreturn approved_modules_output;\n}\n\n/**\n * STEP5: if the candidate module's modification is NOT approved yet, return the result.\n * NOTE: the result is output of the candidate module, but the output path don't contains path\n * shape modification that needs approval. On the other hand, it could include velocity profile\n * modification.\n */\nif (highest_priority_module->isWaitingApproval()) {\nprocessing_time_.at(\"total_time\") = stop_watch_.toc(\"total_time\", true);\nreturn candidate_modules_output;\n}\n\n/**\n * STEP6: if the candidate module is approved, push the module into approved_module_ptrs_\n */\naddApprovedModule(highest_priority_module);\nclearCandidateModules();\n}\n
Code is here
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#priority-of-execution-request","title":"Priority of execution request","text":"Compare priorities parameter among sub-managers to determine the order of execution based on config. Therefore, the priority between sub-modules does NOT change at runtime.
/**\n * @brief swap the modules order based on it's priority.\n * @param modules.\n * @details for now, the priority is decided in config file and doesn't change runtime.\n */\nvoid sortByPriority(std::vector<SceneModulePtr> & modules) const\n{\n// TODO(someone) enhance this priority decision method.\nstd::sort(modules.begin(), modules.end(), [this](auto a, auto b) {\nreturn getManager(a)->getPriority() < getManager(b)->getPriority();\n});\n}\n
Code is here
In the future, however, we are considering having the priorities change dynamically depending on the situation in order to achieve more complex use cases.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#how-to-decide-which-request-modules-to-run","title":"How to decide which request modules to run?","text":"On this manager, it is possible that multiple scene modules may request path modification at same time. In that case, the modules to be executed as candidate module is determined in the following order.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step1_1","title":"Step1","text":"Push back the modules that make a request to request_modules
.
Check approved modules stack, and remove non-executable modules fromrequest_modules
based on the following condition.
enable_simultaneous_execution_as_approved_module
is true
).Executable or not:
Condition A Condition B Condition C Executable as candidate modules? YES - YES YES YES - NO YES NO YES YES YES NO YES NO NO NO NO YES NO NO NO NO NOIf a module that doesn't support simultaneous execution exists in approved modules stack (NOT satisfy Condition B), no more modules can be added to the stack, and therefore none of the modules can be executed as candidate.
For example, if approved module's setting of enable_simultaneous_execution_as_approved_module
is ENABLE, then only modules whose the setting is ENABLE proceed to the next step.
Other examples:
Process Description If approved modules stack is empty, then all request modules proceed to the next step, regardless of the setting ofenable_simultaneous_execution_as_approved_module
. If approved module's setting of enable_simultaneous_execution_as_approved_module
is DISABLE, then all request modules are discarded."},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step3_1","title":"Step3","text":"Sort request_modules
by priority.
Check and pick up executable modules as candidate in order of priority based on the following conditions.
enable_simultaneous_execution_as_candidate_module
is true
).Executable or not:
Condition A Condition B Condition C Executable as candidate modules? YES - YES YES YES - NO YES NO YES YES YES NO YES NO NO NO NO YES NO NO NO NO NOFor example, if the highest priority module's setting of enable_simultaneous_execution_as_candidate_module
is DISABLE, then all modules after the second priority are discarded.
Other examples:
Process Description If a module with a higher priority exists, lower priority modules whose setting ofenable_simultaneous_execution_as_candidate_module
is DISABLE are discarded. If all modules' setting of enable_simultaneous_execution_as_candidate_module
is ENABLE, then all modules proceed to the next step."},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#step5_1","title":"Step5","text":"Run all candidate modules.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#how-to-decide-which-modules-output-to-use","title":"How to decide which module's output to use?","text":"Sometimes, multiple candidate modules are running simultaneously.
In this case, the manager selects a candidate modules which output path is used as behavior_path_planner
output by approval condition in the following rules.
priority
), approved modules always have a higher priority than unapproved modules.Note
The smaller the number is, the higher the priority is.
module priority
Additionally, the manager moves the highest priority module to approved modules stack if it is already approved.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#scene-module-unregister-process","title":"Scene module unregister process","text":"The manager removes expired module in approved modules stack based on the module's status.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#waiting-approval-modules","title":"Waiting approval modules","text":"If one module requests multiple path changes, the module may be back to waiting approval condition again. In this case, the manager moves the module to candidate modules stack. If there are some modules that was pushed back to approved modules stack later than the waiting approved module, it is also removed from approved modules stack.
This is because module C is planning output path with the output of module B as input, and if module B is removed from approved modules stack and the input of module C changes, the output path of module C may also change greatly, and the output path will be unstable.
As a result, the module A's output is used as approved modules stack.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#failure-modules","title":"Failure modules","text":"The failure modules return the status ModuleStatus::FAILURE
. The manager removes the module from approved modules stack as well as waiting approval modules, but the failure module is not moved to candidate modules stack.
As a result, the module A's output is used as approved modules stack.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#succeeded-modules","title":"Succeeded modules","text":"The succeeded modules return the status ModuleStatus::SUCCESS
. The manager removes those modules based on Last In First Out policy. In other words, if a module added later to approved modules stack is still running (is in ModuleStatus::RUNNING
), the manager doesn't remove the succeeded module. The reason for this is the same as in removal for waiting approval modules, and is to prevent sudden changes of the running module's output.
As an exception, if Lane Change module returns status ModuleStatus::SUCCESS
, the manager doesn't remove any modules until all modules is in status ModuleStatus::SUCCESS
. This is because when the manager removes the Lane Change (normal LC, external LC, avoidance by LC) module as succeeded module, the manager updates the information of the lane Ego is currently driving in, so root reference path (= module A's input path) changes significantly at that moment.
When the manager removes succeeded modules, the last added module's output is used as approved modules stack.
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#reference-path-generation","title":"Reference path generation","text":"The root reference path is generated from the centerline of the lanelet sequence that obtained from the root lanelet, and it is not only used as an input to the first added module of approved modules stack, but also used as the output of behavior_path_planner
if none of the modules are running.
root reference path generation
The root lanelet is the closest lanelet within the route, and the update timing is based on Ego's operation mode state.
OperationModeState::AUTONOMOUS
: Update only when the ego moves to right or left lane by lane change module.OperationModeState::AUTONOMOUS
: Update at the beginning of every planning cycle.The manager needs to know the ego behavior and then generate a root reference path from the lanes that Ego should follow.
For example, during autonomous driving, even if Ego moves into the next lane in order to avoid a parked vehicle, the target lanes that Ego should follow will NOT change because Ego will return to the original lane after the avoidance maneuver. Therefore, the manager does NOT update root lanelet even if the avoidance maneuver is finished.
On the other hand, if the lane change is successful, the manager updates root lanelet because the lane that Ego should follow changes.
In addition, while manual driving, the manager always updates root lanelet because the pilot may move to an adjacent lane regardless of the decision of the autonomous driving system.
/**\n * @brief get reference path from root_lanelet_ centerline.\n * @param planner data.\n * @return reference path.\n */\nBehaviorModuleOutput getReferencePath(const std::shared_ptr<PlannerData> & data) const\n{\nconst auto & route_handler = data->route_handler;\nconst auto & pose = data->self_odometry->pose.pose;\nconst auto p = data->parameters;\n\nconstexpr double extra_margin = 10.0;\nconst auto backward_length =\nstd::max(p.backward_path_length, p.backward_path_length + extra_margin);\n\nconst auto lanelet_sequence = route_handler->getLaneletSequence(\nroot_lanelet_.value(), pose, backward_length, std::numeric_limits<double>::max());\n\nlanelet::ConstLanelet closest_lane{};\nif (lanelet::utils::query::getClosestLaneletWithConstrains(\nlanelet_sequence, pose, &closest_lane, p.ego_nearest_dist_threshold,\np.ego_nearest_yaw_threshold)) {\nreturn utils::getReferencePath(closest_lane, data);\n}\n\nif (lanelet::utils::query::getClosestLanelet(lanelet_sequence, pose, &closest_lane)) {\nreturn utils::getReferencePath(closest_lane, data);\n}\n\nreturn {}; // something wrong.\n}\n
Code is here
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#drivable-area-generation","title":"Drivable area generation","text":"Warning
Under Construction
"},{"location":"planning/behavior_path_planner/docs/behavior_path_planner_manager_design/#turn-signal-management","title":"Turn signal management","text":"Warning
Under Construction
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/","title":"Drivable Area design","text":""},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-area-design","title":"Drivable Area design","text":"Drivable Area represents the area where ego vehicle can pass.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#purpose-role","title":"Purpose / Role","text":"In order to defined the area that ego vehicle can travel safely, we generate drivable area in behavior path planner module. Our drivable area is represented by two line strings, which are left_bound
line and right_bound
line respectively. Both left_bound
and right_bound
are created from left and right boundaries of lanelets. Note that left_bound
and right_bound
are generated by generateDrivableArea
function.
Our drivable area has several assumptions.
follow lane
mode, the drivable area should not contain adjacent lanes.Currently, when clipping the left bound or right bound, it can clip the bound more than necessary and the generated path might be conservative.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#parameters-for-drivable-area-generation","title":"Parameters for drivable area generation","text":""},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#static-expansion","title":"Static expansion","text":"Name Unit Type Description Default value drivable_area_right_bound_offset [m] double right offset length to expand drivable area 5.0 drivable_area_left_bound_offset [m] double left offset length to expand drivable area 5.0 drivable_area_types_to_skip [-] string linestring types (as defined in the lanelet map) that will not be expanded road_border"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#dynamic-expansion","title":"Dynamic expansion","text":"Name Unit Type Description Default value enabled [-] boolean if true, dynamically expand the drivable area based on the path curvature true print_runtime [-] boolean if true, runtime is logged by the node true max_expansion_distance [m] double maximum distance by which the original drivable area can be expanded (no limit if set to 0) 0.0 smoothing.curvature_average_window [-] int window size used for smoothing the curvatures using a moving window average 3 smoothing.max_bound_rate [m/m] double maximum rate of change of the bound lateral distance over its arc length 1.0 smoothing.arc_length_range [m] double arc length range where an expansion distance is initially applied 2.0 ego.extra_wheel_base [m] double extra ego wheelbase 0.0 ego.extra_front_overhang [m] double extra ego overhang 0.5 ego.extra_width [m] double extra ego width 1.0 dynamic_objects.avoid [-] boolean if true, the drivable area is not expanded in the predicted path of dynamic objects true dynamic_objects.extra_footprint_offset.front [m] double extra length to add to the front of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.rear [m] double extra length to add to the rear of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.left [m] double extra length to add to the left of the ego footprint 0.5 dynamic_objects.extra_footprint_offset.right [m] double extra length to add to the rear of the ego footprint 0.5 path_preprocessing.max_arc_length [m] double maximum arc length along the path where the ego footprint is projected (0.0 means no limit) 100.0 path_preprocessing.resample_interval [m] double fixed interval between resampled path points (0.0 means path points are directly used) 2.0 path_preprocessing.reuse_max_deviation [m] double if the path changes by more than this value, the curvatures are recalculated. Otherwise they are reused 0.5 avoid_linestring.types [-] string array linestring types in the lanelet maps that will not be crossed when expanding the drivable area [\"road_border\", \"curbstone\"] avoid_linestring.distance [m] double distance to keep between the drivable area and the linestrings to avoid 0.0"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"This section gives details of the generation of the drivable area (left_bound
and right_bound
).
Before generating drivable areas, drivable lanes need to be sorted. Drivable Lanes are selected in each module (Lane Follow
, Avoidance
, Lane Change
, Goal Planner
, Pull Out
, etc.), so more details about the selection of drivable lanes can be found in each module's document. We use the following structure to define the drivable lanes.
struct DrivableLanes\n{\nlanelet::ConstLanelet right_lanelet; // rightmost lane\nlanelet::ConstLanelet left_lanelet; // leftmost lane\nlanelet::ConstLanelets middle_lanelets; // middle lanes\n};\n
The image of the sorted drivable lanes is depicted in the following picture.
Note that the order of the drivable lanes becomes
drivable_lanes = {DrivableLanes1, DrivableLanes2, DrivableLanes3, DrivableLanes4, DrivableLanes5}\n
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-area-generation","title":"Drivable Area Generation","text":"In this section, a drivable area is created using drivable lanes arranged in the order in which vehicles pass by. We created left_bound
from left boundary of the leftmost lanelet and right_bound
from right boundary of the rightmost lanelet. The image of the created drivable area will be the following blue lines. Note that the drivable area is defined in the Path
and PathWithLaneId
messages as
std::vector<geometry_msgs::msg::Point> left_bound;\nstd::vector<geometry_msgs::msg::Point> right_bound;\n
and each point of right bound and left bound has a position in the absolute coordinate system.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#drivable-area-expansion","title":"Drivable Area Expansion","text":""},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#static-expansion_1","title":"Static Expansion","text":"Each module can statically expand the left and right bounds of the target lanes by the parameter defined values. This enables large vehicles to pass narrow curve. The image of this process can be described as
Note that we only expand right bound of the rightmost lane and left bound of the leftmost lane.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#dynamic-expansion_1","title":"Dynamic Expansion","text":"The drivable area can also be expanded dynamically based on a minimum width calculated from the path curvature and the ego vehicle's properties. If static expansion is also enabled, the dynamic expansion will be done after the static expansion such that both expansions are applied.
Without dynamic expansion / With dynamic expansion. Next we detail the algorithm used to expand the drivable area bounds.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#1-calculate-and-smooth-the-path-curvature","title":"1 Calculate and smooth the path curvature","text":"To avoid sudden changes of the dynamically expanded drivable area, we first try to reuse as much of the previous path and its calculated curvatures as possible. Previous path points and curvatures are reused up to the first previous path point that deviates from the new path by more than the reuse_max_deviation
parameter. At this stage, the path is also resampled according to the resample_interval
and cropped according to the max_arc_length
. With the resulting preprocessed path points and previous curvatures, curvatures of the new path points are calculated using the 3 points method and smoothed using a moving window average with window size curvature_average_window
.
Each path point is projected on the original left and right drivable area bounds to calculate its corresponding bound index, its original distance from the bounds, and the projected point. Additionally, for each path point, the minimum drivable area width \(W\) is calculated from the front overhang of ego \(a\), the wheelbase of ego \(l\), the width of ego \(w\), and the path curvature \(k\). This equation was derived from the work of Lim, H., Kim, C., and Jo, A., \"Model Predictive Control-Based Lateral Control of Autonomous Large-Size Bus on Road with Large Curvature,\" SAE Technical Paper 2021-01-0099, 2021.
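A plausible form of this equation, assuming the standard swept-path geometry of a rigid vehicle whose reference point follows an arc of radius \(1/k\) (the radius of the outer front corner minus the radius of the inner rear corner), is: \[ W = \sqrt{\left(\frac{1}{k} + \frac{w}{2}\right)^2 + (l + a)^2} - \left(\frac{1}{k} - \frac{w}{2}\right) \]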
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#3-calculate-maximum-expansion-distances-of-each-bound-point-based-on-dynamic-objects-and-linestring-of-the-vector-map-optional","title":"3 Calculate maximum expansion distances of each bound point based on dynamic objects and linestring of the vector map (optional)","text":"For each drivable area bound point, we calculate its maximum expansion distance as its distance to the closest \"obstacle\" (either a map linestring with type avoid_linestrings.type
, or a dynamic object footprint if dynamic_objects.avoid
is set to true
). If max_expansion_distance
is not 0.0
, it is used here if smaller than the distance to the closest obstacle.
For each bound point, a shift distance is calculated such that the resulting width between corresponding left and right bound points is as close as possible to the minimum width calculated in step 2, while the individual shift distance stays below the previously calculated maximum expansion distance.
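A minimal sketch of this step, with illustrative names (the actual implementation may distribute the expansion between the two sides differently):
// Expand each side toward the minimum width W, clamped by the per-side maximum\n// expansion distances computed in step 3.\nconst double missing_width = std::max(min_width - current_width, 0.0);\nconst double left_shift = std::min(missing_width / 2.0, max_left_expansion);\nconst double right_shift = std::min(missing_width / 2.0, max_right_expansion);\n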
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#5-shift-bound-points-by-the-values-calculated-in-step-4-and-remove-all-loops-in-the-resulting-bound","title":"5 Shift bound points by the values calculated in step 4 and remove all loops in the resulting bound","text":"Finally, each bound point is shifted away from the path by the distance calculated in step 4. Once all points have been shifted, loops are removed from the bound and we obtain our final expanded drivable area.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_drivable_area_design/#visualizing-maximum-drivable-area-debug","title":"Visualizing maximum drivable area (Debug)","text":"Sometimes, the developers might get a different result between two maps that may look identical during visual inspection.
For example, in the same area, one can perform avoidance and another cannot. This might be related to maximum drivable area issues caused by a non-compliant vector map design from the user.
To debug the issue, the maximum drivable area boundary can be visualized.
The maximum drivable area can be visualized by adding the marker from /planning/scenario_planning/lane_driving/behavior_planning/behavior_path_planner/maximum_drivable_area
If the hatched road markings area is defined in the lanelet map, the area can be used as a drivable area. Since the area is expressed in the Lanelet2 polygon format, several steps are required for correct expansion.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/","title":"Path Generation design","text":""},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#path-generation-design","title":"Path Generation design","text":"This document explains how the path is generated for lane change and avoidance, etc. The implementation can be found in path_shifter.hpp.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#overview","title":"Overview","text":"The base idea of the path generation in lane change and avoidance is to smoothly shift the reference path, such as the center line, in the lateral direction. This is achieved by using a constant-jerk profile as in the figure below. More details on how it is used can be found in README. It is assumed that the reference path is smooth enough for this algorithm.
The figure below explains how the application of a constant lateral jerk \\(l^{'''}(s)\\) can be used to induce lateral shifting. In order to comply with the limits on lateral acceleration and velocity, zero-jerk time is employed in the figure ( \\(T_a\\) and \\(T_v\\) ). In each interval where constant jerk is applied, the shift position \\(l(s)\\) can be characterized by a third-degree polynomial. Therefore the shift length from the reference path can then be calculated by combining spline curves.
Note that, because \(T_v\) rarely occurs in lane change and avoidance, \(T_v\) is not considered in the current implementation.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#mathematical-derivation","title":"Mathematical Derivation","text":"With initial longitudinal velocity \\(v_0^{\\rm lon}\\) and longitudinal acceleration \\(a^{\\rm lon}\\), longitudinal position \\(s(t)\\) and longitudinal velocity at each time \\(v^{\\rm lon}(t)\\) can be derived as:
\\[ \\begin{align} s_1&= v^{\\rm lon}_0 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_1&= v^{\\rm lon}_0 + a^{\\rm lon} T_j \\\\ s_2&= v^{\\rm lon}_1 T_a + \\frac{1}{2} a^{\\rm lon} T_a^2 \\\\ v_2&= v^{\\rm lon}_1 + a^{\\rm lon} T_a \\\\ s_3&= v^{\\rm lon}_2 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_3&= v^{\\rm lon}_2 + a^{\\rm lon} T_j \\\\ s_4&= v^{\\rm lon}_3 T_v + \\frac{1}{2} a^{\\rm lon} T_v^2 \\\\ v_4&= v^{\\rm lon}_3 + a^{\\rm lon} T_v \\\\ s_5&= v^{\\rm lon}_4 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_5&= v^{\\rm lon}_4 + a^{\\rm lon} T_j \\\\ s_6&= v^{\\rm lon}_5 T_a + \\frac{1}{2} a^{\\rm lon} T_a^2 \\\\ v_6&= v^{\\rm lon}_5 + a^{\\rm lon} T_a \\\\ s_7&= v^{\\rm lon}_6 T_j + \\frac{1}{2} a^{\\rm lon} T_j^2 \\\\ v_7&= v^{\\rm lon}_6 + a^{\\rm lon} T_j \\end{align} \\]By applying simple integral operations, the following analytical equations can be derived to describe the shift distance \\(l(t)\\) at each time under lateral jerk, lateral acceleration, and velocity constraints.
\\[ \\begin{align} l_1&= \\frac{1}{6}jT_j^3\\\\[10pt] l_2&= \\frac{1}{6}j T_j^3 + \\frac{1}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j\\\\[10pt] l_3&= j T_j^3 + \\frac{3}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j\\\\[10pt] l_4&= j T_j^3 + \\frac{3}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j + j(T_a + T_j)T_j T_v\\\\[10pt] l_5&= \\frac{11}{6} j T_j^3 + \\frac{5}{2} j T_a T_j^2 + \\frac{1}{2} j T_a^2 T_j + j(T_a + T_j)T_j T_v \\\\[10pt] l_6&= \\frac{11}{6} j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j + j(T_a + T_j)T_j T_v\\\\[10pt] l_7&= 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j + j(T_a + T_j)T_j T_v \\end{align} \\]These equations are used to determine the shape of a path. Additionally, by applying further mathematical operations to these basic equations, the expressions of the following subsections can be derived.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#calculation-of-maximum-acceleration-from-transition-time-and-final-shift-length","title":"Calculation of Maximum Acceleration from transition time and final shift length","text":"In the case where there are no limitations on lateral velocity and lateral acceleration, the maximum lateral acceleration during the shifting can be calculated as follows. The constant-jerk time is given by \\(T_j = T_{\\rm total}/4\\) because of its symmetric property. Since \\(T_a=T_v=0\\), the final shift length \\(L=l_7=2jT_j^3\\) can be determined using the above equation. The maximum lateral acceleration is then given by \\(a_{\\rm max} =jT_j\\). This results in the following expression for the maximum lateral acceleration:
\\[ \\begin{align} a_{\\rm max}^{\\rm lat} = \\frac{8L}{T_{\\rm total}^2} \\end{align} \\]"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#calculation-of-ta-tj-and-jerk-from-acceleration-limit","title":"Calculation of Ta, Tj and jerk from acceleration limit","text":"In the case where there are no limitations on lateral velocity, the constant-jerk and acceleration times, as well as the required jerk can be calculated from the acceleration limit, total time, and final shift length as follows. Since \\(T_v=0\\), the final shift length is given by:
\\[ \\begin{align} L = l_7 = 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j \\end{align} \\]Additionally, the velocity profile reveals the following relations:
\\[ \\begin{align} a_{\\rm lim}^{\\rm lat} &= j T_j\\\\ T_{\\rm total} &= 4T_j + 2T_a \\end{align} \\]By solving these three equations, the following can be obtained:
\\[ \\begin{align} T_j&=\\frac{T_{\\rm total}}{2} - \\frac{2L}{a_{\\rm lim}^{\\rm lat} T_{\\rm total}}\\\\[10pt] T_a&=\\frac{4L}{a_{\\rm lim}^{\\rm lat} T_{\\rm total}} - \\frac{T_{\\rm total}}{2}\\\\[10pt] jerk&=\\frac{2a_{\\rm lim} ^2T_{\\rm total}}{a_{\\rm lim}^{\\rm lat} T_{\\rm total}^2-4L} \\end{align} \\]where \\(T_j\\) is the constant-jerk time, \\(T_a\\) is the constant acceleration time, \\(j\\) is the required jerk, \\(a_{\\rm lim}^{\\rm lat}\\) is the lateral acceleration limit, and \\(L\\) is the final shift length.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#calculation-of-required-time-from-jerk-and-acceleration-constraint","title":"Calculation of Required Time from Jerk and Acceleration Constraint","text":"In the case where there are no limitations on lateral velocity, the total time required for shifting can be calculated from the lateral jerk and lateral acceleration limits and the final shift length as follows. By solving the two equations given above:
\\[ L = l_7 = 2 j T_j^3 + 3 j T_a T_j^2 + j T_a^2 T_j,\\quad a_{\\rm lim}^{\\rm lat} = j T_j \\]we obtain the following expressions:
\\[ \\begin{align} T_j &= \\frac{a_{\\rm lim}^{\\rm lat}}{j}\\\\[10pt] T_a &= \\frac{1}{2}\\sqrt{\\frac{a_{\\rm lim}^{\\rm lat}}{j}^2 + \\frac{4L}{a_{\\rm lim}^{\\rm lat}}} - \\frac{3a_{\\rm lim}^{\\rm lat}}{2j} \\end{align} \\]The total time required for shifting can then be calculated as \\(T_{\\rm total}=4T_j+2T_a\\).
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_path_generation_design/#limitation","title":"Limitation","text":"Safety check function checks if the given path will collide with a given target object.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#purpose-role","title":"Purpose / Role","text":"In the behavior path planner, certain modules (e.g., lane change) need to perform collision checks to ensure the safe navigation of the ego vehicle. These utility functions assist the user in conducting safety checks with other road participants.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#assumptions","title":"Assumptions","text":"The safety check module is based on the following assumptions:
Currently the yaw angle of each point of predicted paths of a target object does not point to the next point. Therefore, the safety check function might returns incorrect result for some edge case.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#inner-working-algorithm","title":"Inner working / Algorithm","text":"The flow of the safety check algorithm is described in the following explanations.
Here we explain each step of the algorithm flow.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#1-get-pose-of-the-target-object-at-a-given-time","title":"1. Get pose of the target object at a given time","text":"For the first step, we obtain the pose of the target object at a given time. This can be done by interpolating the predicted path of the object.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#2-check-overlap","title":"2. Check overlap","text":"With the interpolated pose obtained in the step.1, we check if the object and ego vehicle overlaps at a given time. If they are overlapped each other, the given path is unsafe.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#3-get-front-object","title":"3. Get front object","text":"After the overlap check, it starts to perform the safety check for the broader range. In this step, it judges if ego or target object is in front of the other vehicle. We use arc length of the front point of each object along the given path to judge which one is in front of the other. In the following example, target object (red rectangle) is running in front of the ego vehicle (black rectangle).
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#4-calculate-rss-distance","title":"4. Calculate RSS distance","text":"After we find which vehicle is running ahead of the other vehicle, we start to compute the RSS distance. With the reaction time \\(t_{reaction}\\) and safety time margin \\(t_{margin}\\), RSS distance can be described as:
\\[ rss_{dist} = v_{rear} (t_{reaction} + t_{margin}) + \\frac{v_{rear}^2}{2|a_{rear, decel}|} - \\frac{v_{front}^2}{2|a_{front, decel|}} \\]where \\(V_{front}\\), \\(v_{rear}\\) are front and rear vehicle velocity respectively and \\(a_{rear, front}\\), \\(a_{rear, decel}\\) are front and rear vehicle deceleration.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#5-create-extended-ego-and-target-object-polygons","title":"5. Create extended ego and target object polygons","text":"In this step, we compute extended ego and target object polygons. The extended polygons can be described as:
As the picture shows, we expand the rear object polygon. For the longitudinal side, we extend it with the RSS distance, and for the lateral side, we extend it by the lateral margin
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_safety_check/#6-check-overlap","title":"6. Check overlap","text":"Similar to the previous step, we check the overlap of the extended rear object polygon and front object polygon. If they are overlapped each other, we regard it as the unsafe situation.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/","title":"Turn Signal design","text":""},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#turn-signal-design","title":"Turn Signal design","text":"Turn Signal decider determines necessary blinkers.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#purpose-role","title":"Purpose / Role","text":"This module is responsible for activating a necessary blinker during driving. It uses rule-based algorithm to determine blinkers, and the details of this algorithm are described in the following sections. Note that this algorithm is strictly based on the Japanese Road Traffic Raw.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#assumptions","title":"Assumptions","text":"Autoware has following order of priorities for turn signals.
Currently, this algorithm can sometimes give unnatural (not wrong) blinkers in a complicated situations. This is because it tries to follow the road traffic raw and cannot solve blinker conflicts
clearly in that environment.
Note that the default values for turn_signal_intersection_search_distance
and turn_signal_search_time
is strictly followed by Japanese Road Traffic Laws. So if your country does not allow to use these default values, you should change these values in configuration files.
In this algorithm, it assumes that each blinker has two sections, which are desired section
and required section
. The image of these two sections are depicted in the following diagram.
These two sections have the following meanings.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-desired-section","title":"- Desired Section","text":"- This section is defined by road traffic laws. It cannot be longer or shorter than the designated length defined by the law.\n- In this section, you do not have to activate the designated blinkers if it is dangerous to do so.\n
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-required-section","title":"- Required Section","text":"- In this section, ego vehicle must activate designated blinkers. However, if there are blinker conflicts, it must solve them based on the algorithm we mention later in this document.\n- Required section cannot be longer than desired section.\n
When turning on the blinker, it decides whether or not to turn on the specified blinker based on the distance from the front of the ego vehicle to the start point of each section. Conversely, when turning off the blinker, it calculates the distance from the base link of the ego vehicle to the end point of each section and decide whether or not to turn it off based on that.
For left turn, right turn, avoidance, lane change, goal planner and pull out, we define these two sections, which are elaborated in the following part.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#1-left-and-right-turn","title":"1. Left and Right turn","text":"Turn signal decider checks each lanelet on the map if it has turn_direction
information. If a lanelet has this information, it activates necessary blinker based on this information.
search_distance
for blinkers at intersections is v * turn_signal_search_time + turn_signal_intersection_search_distance
. Then the start point becomes search_distance
meters before the start point of the intersection lanelet(depicted in gree in the following picture), where v
is the velocity of the ego vehicle. However, if we set turn_signal_distance
in the lanelet, we use that length as search distance.Avoidance can be separated into two sections, first section and second section. The first section is from the start point of the path shift to the end of the path shift. The second section is from the end of shift point to the end of avoidance. Note that avoidance module will not activate turn signal when its shift length is below turn_signal_shift_length_threshold
.
First section
v * turn_signal_search_time
meters before the start point of the avoidance shift path.Second section
v * turn_signal_search_time
meters before the start point of the lane change path.v * turn_signal_search_time
meters before the start point of the pull over path.When it comes to handle several blinkers, it gives priority to the first blinker that comes first. However, this rule sometimes activate unnatural blinkers, so turn signal decider uses the following five rules to decide the necessary turn signal.
Based on these five rules, turn signal decider can solve blinker conflicts
. The following pictures show some examples of this kind of conflicts.
In this scenario, ego vehicle has to pass several turns that are close each other. Since this pattern can be solved by the pattern1 rule, the overall result is depicted in the following picture.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-avoidance-with-left-turn-1","title":"- Avoidance with left turn (1)","text":"In this scene, ego vehicle has to deal with the obstacle that is on its original path as well as make a left turn. The overall result can be varied by the position of the obstacle, but the image of the result is described in the following picture.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-avoidance-with-left-turn-2","title":"- Avoidance with left turn (2)","text":"Same as the previous scenario, ego vehicle has to avoid the obstacle as well as make a turn left. However, in this scene, the obstacle is parked after the intersection. Similar to the previous one, the overall result can be varied by the position of the obstacle, but the image of the result is described in the following picture.
"},{"location":"planning/behavior_path_planner_common/docs/behavior_path_planner_turn_signal_design/#-lane-change-and-left-turn","title":"- Lane change and left turn","text":"In this scenario, ego vehicle has to do lane change before making a left turn. In the following example, ego vehicle does not activate left turn signal until it reaches the end point of the lane change path.
"},{"location":"planning/behavior_path_side_shift_module/","title":"Side Shift design","text":""},{"location":"planning/behavior_path_side_shift_module/#side-shift-design","title":"Side Shift design","text":"(For remote control) Shift the path to left or right according to an external instruction.
"},{"location":"planning/behavior_path_side_shift_module/#overview-of-the-side-shift-module-process","title":"Overview of the Side Shift Module Process","text":"requested_lateral_offset_
under the following conditions: a. Verify if the last update time has elapsed. b. Ensure the required lateral offset value is different from the previous one.Please be aware that requested_lateral_offset_
is continuously updated with the latest values and is not queued.
The side shift has three distinct statuses. Note that during the SHIFTING status, the path cannot be updated:
side shift status"},{"location":"planning/behavior_path_side_shift_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_path_start_planner_module/","title":"Start Planner design","text":""},{"location":"planning/behavior_path_start_planner_module/#start-planner-design","title":"Start Planner design","text":""},{"location":"planning/behavior_path_start_planner_module/#purpose-role","title":"Purpose / Role","text":"
The Start Planner module is designed to generate a path from the current ego position to the driving lane, avoiding static obstacles and stopping in response to dynamic obstacles when a collision is detected.
Use cases include:
pull out from side of the road lane
pull out from the shoulder lane"},{"location":"planning/behavior_path_start_planner_module/#design","title":"Design","text":""},{"location":"planning/behavior_path_start_planner_module/#general-parameters-for-start_planner","title":"General parameters for start_planner","text":"Name Unit Type Description Default value th_arrived_distance_m [m] double distance threshold for arrival of path termination 1.0 th_distance_to_middle_of_the_road [m] double distance threshold to determine if the vehicle is on the middle of the road 0.1 th_stopped_velocity_mps [m/s] double velocity threshold for arrival of path termination 0.01 th_stopped_time_sec [s] double time threshold for arrival of path termination 1.0 th_turn_signal_on_lateral_offset [m] double lateral distance threshold for turning on blinker 1.0 intersection_search_length [m] double check if intersections exist within this length 30.0 length_ratio_for_turn_signal_deactivation_near_intersection [m] double deactivate turn signal of this module near intersection 0.5 collision_check_margins [m] [double] Obstacle collision check margins list [2.0, 1.5, 1.0] collision_check_distance_from_end [m] double collision check distance from end shift end pose 1.0 collision_check_margin_from_front_object [m] double collision check margin from front object 5.0 center_line_path_interval [m] double reference center line path point interval 1.0"},{"location":"planning/behavior_path_start_planner_module/#safety-check-with-static-obstacles","title":"Safety check with static obstacles","text":"
1.0 m
), that is judged as a unsafe pathThis is based on the concept of RSS. For the logic used, refer to the link below. See safety check feature explanation
"},{"location":"planning/behavior_path_start_planner_module/#collision-check-performed-range","title":"Collision check performed range","text":"A collision check with dynamic objects is primarily performed between the shift start point and end point. The range for safety check varies depending on the type of path generated, so it will be explained for each pattern.
"},{"location":"planning/behavior_path_start_planner_module/#shift-pull-out","title":"Shift pull out","text":"For the \"shift pull out\", safety verification starts at the beginning of the shift and ends at the shift's conclusion.
"},{"location":"planning/behavior_path_start_planner_module/#geometric-pull-out","title":"Geometric pull out","text":"Since there's a stop at the midpoint during the shift, this becomes the endpoint for safety verification. After stopping, safety verification resumes.
"},{"location":"planning/behavior_path_start_planner_module/#backward-pull-out-start-point-search","title":"Backward pull out start point search","text":"During backward movement, no safety check is performed. Safety check begins at the point where the backward movement ends.
"},{"location":"planning/behavior_path_start_planner_module/#ego-vehicles-velocity-planning","title":"Ego vehicle's velocity planning","text":"WIP
"},{"location":"planning/behavior_path_start_planner_module/#safety-check-in-free-space-area","title":"Safety check in free space area","text":"WIP
"},{"location":"planning/behavior_path_start_planner_module/#parameters-for-safety-check","title":"Parameters for safety check","text":""},{"location":"planning/behavior_path_start_planner_module/#stop-condition-parameters","title":"Stop Condition Parameters","text":"Parameters under stop_condition
define the criteria for stopping conditions.
Parameters under path_safety_check.ego_predicted_path
specify the ego vehicle's predicted path characteristics.
Parameters under target_filtering
are related to filtering target objects for safety check.
Parameters under safety_check_params
define the configuration for safety check.
There are two path generation methods.
"},{"location":"planning/behavior_path_start_planner_module/#shift-pull-out_1","title":"shift pull out","text":"This is the most basic method of starting path planning and is used on road lanes and shoulder lanes when there is no particular obstruction.
Pull out distance is calculated by the speed, lateral deviation, and the lateral jerk. The lateral jerk is searched for among the predetermined minimum and maximum values, and the one that generates a safe path is selected.
shift pull out video
"},{"location":"planning/behavior_path_start_planner_module/#parameters-for-shift-pull-out","title":"parameters for shift pull out","text":"Name Unit Type Description Default value enable_shift_pull_out [-] bool flag whether to enable shift pull out true check_shift_path_lane_departure [-] bool flag whether to check if shift path footprints are out of lane false shift_pull_out_velocity [m/s] double velocity of shift pull out 2.0 pull_out_sampling_num [-] int Number of samplings in the minimum to maximum range of lateral_jerk 4 maximum_lateral_jerk [m/s3] double maximum lateral jerk 2.0 minimum_lateral_jerk [m/s3] double minimum lateral jerk 0.1 minimum_shift_pull_out_distance [m] double minimum shift pull out distance. if calculated pull out distance is shorter than this, use this for path generation. 0.0 maximum_curvature [m] double maximum curvature. The pull out distance is calculated so that the curvature is smaller than this value. 0.07"},{"location":"planning/behavior_path_start_planner_module/#geometric-pull-out_1","title":"geometric pull out","text":"Generate two arc paths with discontinuous curvature. Ego-vehicle stops once in the middle of the path to control the steer on the spot. See also [1] for details of the algorithm.
geometric pull out video
"},{"location":"planning/behavior_path_start_planner_module/#parameters-for-geometric-pull-out","title":"parameters for geometric pull out","text":"Name Unit Type Description Default value enable_geometric_pull_out [-] bool flag whether to enable geometric pull out true divide_pull_out_path [-] bool flag whether to divide arc paths. The path is assumed to be divided because the curvature is not continuous. But it requires a stop during the departure. false geometric_pull_out_velocity [m/s] double velocity of geometric pull out 1.0 arc_path_interval [m] double path points interval of arc paths of geometric pull out 1.0 lane_departure_margin [m] double margin of deviation to lane right 0.2 pull_out_max_steer_angle [rad] double maximum steer angle for path generation 0.26"},{"location":"planning/behavior_path_start_planner_module/#backward-pull-out-start-point-search_1","title":"backward pull out start point search","text":"If a safe path cannot be generated from the current position, search backwards for a pull out start point at regular intervals(default: 2.0
).
pull out after backward driving video
"},{"location":"planning/behavior_path_start_planner_module/#search-priority","title":"search priority","text":"If a safe path with sufficient clearance for static obstacles cannot be generated forward, a backward search from the vehicle's current position is conducted to locate a suitable start point for a pull out path generation.
During this backward search, different policies can be applied based on search_priority
parameters:
Selecting efficient_path
focuses on creating a shift pull out path, regardless of how far back the vehicle needs to move. Opting for short_back_distance
aims to find a location with the least possible backward movement.
PriorityOrder
is defined as a vector of pairs, where each element consists of a size_t
index representing a start pose candidate index, and the planner type. The PriorityOrder vector is processed sequentially from the beginning, meaning that the pairs listed at the top of the vector are given priority in the selection process for pull out path generation.
efficient_path
","text":"When search_priority
is set to efficient_path
and the preference is for prioritizing shift_pull_out
, the PriorityOrder
array is populated in such a way that shift_pull_out
is grouped together for all start pose candidates before moving on to the next planner type. This prioritization is reflected in the order of the array, with shift_pull_out
being listed before geometric_pull_out.
This approach prioritizes trying all candidates with shift_pull_out
before proceeding to geometric_pull_out
, which may be efficient in situations where shift_pull_out
is likely to be appropriate.
short_back_distance
","text":"For search_priority
set to short_back_distance
, the array alternates between planner types for each start pose candidate, which can minimize the distance the vehicle needs to move backward if the earlier candidates are successful.
This ordering is beneficial when the priority is to minimize the backward distance traveled, giving an equal chance for each planner to succeed at the closest possible starting position.
"},{"location":"planning/behavior_path_start_planner_module/#parameters-for-backward-pull-out-start-point-search","title":"parameters for backward pull out start point search","text":"Name Unit Type Description Default value enable_back [-] bool flag whether to search backward for start_point true search_priority [-] string In the case ofefficient_path
, use efficient paths even if the back distance is longer. In case of short_back_distance
, use a path with as short a back distance efficient_path max_back_distance [m] double maximum back distance 30.0 backward_search_resolution [m] double distance interval for searching backward pull out start point 2.0 backward_path_update_duration [s] double time interval for searching backward pull out start point. this prevents chattering between back driving and pull_out 3.0 ignore_distance_from_lane_end [m] double If distance from shift start pose to end of shoulder lane is less than this value, this start pose candidate is ignored 15.0"},{"location":"planning/behavior_path_start_planner_module/#freespace-pull-out","title":"freespace pull out","text":"If the vehicle gets stuck with pull out along lanes, execute freespace pull out. To run this feature, you need to set parking_lot
to the map, activate_by_scenario
of costmap_generator to false
and enable_freespace_planner
to true
See freespace_planner for other parameters.
"},{"location":"planning/behavior_velocity_blind_spot_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_blind_spot_module/#blind-spot","title":"Blind Spot","text":""},{"location":"planning/behavior_velocity_blind_spot_module/#role","title":"Role","text":"Blind spot module checks possible collisions with bicycles and pedestrians running on its left/right side while turing left/right before junctions.
"},{"location":"planning/behavior_velocity_blind_spot_module/#activation-timing","title":"Activation Timing","text":"This function is activated when the lane id of the target path has an intersection label (i.e. the turn_direction
attribute is left
or right
).
Sets a stop line, a pass judge line, a detection area and conflict area based on a map information and a self position.
Stop/Go state: When both conditions are met for any of each object, this module state is transited to the \"stop\" state and insert zero velocity to stop the vehicle.
In order to avoid a rapid stop, the \u201cstop\u201d judgement is not executed after the judgment line is passed.
Once a \"stop\" is judged, it will not transit to the \"go\" state until the \"go\" judgment continues for a certain period in order to prevent chattering of the state (e.g. 2 seconds).
"},{"location":"planning/behavior_velocity_blind_spot_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Descriptionstop_line_margin
double [m] a margin that the vehicle tries to stop before stop_line backward_length
double [m] distance from closest path point to the edge of beginning point. ignore_width_from_center_line
double [m] ignore threshold that vehicle behind is collide with ego vehicle or not max_future_movement_time
double [s] maximum time for considering future movement of object adjacent_extend_width
double [m] if adjacent lane e.g. bicycle only lane exists, blind_spot area is expanded by this length"},{"location":"planning/behavior_velocity_blind_spot_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_crosswalk_module/","title":"Crosswalk","text":""},{"location":"planning/behavior_velocity_crosswalk_module/#crosswalk","title":"Crosswalk","text":""},{"location":"planning/behavior_velocity_crosswalk_module/#role","title":"Role","text":"This module judges whether the ego should stop in front of the crosswalk in order to provide safe passage for crosswalk users, such as pedestrians and bicycles, based on the objects' behavior and surround traffic.
"},{"location":"planning/behavior_velocity_crosswalk_module/#features","title":"Features","text":""},{"location":"planning/behavior_velocity_crosswalk_module/#yield","title":"Yield","text":""},{"location":"planning/behavior_velocity_crosswalk_module/#target-object","title":"Target Object","text":"The crosswalk module handles objects of the types defined by the following parameters in the object_filtering.target_object
namespace.
unknown
[-] bool whether to look and stop by UNKNOWN objects pedestrian
[-] bool whether to look and stop by PEDESTRIAN objects bicycle
[-] bool whether to look and stop by BICYCLE objects motorcycle
[-] bool whether to look and stop by MOTORCYCLE objects In order to handle the crosswalk users crossing the neighborhood but outside the crosswalk, the crosswalk module creates an attention area around the crosswalk, shown as the yellow polygon in the figure. If the object's predicted path collides with the attention area, the object will be targeted for yield.
The neighborhood is defined by the following parameter in the object_filtering.target_object
namespace.
crosswalk_attention_range
[m] double the detection area is defined as -X meters before the crosswalk to +X meters behind the crosswalk"},{"location":"planning/behavior_velocity_crosswalk_module/#stop-position","title":"Stop Position","text":"First of all, stop_distance_from_object [m]
is always kept at least between the ego and the target object for safety.
When the stop line exists in the lanelet map, the stop position is calculated based on the line. When the stop line does NOT exist in the lanelet map, the stop position is calculated by keeping stop_distance_from_crosswalk [m]
between the ego and the crosswalk.
As an exceptional case, if a pedestrian (or bicycle) is crossing wide crosswalks seen in scramble intersections, and the pedestrian position is more than far_object_threshold
meters away from the stop line, the actual stop position is determined by stop_distance_from_object
and pedestrian position, not at the stop line.
In the stop_position
namespace, the following parameters are defined.
stop_position_threshold
[m] double If the ego vehicle has stopped near the stop line than this value, this module assumes itself to have achieved yielding. stop_distance_from_crosswalk
[m] double make stop line away from crosswalk for the Lanelet2 map with no explicit stop lines far_object_threshold
[m] double If objects cross X meters behind the stop line, the stop position is determined according to the object position (stop_distance_from_object meters before the object) for the case where the crosswalk width is very wide stop_distance_from_object
[m] double the vehicle decelerates to be able to stop in front of object with margin"},{"location":"planning/behavior_velocity_crosswalk_module/#yield-decision","title":"Yield decision","text":"The module makes a decision to yield only when the pedestrian traffic light is GREEN or UNKNOWN. The decision is based on the following variables, along with the calculation of the collision point.
We classify ego behavior at crosswalks into three categories according to the relative relationship between TTC and TTV [1].
The boundary of A and B is interpolated from ego_pass_later_margin_x
and ego_pass_later_margin_y
. In the case of the upper figure, ego_pass_later_margin_x
is {0, 1, 2}
and ego_pass_later_margin_y
is {1, 4, 6}
. In the same way, the boundary of B and C is calculated from ego_pass_first_margin_x
and ego_pass_first_margin_y
. In the case of the upper figure, ego_pass_first_margin_x
is {3, 5}
and ego_pass_first_margin_y
is {0, 1}
.
In the pass_judge
namespace, the following parameters are defined.
ego_pass_first_margin_x
[[s]] double time to collision margin vector for ego pass first situation (the module judges that ego don't have to stop at TTC + MARGIN < TTV condition) ego_pass_first_margin_y
[[s]] double time to vehicle margin vector for ego pass first situation (the module judges that ego don't have to stop at TTC + MARGIN < TTV condition) ego_pass_first_additional_margin
[s] double additional time margin for ego pass first situation to suppress chattering ego_pass_later_margin_x
[[s]] double time to vehicle margin vector for object pass first situation (the module judges that ego don't have to stop at TTV + MARGIN < TTC condition) ego_pass_later_margin_y
[[s]] double time to collision margin vector for object pass first situation (the module judges that ego don't have to stop at TTV + MARGIN < TTC condition) ego_pass_later_additional_margin
[s] double additional time margin for object pass first situation to suppress chattering"},{"location":"planning/behavior_velocity_crosswalk_module/#smooth-yield-decision","title":"Smooth Yield Decision","text":"If the object is stopped near the crosswalk but has no intention of walking, a situation can arise in which the ego continues to yield the right-of-way to the object. To prevent such a deadlock situation, the ego will cancel yielding depending on the situation.
"},{"location":"planning/behavior_velocity_crosswalk_module/#cases-without-traffic-lights","title":"Cases without traffic lights","text":"For the object stopped around the crosswalk but has no intention to walk (*1), after the ego has keep stopping to yield for a specific time (*2), the ego cancels the yield and starts driving.
*1: The time is calculated by the interpolation of distance between the object and crosswalk with distance_map_for_no_intention_to_walk
and timeout_map_for_no_intention_to_walk
.
In the pass_judge
namespace, the following parameters are defined.
distance_map_for_no_intention_to_walk
[[m]] double distance map to calculate the timeout for no intention to walk with interpolation timeout_map_for_no_intention_to_walk
[[s]] double timeout map to calculate the timeout for no intention to walk with interpolation *2: In the pass_judge
namespace, the following parameters are defined.
timeout_ego_stop_for_yield
[s] double If the ego maintains the stop for this amount of time, then the ego proceeds, assuming it has stopped long time enough."},{"location":"planning/behavior_velocity_crosswalk_module/#cases-with-traffic-lights","title":"Cases with traffic lights","text":"The ego will cancel the yield without stopping when the object stops around the crosswalk but has no intention to walk (*1). This comes from the assumption that the object has no intention to walk since it is stopped even though the pedestrian traffic light is green.
*1: The crosswalk user's intention to walk is calculated in the same way as Cases without traffic lights
.
Due to the perception's limited performance where the tree or poll is recognized as a pedestrian or the tracking failure in the crowd or occlusion, even if the surrounding environment does not change, the new pedestrian (= the new ID's pedestrian) may suddenly appear unexpectedly. If this happens while the ego is going to pass the crosswalk, the ego will stop suddenly.
To deal with this issue, the option disable_yield_for_new_stopped_object
is prepared. If true is set, the yield decisions around the crosswalk with a traffic light will ignore the new stopped object.
In the pass_judge
namespace, the following parameters are defined.
disable_yield_for_new_stopped_object
[-] bool If set to true, the new stopped object will be ignored around the crosswalk with a traffic light"},{"location":"planning/behavior_velocity_crosswalk_module/#safety-slow-down-behavior","title":"Safety Slow Down Behavior","text":"In the current autoware implementation, if no target object is detected around a crosswalk, the ego vehicle will not slow down for the crosswalk. However, it may be desirable to slow down in situations, for example, where there are blind spots. Such a situation can be handled by setting some tags to the related crosswalk as instructed in the lanelet2_format_extension.md document.
Parameter Type Descriptionslow_velocity
[m/s] double target vehicle velocity when module receive slow down command from FOA max_slow_down_jerk
[m/sss] double minimum jerk deceleration for safe brake max_slow_down_accel
[m/ss] double minimum accel deceleration for safe brake no_relax_velocity
[m/s] double if the current velocity is less than X m/s, ego always stops at the stop position(not relax deceleration constraints)"},{"location":"planning/behavior_velocity_crosswalk_module/#stuck-vehicle-detection","title":"Stuck Vehicle Detection","text":"The feature will make the ego not to stop on the crosswalk. When there is a low-speed or stopped vehicle ahead of the crosswalk, and there is not enough space between the crosswalk and the vehicle, the crosswalk module plans to stop before the crosswalk even if there are no pedestrians or bicycles.
min_acc
, min_jerk
, and max_jerk
are met. If the ego cannot stop before the crosswalk with these parameters, the stop position will move forward.
In the stuck_vehicle
namespace, the following parameters are defined.
stuck_vehicle_velocity
[m/s] double maximum velocity threshold whether the target vehicle is stopped or not max_stuck_vehicle_lateral_offset
[m] double maximum lateral offset of the target vehicle position required_clearance
[m] double clearance to be secured between the ego and the ahead vehicle min_acc
[m/ss] double minimum acceleration to stop min_jerk
[m/sss] double minimum jerk to stop max_jerk
[m/sss] double maximum jerk to stop"},{"location":"planning/behavior_velocity_crosswalk_module/#others","title":"Others","text":"In the common
namespace, the following parameters are defined.
show_processing_time
[-] bool whether to show processing time traffic_light_state_timeout
[s] double timeout threshold for traffic light signal enable_rtc
[-] bool if true, the scene modules should be approved by (request to cooperate)rtc function. if false, the module can be run without approval from rtc."},{"location":"planning/behavior_velocity_crosswalk_module/#known-issues","title":"Known Issues","text":"/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/crosswalk
shows the following markers.
ros2 run behavior_velocity_crosswalk_module time_to_collision_plotter.py\n
enables you to visualize the following figure of the ego and pedestrian's time to collision. The label of each plot is <crosswalk module id>-<pedestrian uuid>
.
ego_pass_later_margin
described in Yield Decisionego_pass_later_margin
described in Yield Decision[1] \u4f50\u85e4 \u307f\u306a\u307f, \u65e9\u5742 \u7965\u4e00, \u6e05\u6c34 \u653f\u884c, \u6751\u91ce \u9686\u5f66, \u6a2a\u65ad\u6b69\u884c\u8005\u306b\u5bfe\u3059\u308b\u30c9\u30e9\u30a4\u30d0\u306e\u30ea\u30b9\u30af\u56de\u907f\u884c\u52d5\u306e\u30e2\u30c7\u30eb\u5316, \u81ea\u52d5\u8eca\u6280\u8853\u4f1a\u8ad6\u6587\u96c6, 2013, 44 \u5dfb, 3 \u53f7, p. 931-936.
"},{"location":"planning/behavior_velocity_detection_area_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_detection_area_module/#detection-area","title":"Detection Area","text":""},{"location":"planning/behavior_velocity_detection_area_module/#role","title":"Role","text":"If pointcloud is detected in a detection area defined on a map, the stop planning will be executed at the predetermined point.
"},{"location":"planning/behavior_velocity_detection_area_module/#activation-timing","title":"Activation Timing","text":"This module is activated when there is a detection area on the target lane.
"},{"location":"planning/behavior_velocity_detection_area_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Descriptionuse_dead_line
bool [-] weather to use dead line or not use_pass_judge_line
bool [-] weather to use pass judge line or not state_clear_time
double [s] when the vehicle is stopping for certain time without incoming obstacle, move to STOPPED state stop_margin
double [m] a margin that the vehicle tries to stop before stop_line dead_line_margin
double [m] ignore threshold that vehicle behind is collide with ego vehicle or not hold_stop_margin_distance
double [m] parameter for restart prevention (See Algorithm section) distance_to_judge_over_stop_line
double [m] parameter for judging that the stop line has been crossed"},{"location":"planning/behavior_velocity_detection_area_module/#inner-workings-algorithm","title":"Inner-workings / Algorithm","text":"If it needs X meters (e.g. 0.5 meters) to stop once the vehicle starts moving due to the poor vehicle control performance, the vehicle goes over the stopping position that should be strictly observed when the vehicle starts to moving in order to approach the near stop point (e.g. 0.3 meters away).
This module has parameter hold_stop_margin_distance
in order to prevent from these redundant restart. If the vehicle is stopped within hold_stop_margin_distance
meters from stop point of the module (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped for the module's stop point and plans to keep stopping current position even if the vehicle is stopped due to other factors.
parameters
outside the hold_stop_margin_distance
inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_dynamic_obstacle_stop_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_dynamic_obstacle_stop_module/#dynamic-obstacle-stop","title":"Dynamic Obstacle Stop","text":""},{"location":"planning/behavior_velocity_dynamic_obstacle_stop_module/#role","title":"Role","text":"
dynamic_obstacle_stop
is a module that stops the ego vehicle from entering the immediate path of a dynamic object.
The immediate path of an object is the area that the object would traverse during a given time horizon, assuming constant velocity and heading.
"},{"location":"planning/behavior_velocity_dynamic_obstacle_stop_module/#activation-timing","title":"Activation Timing","text":"This module is activated if the launch parameter launch_dynamic_obstacle_stop_module
is set to true in the behavior planning launch file.
The module insert a stop point where the ego path collides with the immediate path of an object. The overall module flow can be summarized with the following 4 steps.
In addition to these 4 steps, 2 mechanisms are in place to make the stop point of this module more stable: an hysteresis and a decision duration buffer.
The hysteresis
parameter is used when a stop point was already being inserted in the previous iteration and it increases the range where dynamic objects are considered close enough to the ego path to be used by the module.
The decision_duration_buffer
parameter defines the duration when the module will keep inserted the previous stop point, even after no collisions were found.
An object is considered by the module only if it meets all of the following conditions:
minimum_object_velocity
parameter;For the last condition, the object is considered close enough if its lateral distance from the ego path is less than the threshold parameter minimum_object_distance_from_ego_path
plus half the width of ego and of the object (including the extra_object_width
parameter). In addition, the value of the hysteresis
parameter is added to the minimum distance if a stop point was inserted in the previous iteration.
For each considered object, a rectangle is created representing its immediate path. The rectangle has the width of the object plus the extra_object_width
parameter and its length is the current speed of the object multiplied by the time_horizon
.
We build the ego path footprints as the set of ego footprint polygons projected on each path point. We then calculate the intersections between these ego path footprints and the previously calculated immediate path rectangles. An intersection is ignored if the object is not driving toward ego, i.e., the absolute angle between the object and the path point is larger than \\(\\frac{3 \\pi}{4}\\).
The collision point with the lowest arc length when projected on the ego path will be used to calculate the final stop point.
"},{"location":"planning/behavior_velocity_dynamic_obstacle_stop_module/#insert-stop-point","title":"Insert stop point","text":"Before inserting a stop point, we calculate the range of path arc lengths where it can be inserted. The minimum is calculated to satisfy the acceleration and jerk constraints of the vehicle. If a stop point was inserted in the previous iteration of the module, its arc length is used as the maximum. Finally, the stop point arc length is calculated to be the arc length of the previously found collision point minus the stop_distance_buffer
and the ego vehicle longitudinal offset, clamped between the minimum and maximum values.
extra_object_width
double [m] extra width around detected objects minimum_object_velocity
double [m/s] objects with a velocity bellow this value are ignored stop_distance_buffer
double [m] extra distance to add between the stop point and the collision point time_horizon
double [s] time horizon used for collision checks hysteresis
double [m] once a collision has been detected, this hysteresis is used on the collision detection decision_duration_buffer
double [s] duration between no collision being detected and the stop decision being cancelled minimum_object_distance_from_ego_path
double [m] minimum distance between the footprints of ego and an object to consider for collision"},{"location":"planning/behavior_velocity_intersection_module/","title":"Intersection","text":""},{"location":"planning/behavior_velocity_intersection_module/#intersection","title":"Intersection","text":""},{"location":"planning/behavior_velocity_intersection_module/#role","title":"Role","text":"The intersection module is responsible for safely passing urban intersections by:
This module is designed to be agnostic to left-hand/right-hand traffic rules and work for crossroads, T-shape junctions, etc. Roundabout is not formally supported in this module.
"},{"location":"planning/behavior_velocity_intersection_module/#activation-condition","title":"Activation condition","text":"This module is activated when the path contains the lanes with turn_direction tag. More precisely, if the lane_ids of the path contain the ids of those lanes, corresponding instances of intersection module are activated on each lane respectively.
"},{"location":"planning/behavior_velocity_intersection_module/#requirementslimitations","title":"Requirements/Limitations","text":"The attention area in the intersection is defined as the set of lanes that are conflicting with ego path and their preceding lanes up to common.attention_area_length
meters. By default RightOfWay tag is not set, so the attention area covers all the conflicting lanes and its preceding lanes as shown in the first row. RightOfWay tag is used to rule out the lanes that each lane has priority given the traffic light relation and turn_direction priority. In the second row, purple lanes are set as the yield_lane of the ego_lane in the RightOfWay tag.
intersection_area, which is supposed to be defined on the HDMap, is an area converting the entire intersection.
"},{"location":"planning/behavior_velocity_intersection_module/#in-phaseanti-phase-signal-group","title":"In-phase/Anti-phase signal group","text":"The terms \"in-phase signal group\" and \"anti-phase signal group\" are introduced to distinguish the lanes by the timing of traffic light regulation as shown in below figure.
The set of intersection lanes whose color is in sync with lane L1 is called the in-phase signal group of L1, and the set of remaining lanes is called the anti-phase signal group.
"},{"location":"planning/behavior_velocity_intersection_module/#how-towhy-set-rightofway-tag","title":"How-to/Why set RightOfWay tag","text":"Ideally RightOfWay tag is unnecessary if ego has perfect knowledge of all traffic signal information because:
That allows ego to generate the attention area dynamically using the real time traffic signal information. However this ideal condition rarely holds unless the traffic signal information is provided through the infrastructure. Also there maybe be very complicated/bad intersection maps where multiple lanes overlap in a complex manner.
common.use_map_right_of_way
to false and there is no need to set RightOfWay tag on the map. The intersection module will generate the attention area by checking traffic signal and corresponding conflicting lanes. This feature is not implemented yet.common.use_map_right_of_way
to true. If you do not want to detect vehicles on the anti-phase signal group lanes, set them as yield_lane for ego lane.To help the intersection module care only a set of limited lanes, RightOfWay tag needs to be properly set.
Following table shows an example of how to set yield_lanes to each lane in a intersection w/o traffic lights. Since it is not apparent how to uniquely determine signal phase group for a set of intersection lanes in geometric/topological manner, yield_lane needs to be set manually. Straight lanes with traffic lights are exceptionally handled to detect no lanes because commonly it has priority over all the other lanes, so no RightOfWay setting is required.
turn direction of right_of_way yield_lane(with traffic light) yield_lane(without traffic light) straight not need to set yield_lane(this case is special) left/right conflicting lanes of in-phase group left(Left hand traffic) all conflicting lanes of the anti-phase group and right conflicting lanes of in-phase group right conflicting lanes of in-phase group right(Left hand traffic) all conflicting lanes of the anti-phase group no yield_lane left(Right hand traffic) all conflicting lanes of the anti-phase group no yield_lane right(Right hand traffic) all conflicting lanes of the anti-phase group and right conflicting lanes of in-phase group left conflicting lanes of in-phase groupThis setting gives the following attention_area
configurations.
For complex/bad intersection map like the one illustrated below, additional RightOfWay setting maybe necessary.
The bad points are:
Following figure illustrates important positions used in the intersection module. Note that each solid line represents ego front line position and the corresponding dot represents the actual inserted stop point position for the vehicle frame, namely the center of the rear wheel.
To precisely calculate stop positions, the path is interpolated at the certain interval of common.path_interpolation_ds
.
common.default_stopline_margin
meters behind first_attention_stopline is defined as default_stopline instead.For stuck vehicle detection and collision detection, this module checks car, bus, truck, trailer, motor cycle, and bicycle type objects.
Objects that satisfy all of the following conditions are considered as target objects (possible collision objects):
common.attention_area_margin
) .common.attention_area_angle_threshold
).There are several behaviors depending on the scene.
behavior scene action Safe Ego detected no occlusion and collision Ego passes the intersection StuckStop The exit of the intersection is blocked by traffic jam Ego stops before the intersection or the boundary of attention area YieldStuck Another vehicle stops to yield ego Ego stops before the intersection or the boundary of attention area NonOccludedCollisionStop Ego detects no occlusion but detects collision Ego stops at default_stopline FirstWaitBeforeOcclusion Ego detected occlusion when entering the intersection Ego stops at default_stopline at first PeekingTowardOcclusion Ego detected occlusion and but no collision within the FOV (after FirstWaitBeforeOcclusion) Ego approaches the boundary of the attention area slowly OccludedCollisionStop Ego detected both occlusion and collision (after FirstWaitBeforeOcclusion) Ego stops immediately FullyPrioritized Ego is fully prioritized by the RED/Arrow signal Ego only cares vehicles still running inside the intersection. Occlusion is ignored OverPassJudgeLine Ego is already inside the attention area and/or cannot stop before the boundary of attention area Ego does not detect collision/occlusion anymore and passes the intersection "},{"location":"planning/behavior_velocity_intersection_module/#stuck-vehicle-detection","title":"Stuck Vehicle Detection","text":"If there is any object on the path inside the intersection and at the exit of the intersection (up to stuck_vehicle.stuck_vehicle_detect_dist
) lane and its velocity is less than the threshold (stuck_vehicle.stuck_vehicle_velocity_threshold
), the object is regarded as a stuck vehicle. If stuck vehicles exist, this module inserts a stopline a certain distance (=default_stopline_margin
) before the overlapped region with other lanes. The stuck vehicle detection area is generated based on the planned path, so the stuck vehicle stopline is not inserted if the upstream module generated an avoidance path.
If there is any stopped object on the attention lanelet between the intersection point with ego path and the position which is yield_stuck.distance_threshold
before that position, the object is regarded as yielding to ego vehicle. In this case ego is given the right-of-way by the yielding object but this module inserts stopline to prevent entry into the intersection. This scene happens when the object is yielding against ego or the object is waiting before the crosswalk around the exit of the intersection.
The following process is performed for the targets objects to determine whether ego can pass the intersection safely. If it is judged that ego cannot pass the intersection with enough margin, this module inserts a stopline on the path.
collision_detection.min_predicted_path_confidence
is used.collision_detection.collision_start_margin_time
, \\(t\\) + collision_detection.collision_end_margin_time
]The parameters collision_detection.collision_start_margin_time
and collision_detection.collision_end_margin_time
can be interpreted as follows:
collision_detection.collision_start_margin_time
.collision_detection.collision_end_margin_time
.If collision is detected, the state transits to \"STOP\" immediately. On the other hand, the state does not transit to \"GO\" unless safe judgement continues for a certain period collision_detection.collision_detection_hold_time
to prevent the chattering of decisions.
Currently, the intersection module uses motion_velocity_smoother
feature to precisely calculate ego velocity profile along the intersection lane under longitudinal/lateral constraints. If the flag collision_detection.velocity_profile.use_upstream
is true, the target velocity profile of the original path is used. Otherwise the target velocity is set to collision.velocity_profile.default_velocity
. In the trajectory smoothing process the target velocity at/before ego trajectory points are set to ego current velocity. The smoothed trajectory is then converted to an array of (time, distance) which indicates the arrival time to each trajectory point on the path from current ego position. You can visualize this array by adding the lane id to debug.ttc
and running
ros2 run behavior_velocity_intersection_module ttc.py --lane_id <lane_id>\n
"},{"location":"planning/behavior_velocity_intersection_module/#occlusion-detection","title":"Occlusion detection","text":"If the flag occlusion.enable
is true this module checks if there is sufficient field of view (FOV) on the attention area up to occlusion.occlusion_attention_area_length
. If FOV is not clear enough ego first makes a brief stop at default_stopline for occlusion.temporal_stop_time_before_peeking
, and then slowly creeps toward occlusion_peeking_stopline. If occlusion.creep_during_peeking.enable
is true occlusion.creep_during_peeking.creep_velocity
is inserted up to occlusion_peeking_stopline. Otherwise only stop line is inserted.
During the creeping if collision is detected this module inserts a stop line in front of ego immediately, and if the FOV gets sufficiently clear the intersection_occlusion wall will disappear. If occlusion is cleared and no collision is detected ego will pass the intersection.
The occlusion is detected as the common area of occlusion attention area(which is partially the same as the normal attention area) and the unknown cells of the occupancy grid map. The occupancy grid map is denoised using morphology with the window size of occlusion.denoise_kernel
. The occlusion attention area lanes are discretized to line strings and they are used to generate a grid whose each cell represents the distance from ego path along the lane as shown below.
If the nearest occlusion cell value is below the threshold occlusion.occlusion_required_clearance_distance
, it means that the FOV of ego is not clear. It is expected that the occlusion gets cleared as the vehicle approaches the occlusion peeking stop line.
At intersection with traffic light, the whereabout of occlusion is estimated by checking if there are any objects between ego and the nearest occlusion cell. While the occlusion is estimated to be caused by some object (DYNAMICALLY occluded), intersection_wall appears at all times. If no objects are found between ego and the nearest occlusion cell (STATICALLY occluded), after ego stopped for the duration of occlusion.static_occlusion_with_traffic_light_timeout
plus occlusion.occlusion_detection_hold_time
, occlusion is intentionally ignored to avoid stuck.
The remaining time is visualized on the intersection_occlusion virtual wall.
"},{"location":"planning/behavior_velocity_intersection_module/#occlusion-handling-at-intersection-without-traffic-light","title":"Occlusion handling at intersection without traffic light","text":"At intersection without traffic light, if occlusion is detected, ego makes a brief stop at default_stopline and first_attention_stopline respectively. After stopping at the first_attention_area_stopline this module inserts occlusion.absence_traffic_light.creep_velocity
velocity between ego and occlusion_wo_tl_pass_judge_line while occlusion is not cleared. If collision is detected, ego immediately stops. Once the occlusion is cleared or ego has passed occlusion_wo_tl_pass_judge_line this module does not detect collision and occlusion because ego footprint is already inside the intersection.
While ego is creeping, yellow intersection_wall appears in front ego.
"},{"location":"planning/behavior_velocity_intersection_module/#traffic-signal-specific-behavior","title":"Traffic signal specific behavior","text":""},{"location":"planning/behavior_velocity_intersection_module/#collision-detection_1","title":"Collision detection","text":"TTC parameter varies depending on the traffic light color/shape as follows.
traffic light color ttc(start) ttc(end) GREENcollision_detection.not_prioritized.collision_start_margin
collision_detection.not_prioritized.collision_end_margin
AMBER collision_detection.partially_prioritized.collision_start_end_margin
collision_detection.partially_prioritized.collision_start_end_margin
RED / Arrow collision_detection.fully_prioritized.collision_start_end_margin
collision_detection.fully_prioritized.collision_start_end_margin
"},{"location":"planning/behavior_velocity_intersection_module/#yield-on-green","title":"yield on GREEN","text":"If the traffic light color changed to GREEN and ego approached the entry of the intersection lane within the distance collision_detection.yield_on_green_traffic_light.distance_to_assigned_lanelet_start
and there is any object whose distance to its stopline is less than collision_detection.yield_on_green_traffic_light.object_dist_to_stopline
, this module commands to stop for the duration of collision_detection.yield_on_green_traffic_light.duration
at default_stopline.
If the traffic light color is AMBER but the object is expected to stop before its stopline under the deceleration of collision_detection.ignore_on_amber_traffic_light.object_expected_deceleration
, collision checking is skipped.
If the traffic light color is RED or Arrow signal is turned on, the attention lanes which are not conflicting with ego lane are not used for detection. And even if the object stops with a certain overshoot from its stopline, but its expected stop position under the deceleration of collision_detection.ignore_on_amber_traffic_light.object_expected_deceleration
is more than the distance collision_detection.ignore_on_red_traffic_light.object_margin_to_path
from collision point, the object is ignored.
When the traffic light color/shape is RED/Arrow, occlusion detection is skipped.
"},{"location":"planning/behavior_velocity_intersection_module/#pass-judge-line","title":"Pass Judge Line","text":"Generally it is not tolerable for vehicles that have lower traffic priority to stop in the middle of the unprotected area in intersections, and they need to stop at the stop line beforehand if there will be any risk of collision, which introduces two requirements:
The position which is before the boundary of unprotected area by the braking distance which is obtained by
\\[ \\dfrac{v_{\\mathrm{ego}}^{2}}{2a_{\\mathrm{max}}} + v_{\\mathrm{ego}} * t_{\\mathrm{delay}} \\]is called pass_judge_line, and safety decision must be made before ego passes this position because ego does not stop anymore.
1st_pass_judge_line is before the first upcoming lane, and at intersections with multiple upcoming lanes, 2nd_pass_judge_line is defined as the position which is before the centerline of the first attention lane by the braking distance. 1st/2nd_pass_judge_line are illustrated in the following figure.
Intersection module will command to GO if
common.enable_pass_judge_before_default_stopline
is true) ANDbecause it is expected to stop or continue stop decision if
common.enable_pass_judge_before_default_stopline
is false ORFor the 3rd condition, it is possible that ego stops with some overshoot to the unprotected area while it is trying to stop for collision detection, because ego should keep stop decision while UNSAFE decision is made even if it passed 1st_pass_judge_line during deceleration.
For the 4th condition, at intersections with 2nd attention lane, even if ego is over the 1st pass_judge_line, still intersection module commands to stop if the most probable collision is expected to happen in the 2nd attention lane.
Also if occlusion.enable
is true, the position of 1st_pass_judge line changes to occlusion_peeking_stopline if ego passed the original 1st_pass_judge_line position while ego is peeking. Otherwise ego could inadvertently judge that it passed 1st_pass_judge during peeking and then abort peeking.
Each data structure is defined in util_type.hpp
.
IntersectionLanelets
","text":""},{"location":"planning/behavior_velocity_intersection_module/#intersectionstoplines","title":"IntersectionStopLines
","text":"Each stop lines are generated from interpolated path points to obtain precise positions.
"},{"location":"planning/behavior_velocity_intersection_module/#targetobject","title":"TargetObject
","text":"TargetObject
holds the object, its belonging lane and corresponding stopline information.
.attention_area_length
double [m] range for object detection .attention_area_margin
double [m] margin for expanding attention area width .attention_area_angle_threshold
double [rad] threshold of angle difference between the detected object and lane .use_intersection_area
bool [-] flag to use intersection_area for collision detection .default_stopline_margin
double [m] margin before_stop_line .stopline_overshoot_margin
double [m] margin for the overshoot from stopline .max_accel
double [m/ss] max acceleration for stop .max_jerk
double [m/sss] max jerk for stop .delay_response_time
double [s] action delay before stop .enable_pass_judge_before_default_stopline
bool [-] flag not to stop before default_stopline even if ego is over pass_judge_line"},{"location":"planning/behavior_velocity_intersection_module/#stuck_vehicleyield_stuck","title":"stuck_vehicle/yield_stuck","text":"Parameter Type Description stuck_vehicle.turn_direction
- [-] turn_direction specifier for stuck vehicle detection stuck_vehicle.stuck_vehicle_detect_dist
double [m] length toward from the exit of intersection for stuck vehicle detection stuck_vehicle.stuck_vehicle_velocity_threshold
double [m/s] velocity threshold for stuck vehicle detection yield_stuck.distance_threshold
double [m/s] distance threshold of yield stuck vehicle from ego path along the lane"},{"location":"planning/behavior_velocity_intersection_module/#collision_detection","title":"collision_detection","text":"Parameter Type Description .consider_wrong_direction_vehicle
bool [-] flag to detect objects in the wrong direction .collision_detection_hold_time
double [s] hold time of collision detection .min_predicted_path_confidence
double [-] minimum confidence value of predicted path to use for collision detection .keep_detection_velocity_threshold
double [s] ego velocity threshold for continuing collision detection before pass judge line .velocity_profile.use_upstream
bool [-] flag to use velocity profile planned by upstream modules .velocity_profile.minimum_upstream_velocity
double [m/s] minimum velocity of upstream velocity profile to avoid zero division .velocity_profile.default_velocity
double [m/s] constant velocity profile when use_upstream is false .velocity_profile.minimum_default_velocity
double [m/s] minimum velocity of default velocity profile to avoid zero division .yield_on_green_traffic_light
- [-] description .ignore_amber_traffic_light
- [-] description .ignore_on_red_traffic_light
- [-] description"},{"location":"planning/behavior_velocity_intersection_module/#occlusion","title":"occlusion","text":"Parameter Type Description .enable
bool [-] flag to calculate occlusion detection .occlusion_attention_area_length
double [m] the length of attention are for occlusion detection .free_space_max
int [-] maximum value of occupancy grid cell to treat at occluded .occupied_min
int [-] minimum value of occupancy grid cell to treat at occluded .denoise_kernel
double [m] morphology window size for preprocessing raw occupancy grid .attention_lane_crop_curvature_threshold
double [m] curvature threshold for trimming curved part of the lane .attention_lane_crop_curvature_ds
double [m] discretization interval of centerline for lane curvature calculation .creep_during_peeking.enable
bool [-] flag to insert creep_velocity
while peeking to intersection occlusion stopline .creep_during_peeking.creep_velocity
double [m/s] the command velocity while peeking to intersection occlusion stopline .peeking_offset
double [m] the offset of the front of the vehicle into the attention area for peeking to occlusion .occlusion_required_clearance_distance
double [m] threshold for the distance to nearest occlusion cell from ego path .possible_object_bbox
[double] [m] minimum bounding box size for checking if occlusion polygon is small enough .ignore_parked_vehicle_speed_threshold
double [m/s] velocity threshold for checking parked vehicle .occlusion_detection_hold_time
double [s] hold time of occlusion detection .temporal_stop_time_before_peeking
double [s] temporal stop duration at default_stopline before starting peeking .temporal_stop_before_attention_area
bool [-] flag to temporarily stop at first_attention_stopline before peeking into attention_area .creep_velocity_without_traffic_light
double [m/s] creep velocity to occlusion_wo_tl_pass_judge_line .static_occlusion_with_traffic_light_timeout
double [s] the timeout duration for ignoring static occlusion at intersection with traffic light"},{"location":"planning/behavior_velocity_intersection_module/#trouble-shooting","title":"Trouble shooting","text":""},{"location":"planning/behavior_velocity_intersection_module/#intersection-module-stops-against-unrelated-vehicles","title":"Intersection module stops against unrelated vehicles","text":"In this case, first visualize /planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/intersection
topic and check the attention_area
polygon. Intersection module performs collision checking for vehicles running on this polygon, so if it extends to unintended lanes, it needs to have RightOfWay tag.
By lowering common.attention_area_length
you can check which lanes are conflicting with the intersection lane. Then set part of the conflicting lanes as the yield_lane.
The parameter collision_detection.collision_detection_hold_time
suppresses the chattering by keeping UNSAFE decision for this duration until SAFE decision is finally made. The role of this parameter is to account for unstable detection/tracking of objects. By increasing this value you can suppress the chattering. However it could elongate the stopping duration excessively.
If the chattering arises from the acceleration/deceleration of target vehicles, increase collision_detection.collision_detection.collision_end_margin_time
and/or collision_detection.collision_detection.collision_end_margin_time
.
If the intersection wall appears too fast, or ego tends to stop too conservatively for upcoming vehicles, lower the parameter collision_detection.collision_detection.collision_start_margin_time
. If it lasts too long after the target vehicle passed, then lower the parameter collision_detection.collision_detection.collision_end_margin_time
.
If the traffic light color changed from AMBER/RED to UNKNOWN, the intersection module works in the GREEN color mode. So collision and occlusion are likely to be detected again.
"},{"location":"planning/behavior_velocity_intersection_module/#occlusion-is-detected-overly","title":"Occlusion is detected overly","text":"You can check which areas are detected as occlusion by visualizing /planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/intersection/occlusion_polygons
.
If you do not want to detect / do want to ignore occlusion far from ego or lower the computational cost of occlusion detection, occlusion.occlusion_attention_area_length
should be set to lower value.
If you want to care the occlusion nearby ego more cautiously, set occlusion.occlusion_required_clearance_distance
to a larger value. Then ego will approach the occlusion_peeking_stopline more closely to assure more clear FOV.
occlusion.possible_object_bbox
is used for checking if detected occlusion area is small enough that no vehicles larger than this size can exist inside. By decreasing this size ego will ignore small occluded area.
Refer to the document of probabilistic_occupancy_grid_map for details. If occlusion tends to be detected at apparently free space, increase occlusion.free_space_max
to ignore them.
intersection_occlusion feature is not recommended for use in planning_simulator because the laserscan_based_occupancy_grid_map generates unnatural UNKNOWN cells in 2D manner:
Also many users do not set traffic light information frequently although it is very critical for intersection_occlusion (and in real traffic environment too).
For these reasons, occlusion.enable
is false by default.
On real vehicle or in end-to-end simulator like AWSIM the following pointcloud_based_occupancy_grid_map configuration is highly recommended:
scan_origin_frame: \"velodyne_top\"\n\ngrid_map_type: \"OccupancyGridMapProjectiveBlindSpot\"\nOccupancyGridMapProjectiveBlindSpot:\nprojection_dz_threshold: 0.01 # [m] for avoiding null division\nobstacle_separation_threshold: 1.0 # [m] fill the interval between obstacles with unknown for this length\n
You should set the top lidar link as the scan_origin_frame
. In the example it is velodyne_top
. The method OccupancyGridMapProjectiveBlindSpot
estimates the FOV by running projective ray-tracing from scan_origin
to obstacle or up to the ground and filling the cells on the \"shadow\" of the object as UNKNOWN.
WIP
"},{"location":"planning/behavior_velocity_intersection_module/#merge-from-private","title":"Merge From Private","text":""},{"location":"planning/behavior_velocity_intersection_module/#role_1","title":"Role","text":"When an ego enters a public road from a private road (e.g. a parking lot), it needs to face and stop before entering the public road to make sure it is safe.
This module is activated when there is an intersection at the private area from which the vehicle enters the public road. The stop line is generated both when the goal is in the intersection lane and when the path goes beyond the intersection lane. The basic behavior is the same as the intersection module, but ego must stop once at the stop line.
"},{"location":"planning/behavior_velocity_intersection_module/#activation-timing","title":"Activation Timing","text":"This module is activated when the following conditions are met:
private
tagmerge_from_private_road/stop_duration_sec
double [m] time margin to change state"},{"location":"planning/behavior_velocity_intersection_module/#known-issue","title":"Known Issue","text":"If ego go over the stop line for a certain distance, then it will not transit from STOP.
"},{"location":"planning/behavior_velocity_no_drivable_lane_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_no_drivable_lane_module/#no-drivable-lane","title":"No Drivable Lane","text":""},{"location":"planning/behavior_velocity_no_drivable_lane_module/#role","title":"Role","text":"This module plans the velocity of the related part of the path in case there is a no drivable lane referring to it.
A no drivable lane is a lanelet or more that are out of operation design domain (ODD), i.e., the vehicle must not drive autonomously in this lanelet. A lanelet can be no drivable (out of ODD) due to many reasons, either technical limitations of the SW and/or HW, business requirements, safety considerations, .... etc, or even a combination of those.
Some examples of No Drivable Lanes
A lanelet becomes invalid by adding a new tag under the relevant lanelet in the map file <tag k=\"no_drivable_lane\" v=\"yes\"/>
.
The target of this module is to stop the vehicle before entering the no drivable lane (with configurable stop margin) or keep the vehicle stationary if autonomous mode started inside a no drivable lane. Then ask the human driver to take the responsibility of the driving task (Takeover Request / Request to Intervene)
"},{"location":"planning/behavior_velocity_no_drivable_lane_module/#activation-timing","title":"Activation Timing","text":"This function is activated when the lane id of the target path has an no drivable lane label (i.e. the no_drivable_lane
attribute is yes
).
stop_margin
double [m] margin for ego vehicle to stop before speed_bump print_debug_info
bool whether debug info will be printed or not"},{"location":"planning/behavior_velocity_no_drivable_lane_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"INIT
stateAPPROACHING
toward a no drivable lane if:stop_margin
INSIDE_NO_DRIVABLE_LANE
if:stop_margin
STOPPED
when the vehicle is completely stoppedno_drivable_lane
This module plans to avoid stop in 'no stopping area`.
no_stopping_area
, then vehicle stops inside no_stopping_area
so this module makes stop velocity in front of no_stopping_area
This module allows developers to design vehicle velocity in no_stopping_area
module using specific rules. Once ego vehicle go through pass through point, ego vehicle does't insert stop velocity and does't change decision from GO. Also this module only considers dynamic object in order to avoid unnecessarily stop.
state_clear_time
double [s] time to clear stop state stuck_vehicle_vel_thr
double [m/s] vehicles below this velocity are considered as stuck vehicle. stop_margin
double [m] margin to stop line at no stopping area dead_line_margin
double [m] if ego pass this position GO stop_line_margin
double [m] margin to auto-gen stop line at no stopping area detection_area_length
double [m] length of searching polygon stuck_vehicle_front_margin
double [m] obstacle stop max distance"},{"location":"planning/behavior_velocity_no_stopping_area_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#occlusion-spot","title":"Occlusion Spot","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#role","title":"Role","text":"This module plans safe velocity to slow down before reaching collision point that hidden object is darting out from occlusion spot
where driver can't see clearly because of obstacles.
This module is activated if launch_occlusion_spot
becomes true. To make pedestrian first zone map tag is one of the TODOs.
This module is prototype implementation to care occlusion spot. To solve the excessive deceleration due to false positive of the perception, the logic of detection method can be selectable. This point has not been discussed in detail and needs to be improved.
TODOs are written in each Inner-workings / Algorithms (see the description below).
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#logics-working","title":"Logics Working","text":"There are several types of occlusions, such as \"occlusions generated by parked vehicles\" and \"occlusions caused by obstructions\". In situations such as driving on road with obstacles, where people jump out of the way frequently, all possible occlusion spots must be taken into account. This module considers all occlusion spots calculated from the occupancy grid, but it is not reasonable to take into account all occlusion spots for example, people jumping out from behind a guardrail, or behind cruising vehicle. Therefore currently detection area will be limited to to use predicted object information.
Note that this decision logic is still under development and needs to be improved.
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#detectionarea-polygon","title":"DetectionArea Polygon","text":"This module considers TTV from pedestrian velocity and lateral distance to occlusion spot. TTC is calculated from ego velocity and acceleration and longitudinal distance until collision point using motion velocity smoother. To compute fast this module only consider occlusion spot whose TTV is less than TTC and only consider area within \"max lateral distance\".
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#occlusion-spot-occupancy-grid-base","title":"Occlusion Spot Occupancy Grid Base","text":"This module considers any occlusion spot around ego path computed from the occupancy grid. Due to the computational cost occupancy grid is not high resolution and this will make occupancy grid noisy so this module add information of occupancy to occupancy grid map.
TODO: consider hight of obstacle point cloud to generate occupancy grid.
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#collision-free-judgement","title":"Collision Free Judgement","text":"obstacle that can run out from occlusion should have free space until intersection from ego vehicle
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#partition-lanelet","title":"Partition Lanelet","text":"By using lanelet information of \"guard_rail\", \"fence\", \"wall\" tag, it's possible to remove unwanted occlusion spot.
By using static object information, it is possible to make occupancy grid more accurate.
To make occupancy grid for planning is one of the TODOs.
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#possible-collision","title":"Possible Collision","text":"obstacle that can run out from occlusion is interrupted by moving vehicle.
"},{"location":"planning/behavior_velocity_occlusion_spot_module/#about-safe-motion","title":"About safe motion","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#the-concept-of-safe-velocity-and-margin","title":"The Concept of Safe Velocity and Margin","text":"The safe slowdown velocity is calculated from the below parameters of ego emergency braking system and time to collision. Below calculation is included but change velocity dynamically is not recommended for planner.
time to collision of pedestrian[s] with these parameters we can briefly define safe motion before occlusion spot for ideal environment.
This module defines safe margin to consider ego distance to stop and collision path point geometrically. While ego is cruising from safe margin to collision path point, ego vehicle keeps the same velocity as occlusion spot safe velocity.
Note: This logic assumes high-precision vehicle speed tracking and margin for decel point might not be the best solution, and override with manual driver is considered if pedestrian really run out from occlusion spot.
TODO: consider one of the best choices
The maximum slowdown velocity is calculated from the below parameters of ego current velocity and acceleration with maximum slowdown jerk and maximum slowdown acceleration in order not to slowdown too much.
pedestrian_vel
double [m/s] maximum velocity assumed pedestrian coming out from occlusion point. pedestrian_radius
double [m] assumed pedestrian radius which fits in occlusion spot. Parameter Type Description use_object_info
bool [-] whether to reflect object info to occupancy grid map or not. use_partition_lanelet
bool [-] whether to use partition lanelet map data. Parameter /debug Type Description is_show_occlusion
bool [-] whether to show occlusion point markers.\u3000 is_show_cv_window
bool [-] whether to show open_cv debug window. is_show_processing_time
bool [-] whether to show processing time. Parameter /threshold Type Description detection_area_length
double [m] the length of path to consider occlusion spot stuck_vehicle_vel
double [m/s] velocity below this value is assumed to stop lateral_distance
double [m] maximum lateral distance to consider hidden collision Parameter /motion Type Description safety_ratio
double [-] safety ratio for jerk and acceleration max_slow_down_jerk
double [m/s^3] jerk for safe brake max_slow_down_accel
double [m/s^2] deceleration for safe brake non_effective_jerk
double [m/s^3] weak jerk for velocity planning. non_effective_acceleration
double [m/s^2] weak deceleration for velocity planning. min_allowed_velocity
double [m/s] minimum velocity allowed safe_margin
double [m] maximum error to stop with emergency braking system. Parameter /detection_area Type Description min_occlusion_spot_size
double [m] the length of path to consider occlusion spot slice_length
double [m] the distance of divided detection area max_lateral_distance
double [m] buffer around the ego path used to build the detection_area area. Parameter /grid Type Description free_space_max
double [-] maximum value of a free space cell in the occupancy grid occupied_min
double [-] buffer around the ego path used to build the detection_area area."},{"location":"planning/behavior_velocity_occlusion_spot_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#rough-overview-of-the-whole-process","title":"Rough overview of the whole process","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#detail-process-for-predicted-objectnot-updated","title":"Detail process for predicted object(not updated)","text":""},{"location":"planning/behavior_velocity_occlusion_spot_module/#detail-process-for-occupancy-grid-base","title":"Detail process for Occupancy grid base","text":""},{"location":"planning/behavior_velocity_out_of_lane_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_out_of_lane_module/#out-of-lane","title":"Out of Lane","text":""},{"location":"planning/behavior_velocity_out_of_lane_module/#role","title":"Role","text":"out_of_lane
is the module that decelerates and stops to prevent the ego vehicle from entering another lane with incoming dynamic objects.
This module is activated if launch_out_of_lane
is set to true.
The algorithm is made of the following steps.
In this first step, the ego footprint is projected at each path point and are eventually inflated based on the extra_..._offset
parameters.
In the second step, the set of lanes to consider for overlaps is generated. This set is built by selecting all lanelets within some distance from the ego vehicle, and then removing non-relevant lanelets. The selection distance is chosen as the maximum between the slowdown.distance_threshold
and the stop.distance_threshold
.
A lanelet is deemed non-relevant if it meets one of the following conditions.
In the third step, overlaps between the ego path footprints and the other lanes are calculated. For each pair of other lane \\(l\\) and ego path footprint \\(f\\), we calculate the overlapping polygons using boost::geometry::intersection
. For each overlapping polygon found, if the distance inside the other lane \\(l\\) is above the overlap.minimum_distance
threshold, then the overlap is ignored. Otherwise, the arc length range (relative to the ego path) and corresponding points of the overlapping polygons are stored. Ultimately, for each other lane \\(l\\), overlapping ranges of successive overlaps are built with the following information:
In the fourth step, a decision to either slow down or stop before each overlapping range is taken based on the dynamic objects. The conditions for the decision depend on the value of the mode
parameter.
Whether it is decided to slow down or stop is determined by the distance between the ego vehicle and the start of the overlapping range (in arc length along the ego path). If this distance is bellow the actions.slowdown.threshold
, a velocity of actions.slowdown.velocity
will be used. If the distance is bellow the actions.stop.threshold
, a velocity of 0
m/s will be used.
With the mode
set to \"threshold\"
, a decision to stop or slow down before a range is made if an incoming dynamic object is estimated to reach the overlap within threshold.time_threshold
.
With the mode
set to \"ttc\"
, estimates for the times when ego and the dynamic objects reach the start and end of the overlapping range are calculated. This is then used to calculate the time to collision over the period where ego crosses the overlap. If the time to collision is predicted to go bellow the ttc.threshold
, the decision to stop or slow down is made.
With the mode
set to \"intervals\"
, the estimated times when ego and the dynamic objects reach the start and end points of the overlapping range are used to create time intervals. These intervals can be made shorter or longer using the intervals.ego_time_buffer
and intervals.objects_time_buffer
parameters. If the time interval of ego overlaps with the time interval of an object, the decision to stop or slow down is made.
To estimate the times when ego will reach an overlap, it is assumed that ego travels along its path at its current velocity or at half the velocity of the path points, whichever is higher.
"},{"location":"planning/behavior_velocity_out_of_lane_module/#dynamic-objects","title":"Dynamic objects","text":"Two methods are used to estimate the time when a dynamic objects with reach some point. If objects.use_predicted_paths
is set to true
, the predicted paths of the dynamic object are used if their confidence value is higher than the value set by the objects.predicted_path_min_confidence
parameter. Otherwise, the lanelet map is used to estimate the distance between the object and the point and the time is calculated assuming the object keeps its current velocity.
Finally, for each decision to stop or slow down before an overlapping range, a point is inserted in the path. For a decision taken for an overlapping range with a lane \\(l\\) starting at ego path point index \\(i\\), a point is inserted in the path between index \\(i\\) and \\(i-1\\) such that the ego footprint projected at the inserted point does not overlap \\(l\\). Such point with no overlap must exist since, by definition of the overlapping range, we know that there is no overlap at \\(i-1\\).
If the point would cause a higher deceleration than allowed by the max_accel
parameter (node parameter), it is skipped.
Moreover, parameter action.distance_buffer
adds an extra distance between the ego footprint and the overlap when possible.
mode
string [-] mode used to consider a dynamic object. Candidates: threshold, intervals, ttc skip_if_already_overlapping
bool [-] if true, do not run this module when ego already overlaps another lane Parameter /threshold Type Description time_threshold
double [s] consider objects that will reach an overlap within this time Parameter /intervals Type Description ego_time_buffer
double [s] extend the ego time interval by this buffer objects_time_buffer
double [s] extend the time intervals of objects by this buffer Parameter /ttc Type Description threshold
double [s] consider objects with an estimated time to collision bellow this value while ego is on the overlap Parameter /objects Type Description minimum_velocity
double [m/s] ignore objects with a velocity lower than this value predicted_path_min_confidence
double [-] minimum confidence required for a predicted path to be considered use_predicted_paths
bool [-] if true, use the predicted paths to estimate future positions; if false, assume the object moves at constant velocity along all lanelets it currently is located in Parameter /overlap Type Description minimum_distance
double [m] minimum distance inside a lanelet for an overlap to be considered extra_length
double [m] extra arc length to add to the front and back of an overlap (used to calculate enter/exit times) Parameter /action Type Description skip_if_over_max_decel
bool [-] if true, do not take an action that would cause more deceleration than the maximum allowed distance_buffer
double [m] buffer distance to try to keep between the ego footprint and lane slowdown.distance_threshold
double [m] insert a slow down when closer than this distance from an overlap slowdown.velocity
double [m] slow down velocity stop.distance_threshold
double [m] insert a stop when closer than this distance from an overlap Parameter /ego Type Description extra_front_offset
double [m] extra front distance to add to the ego footprint extra_rear_offset
double [m] extra rear distance to add to the ego footprint extra_left_offset
double [m] extra left distance to add to the ego footprint extra_right_offset
double [m] extra right distance to add to the ego footprint"},{"location":"planning/behavior_velocity_planner/","title":"Behavior Velocity Planner","text":""},{"location":"planning/behavior_velocity_planner/#behavior-velocity-planner","title":"Behavior Velocity Planner","text":""},{"location":"planning/behavior_velocity_planner/#overview","title":"Overview","text":"behavior_velocity_planner
is a planner that adjust velocity based on the traffic rules. It loads modules as plugins. Please refer to the links listed below for detail on each module.
When each module plans velocity, it considers based on base_link
(center of rear-wheel axis) pose. So for example, in order to stop at a stop line with the vehicles' front on the stop line, it calculates base_link
position from the distance between base_link
to front and modifies path velocity from the base_link
position.
~input/path_with_lane_id
autoware_auto_planning_msgs::msg::PathWithLaneId path with lane_id ~input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin vector map ~input/vehicle_odometry
nav_msgs::msg::Odometry vehicle velocity ~input/dynamic_objects
autoware_auto_perception_msgs::msg::PredictedObjects dynamic objects ~input/no_ground_pointcloud
sensor_msgs::msg::PointCloud2 obstacle pointcloud ~/input/compare_map_filtered_pointcloud
sensor_msgs::msg::PointCloud2 obstacle pointcloud filtered by compare map. Note that this is used only when the detection method of run out module is Points. ~input/traffic_signals
autoware_perception_msgs::msg::TrafficSignalArray traffic light states"},{"location":"planning/behavior_velocity_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/path
autoware_auto_planning_msgs::msg::Path path to be followed ~output/stop_reasons
tier4_planning_msgs::msg::StopReasonArray reasons that cause the vehicle to stop"},{"location":"planning/behavior_velocity_planner/#node-parameters","title":"Node parameters","text":"Parameter Type Description launch_modules
vector<string> module names to launch forward_path_length
double forward path length backward_path_length
double backward path length max_accel
double (to be a global parameter) max acceleration of the vehicle system_delay
double (to be a global parameter) delay time until output control command delay_response_time
double (to be a global parameter) delay time of the vehicle's response to control commands"},{"location":"planning/behavior_velocity_planner_common/","title":"Behavior Velocity Planner Common","text":""},{"location":"planning/behavior_velocity_planner_common/#behavior-velocity-planner-common","title":"Behavior Velocity Planner Common","text":"This package provides common functions as a library, which are used in the behavior_velocity_planner
node and modules.
run_out
is the module that decelerates and stops for dynamic obstacles such as pedestrians and bicycles.
This module is activated if launch_run_out
becomes true
Calculate the expected target velocity for the ego vehicle path to calculate time to collision with obstacles more precisely. The expected target velocity is calculated with motion velocity smoother module by using current velocity, current acceleration and velocity limits directed by the map and external API.
"},{"location":"planning/behavior_velocity_run_out_module/#extend-the-path","title":"Extend the path","text":"The path is extended by the length of base link to front to consider obstacles after the goal.
"},{"location":"planning/behavior_velocity_run_out_module/#trim-path-from-ego-position","title":"Trim path from ego position","text":"The path is trimmed from ego position to a certain distance to reduce calculation time. Trimmed distance is specified by parameter of detection_distance
.
This module can handle multiple types of obstacles by creating abstracted dynamic obstacle data layer. Currently we have 3 types of detection method (Object, ObjectWithoutPath, Points) to create abstracted obstacle data.
"},{"location":"planning/behavior_velocity_run_out_module/#abstracted-dynamic-obstacle","title":"Abstracted dynamic obstacle","text":"Abstracted obstacle data has following information.
Name Type Description posegeometry_msgs::msg::Pose
pose of the obstacle classifications std::vector<autoware_auto_perception_msgs::msg::ObjectClassification>
classifications with probability shape autoware_auto_perception_msgs::msg::Shape
shape of the obstacle predicted_paths std::vector<DynamicObstacle::PredictedPath>
predicted paths with confidence. this data doesn't have time step because we use minimum and maximum velocity instead. min_velocity_mps float
minimum velocity of the obstacle. specified by parameter of dynamic_obstacle.min_vel_kmph
max_velocity_mps float
maximum velocity of the obstacle. specified by parameter of dynamic_obstacle.max_vel_kmph
Enter the maximum/minimum velocity of the object as a parameter, adding enough margin to the expected velocity. This parameter is used to create polygons for collision detection.
Future work: Determine the maximum/minimum velocity from the estimated velocity with covariance of the object
"},{"location":"planning/behavior_velocity_run_out_module/#3-types-of-detection-method","title":"3 types of detection method","text":"We have 3 types of detection method to meet different safety and availability requirements. The characteristics of them are shown in the table below. Method of Object
has high availability (less false positive) because it detects only objects whose predicted path is on the lane. However, sometimes it is not safe because perception may fail to detect obstacles or generate incorrect predicted paths. On the other hand, method of Points
has high safety (less false negative) because it uses pointcloud as input. Since points don't have a predicted path, the path that moves in the direction normal to the path of ego vehicle is considered to be the predicted path of abstracted dynamic obstacle data. However, without proper adjustment of filter of points, it may detect a lot of points and it will result in very low availability. Method of ObjectWithoutPath
has the characteristics of an intermediate of Object
and Points
.
This module can exclude the obstacles outside of partition such as guardrail, fence, and wall. We need lanelet map that has the information of partition to use this feature. By this feature, we can reduce unnecessary deceleration by obstacles that are unlikely to jump out to the lane. You can choose whether to use this feature by parameter of use_partition_lanelet
.
Along the ego vehicle path, the points where collision detection is performed are determined at intervals of detection_span
.
The travel time to each point is calculated from the expected target velocity.
For each point, collision detection is performed using the footprint polygon of the ego vehicle and the polygon of the predicted location of the obstacles. The predicted location of the obstacles is described as a rectangle or polygon whose range is calculated from the minimum velocity, the maximum velocity, and the ego vehicle's travel time to the point. If the input type of the dynamic obstacle is Points, the obstacle shape is defined as a small cylinder.
Since collision detection is calculated between two polygons, multiple points are detected as collision points. Therefore, the point that is on the same side as the obstacle and closest to the ego vehicle is selected as the collision point.
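As a minimal sketch of the range computation described above (function and type names are illustrative; the actual module builds full polygons from this range and the obstacle shape):

// Reachable interval of an obstacle along its predicted path after the
// ego vehicle's travel time to the detection point, assuming the
// obstacle's velocity stays within [min_velocity, max_velocity].
struct TravelRange
{
  double min_dist_m;
  double max_dist_m;
};

TravelRange calcObstacleTravelRange(
  double min_velocity_mps, double max_velocity_mps, double ego_travel_time_s)
{
  return TravelRange{min_velocity_mps * ego_travel_time_s, max_velocity_mps * ego_travel_time_s};
}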
"},{"location":"planning/behavior_velocity_run_out_module/#insert-velocity","title":"Insert velocity","text":""},{"location":"planning/behavior_velocity_run_out_module/#insert-velocity-to-decelerate-for-obstacles","title":"Insert velocity to decelerate for obstacles","text":"If the collision is detected, stop point is inserted on distance of base link to front + stop margin from the selected collision point. The base link to front means the distance between base_link (center of rear-wheel axis) and front of the car. Stop margin is determined by the parameter of stop_margin
.
If you select the Points or ObjectWithoutPath method, the ego vehicle sometimes keeps stopping in front of the obstacle. To avoid this problem, this feature has an option to approach the obstacle at a slow velocity after stopping. If the parameter approaching.enable is set to true, the ego vehicle approaches the obstacle after it has stopped for state.stop_time_thresh seconds. The maximum approach velocity can be specified by the parameter approaching.limit_vel_kmph. The decision to approach the obstacle is determined by a simple state transition, as shown in the following image.
The maximum slowdown velocity is calculated so as not to slow down too much. See the Occlusion Spot document for more details. You can choose whether to use this feature with the parameter slow_down_limit.enable
.
detection_method
string [-] candidate: Object, ObjectWithoutPath, Points use_partition_lanelet
bool [-] whether to use partition lanelet map data specify_decel_jerk
bool [-] whether to specify jerk when ego decelerates stop_margin
double [m] the vehicle decelerates to be able to stop with this margin passing_margin
double [m] the vehicle begins to accelerate if the vehicle's front in predicted position is ahead of the obstacle + this margin deceleration_jerk
double [m/s^3] ego decelerates with this jerk when stopping for obstacles detection_distance
double [m] ahead distance from ego to detect the obstacles detection_span
double [m] calculate collision with this span to reduce calculation time min_vel_ego_kmph
double [km/h] min velocity to calculate time to collision Parameter /detection_area Type Description margin_ahead
double [m] ahead margin for detection area polygon margin_behind
double [m] behind margin for detection area polygon Parameter /dynamic_obstacle Type Description use_mandatory_area
double [-] whether to use mandatory detection area assume_fixed_velocity.enable
double [-] If enabled, the obstacle's velocity is assumed to be within the minimum and maximum velocity values specified below assume_fixed_velocity.min_vel_kmph
double [km/h] minimum velocity for dynamic obstacles assume_fixed_velocity.max_vel_kmph
double [km/h] maximum velocity for dynamic obstacles diameter
double [m] diameter of obstacles. used for creating dynamic obstacles from points height
double [m] height of obstacles. used for creating dynamic obstacles from points max_prediction_time
double [sec] create predicted path until this time time_step
double [sec] time step for each path step. used for creating dynamic obstacles from points or objects without path points_interval
double [m] divide obstacle points into groups with this interval, and detect only lateral nearest point. used only for Points method Parameter /approaching Type Description enable
bool [-] whether to enable approaching after stopping margin
double [m] distance on how close ego approaches the obstacle limit_vel_kmph
double [km/h] limit velocity for approaching after stopping Parameter /state Type Description stop_thresh
double [m/s] threshold to decide if ego is stopping stop_time_thresh
double [sec] threshold for stopping time to transit to approaching state disable_approach_dist
double [m] end the approaching state if distance to the obstacle is longer than this value keep_approach_duration
double [sec] keep approach state for this duration to avoid chattering of state transition Parameter /slow_down_limit Type Description enable
bool [-] whether to enable to limit velocity with max jerk and acc max_jerk
double [m/s^3] minimum jerk deceleration for safe brake. max_acc
double [m/s^2] minimum accel deceleration for safe brake. Parameter /ignore_momentary_detection Type Description enable
bool [-] whether to ignore momentary detection time_threshold
double [sec] ignores detections that persist for less than this duration"},{"location":"planning/behavior_velocity_run_out_module/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"This module plans the velocity of the related part of the path in case there is a speed bump regulatory element referring to it.
"},{"location":"planning/behavior_velocity_speed_bump_module/#activation-timing","title":"Activation Timing","text":"The manager launch speed bump scene module when there is speed bump regulatory element referring to the reference path.
"},{"location":"planning/behavior_velocity_speed_bump_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Descriptionslow_start_margin
double [m] margin for ego vehicle to slow down before speed_bump slow_end_margin
double [m] margin for ego vehicle to accelerate after speed_bump print_debug_info
bool whether debug info will be printed or not"},{"location":"planning/behavior_velocity_speed_bump_module/#speed-calculation","title":"Speed Calculation","text":"min_height
double [m] minimum height assumption of the speed bump max_height
double [m] maximum height assumption of the speed bump min_speed
double [m/s] minimum speed assumption of slow down speed max_speed
double [m/s] maximum speed assumption of slow down speed"},{"location":"planning/behavior_velocity_speed_bump_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"slow_down_speed
wrt to speed_bump_height
specified in regulatory element or read slow_down_speed
tag from speed bump annotation if availableNote: If in speed bump annotation slow_down_speed
tag is used then calculating the speed wrt the speed bump height will be ignored. In such case, specified slow_down_speed
value in [kph] is being used.
slow_start_point
& slow_end_point
wrt the intersection points and insert them to pathslow_start_point
or slow_end_point
can not be inserted with given/calculated offset values check if any path point can be virtually assigned as slow_start_point
or slow_end_point
slow_down_speed
to the path points between slow_start_point
or slow_end_point
This module plans the velocity so that the vehicle can stop right before stop lines and restart driving after stopping.
"},{"location":"planning/behavior_velocity_stop_line_module/#activation-timing","title":"Activation Timing","text":"This module is activated when there is a stop line in a target lane.
"},{"location":"planning/behavior_velocity_stop_line_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Descriptionstop_margin
double a margin that the vehicle tries to stop before stop_line stop_duration_sec
double [s] time parameter for the ego vehicle to stop in front of a stop line hold_stop_margin_distance
double [m] parameter for restart prevention (See Algorithm section). Also, when the ego vehicle is within this distance from a stop line, the ego state becomes STOPPED from APPROACHING use_initialization_stop_state
bool A flag to determine whether to return to the approaching state when the vehicle moves away from a stop line. show_stop_line_collision_check
bool A flag to determine whether to show the debug information of collision check with a stop line"},{"location":"planning/behavior_velocity_stop_line_module/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"stop_duration_sec
seconds.This algorithm is based on segment
. segment
consists of two node points. It's useful for removing boundary conditions because if segment(i)
exists we can assume node(i)
and node(i+1)
exist.
First, this algorithm finds a collision between the reference path and the stop line. Then, we can get the collision segment
and collision point
.
Next, based on collision point
, it finds offset segment
by iterating backward points up to a specific offset length. The offset length is stop_margin
(parameter) + base_link to front
(to adjust head pose to stop line). Then, we can get offset segment
and offset from segment start
.
After that, we can calculate an offset point from the offset segment
and offset
. This will be stop_pose
.
If the vehicle needs X meters (e.g. 0.5 meters) to stop once it starts moving, due to poor vehicle control performance, it may go past the stopping position that should be strictly observed when it starts moving in order to approach a nearby stop point (e.g. 0.3 meters away).
This module has parameter hold_stop_margin_distance
in order to prevent such redundant restarts. If the vehicle is stopped within hold_stop_margin_distance
meters from the stop point of the module (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped at the module's stop point and plans to keep stopping at the current position even if the vehicle stopped due to other factors.
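A minimal sketch of this judgment (the function and variable names are illustrative):

// Keep stopping at the current position if the vehicle already stopped
// close enough to the module's stop point.
bool shouldHoldCurrentStopPosition(
  bool is_vehicle_stopped, double front_to_stop_line_m, double hold_stop_margin_distance_m)
{
  return is_vehicle_stopped && front_to_stop_line_m < hold_stop_margin_distance_m;
}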
parameters
outside the hold_stop_margin_distance
inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_template_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_template_module/#template","title":"Template","text":"
A template for behavior velocity modules based on the behavior_velocity_speed_bump_module.
"},{"location":"planning/behavior_velocity_template_module/#autoware-behavior-velocity-module-template","title":"Autoware Behavior Velocity Module Template","text":""},{"location":"planning/behavior_velocity_template_module/#scene","title":"Scene
","text":""},{"location":"planning/behavior_velocity_template_module/#templatemodule-class","title":"TemplateModule
Class","text":"The TemplateModule
class serves as a foundation for creating a scene module within the Autoware behavior velocity planner. It defines the core methods and functionality needed for the module's behavior. You should replace the placeholder code with actual implementations tailored to your specific behavior velocity module.
TemplateModule
takes the essential parameters to create a module: const int64_t module_id
, const rclcpp::Logger & logger
, and const rclcpp::Clock::SharedPtr clock
. These parameters are supplied by the TemplateModuleManager
when registering a new module. Other parameters can be added to the constructor, if required by your specific module implementation.modifyPathVelocity
Method","text":"TemplateModule
class, is expected to modify the velocity of the input path based on certain conditions. In the provided code, it logs an informational message once when the template module is executing.createDebugMarkerArray
Method","text":"TemplateModule
class, is responsible for creating a visualization of debug markers and returning them as a visualization_msgs::msg::MarkerArray
. In the provided code, it returns an empty MarkerArray
.createVirtualWalls
Method","text":"createVirtualWalls
method creates virtual walls for the scene and returns them as motion_utils::VirtualWalls
. In the provided code, it returns an empty VirtualWalls
object.Manager
","text":"The managing of your modules is defined in manager.hpp and manager.cpp. The managing is handled by two classes:
TemplateModuleManager
class defines the core logic for managing and launching the behavior_velocity_template scenes (defined in behavior_velocity_template_module/src/scene.cpp/hpp). It inherits essential manager attributes from its parent class SceneModuleManagerInterface
.TemplateModulePlugin
class provides a way to integrate the TemplateModuleManager
into the logic of the Behavior Velocity Planner.TemplateModuleManager
Class","text":""},{"location":"planning/behavior_velocity_template_module/#constructor-templatemodulemanager","title":"Constructor TemplateModuleManager
","text":"TemplateModuleManager
class, and it takes an rclcpp::Node
reference as a parameter.dummy_parameter
to 0.0.getModuleName()
Method","text":"SceneModuleManagerInterface
class.launchNewModules()
Method","text":"autoware_auto_planning_msgs::msg::PathWithLaneId
.TemplateModule
class.module_id
to 0 and checks if a module with the same ID is already registered. If not, it registers a new TemplateModule
with the module ID. Note that each module managed by the TemplateModuleManager
should have a unique ID. The template code registers a single module, so the module_id
is set as 0 for simplicity.getModuleExpiredFunction()
Method","text":"autoware_auto_planning_msgs::msg::PathWithLaneId
.std::function<bool(const std::shared_ptr<SceneModuleInterface>&)>
. This function is used by the behavior velocity planner to determine whether a particular module has expired or not based on the given path.Please note that the specific functionality of the methods launchNewModules()
and getModuleExpiredFunction()
would depend on the details of your behavior velocity modules and how they are intended to be managed within the Autoware system. You would need to implement these methods according to your module's requirements.
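A sketch of the shape of such an expiration predicate is shown below; makeExpiredFunction and the lane-id condition are illustrative assumptions, not the template code itself:

#include <functional>
#include <memory>

// Illustrative expiration predicate: report a module as expired when its
// module id (used here as a lane id) no longer appears in the given path.
std::function<bool(const std::shared_ptr<SceneModuleInterface> &)> makeExpiredFunction(
  const autoware_auto_planning_msgs::msg::PathWithLaneId & path)
{
  return [&path](const std::shared_ptr<SceneModuleInterface> & module) {
    for (const auto & point : path.points) {
      for (const auto lane_id : point.lane_ids) {
        if (lane_id == module->getModuleId()) {
          return false;  // still referenced by the path -> keep the module
        }
      }
    }
    return true;  // not referenced anymore -> expired
  };
}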
TemplateModulePlugin
Class","text":""},{"location":"planning/behavior_velocity_template_module/#templatemoduleplugin-class_1","title":"TemplateModulePlugin
Class","text":"PluginWrapper<TemplateModuleManager>
. It essentially wraps your TemplateModuleManager
class within a plugin, which can be loaded and managed dynamically.Example Usage
","text":"In the following example, we take each point of the path, and multiply it by 2. Essentially duplicating the speed. Note that the velocity smoother will further modify the path speed after all the behavior velocity modules are executed.
bool TemplateModule::modifyPathVelocity(\n[[maybe_unused]] PathWithLaneId * path, [[maybe_unused]] StopReason * stop_reason)\n{\nfor (auto & p : path->points) {\np.point.longitudinal_velocity_mps *= 2.0;\n}\n\nreturn false;\n}\n
"},{"location":"planning/behavior_velocity_traffic_light_module/","title":"Index","text":""},{"location":"planning/behavior_velocity_traffic_light_module/#traffic-light","title":"Traffic Light","text":""},{"location":"planning/behavior_velocity_traffic_light_module/#role","title":"Role","text":"Judgement whether a vehicle can go into an intersection or not by traffic light status, and planning a velocity of the stop if necessary. This module is designed for rule-based velocity decision that is easy for developers to design its behavior. It generates proper velocity for traffic light scene.
"},{"location":"planning/behavior_velocity_traffic_light_module/#limitations","title":"Limitations","text":"This module allows developers to design STOP/GO in traffic light module using specific rules. Due to the property of rule-based planning, the algorithm is greatly depends on object detection and perception accuracy considering traffic light. Also, this module only handles STOP/Go at traffic light scene, so rushing or quick decision according to traffic condition is future work.
"},{"location":"planning/behavior_velocity_traffic_light_module/#activation-timing","title":"Activation Timing","text":"This module is activated when there is traffic light in ego lane.
"},{"location":"planning/behavior_velocity_traffic_light_module/#algorithm","title":"Algorithm","text":"Obtains a traffic light mapped to the route and a stop line correspond to the traffic light from a map information.
Uses the highest reliability one of the traffic light recognition result and if the color of that was not green or corresponding arrow signal, generates a stop point.
stop_time_hysteresis
, it treats as a signal to pass. This feature is to prevent chattering.When vehicle current velocity is
When it to be judged that vehicle can\u2019t stop before stop line, autoware chooses one of the following behaviors
yellow lamp line
It\u2019s called \u201cyellow lamp line\u201d which shows the distance traveled by the vehicle during yellow lamp.
dilemma zone
It\u2019s called \u201cdilemma zone\u201d which satisfies following conditions:
vehicle can\u2019t stop under deceleration and jerk limit.(left side of the pass judge curve)
\u21d2emergency stop(relax deceleration and jerk limitation in order to observe the traffic regulation)
optional zone
It\u2019s called \u201coptional zone\u201d which satisfies following conditions:
vehicle can stop under deceleration and jerk limit.(right side of the pass judge curve)
\u21d2 stop(autoware selects the safety choice)
stop_margin
double [m] margin before stop point tl_state_timeout
double [s] time out for detected traffic light result. stop_time_hysteresis
double [s] time threshold to decide stop planning for chattering prevention yellow_lamp_period
double [s] time for yellow lamp enable_pass_judge
bool [-] whether to use pass judge"},{"location":"planning/behavior_velocity_traffic_light_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_traffic_light_module/#known-limits","title":"Known Limits","text":"Autonomous vehicles have to cooperate with the infrastructures such as:
The following items are example cases:
Traffic control by traffic lights with V2X support
Intersection coordination of multiple vehicles by FMS.
It's possible to make each function individually, however, the use cases can be generalized with these three elements.
start
: Start a cooperation procedure after the vehicle enters a certain zone.stop
: Stop at a defined stop line according to the status received from infrastructures.end
: Finalize the cooperation procedure after the vehicle reaches the exit zone. This should be done within the range of stable communication.This module sends/receives status from infrastructures and plans the velocity of the cooperation result.
"},{"location":"planning/behavior_velocity_virtual_traffic_light_module/#system-configuration-diagram","title":"System Configuration Diagram","text":"Planner and each infrastructure communicate with each other using common abstracted messages.
FMS: Intersection coordination when multiple vehicles are in operation and the relevant lane is occupied
Support different communication methods for different infrastructures
Have different meta-information for each geographic location
FMS: Fleet Management System
"},{"location":"planning/behavior_velocity_virtual_traffic_light_module/#module-parameters","title":"Module Parameters","text":"Parameter Type Descriptionmax_delay_sec
double [s] maximum allowed delay for command near_line_distance
double [m] threshold distance to stop line to check ego stop. dead_line_margin
double [m] threshold distance that this module continue to insert stop line. hold_stop_margin_distance
double [m] parameter for restart prevention (See following section) check_timeout_after_stop_line
bool [-] check timeout to stop when linkage is disconnected"},{"location":"planning/behavior_velocity_virtual_traffic_light_module/#restart-prevention","title":"Restart prevention","text":"If it needs X meters (e.g. 0.5 meters) to stop once the vehicle starts moving due to the poor vehicle control performance, the vehicle goes over the stopping position that should be strictly observed when the vehicle starts to moving in order to approach the near stop point (e.g. 0.3 meters away).
This module has parameter hold_stop_margin_distance
in order to prevent from these redundant restart. If the vehicle is stopped within hold_stop_margin_distance
meters from stop point of the module (_front_to_stop_line < hold_stop_margin_distance), the module judges that the vehicle has already stopped for the module's stop point and plans to keep stopping current position even if the vehicle is stopped due to other factors.
parameters
outside the hold_stop_margin_distance
inside the hold_stop_margin_distance"},{"location":"planning/behavior_velocity_virtual_traffic_light_module/#flowchart","title":"Flowchart","text":""},{"location":"planning/behavior_velocity_virtual_traffic_light_module/#map-format","title":"Map Format","text":"
This module decide to stop before the ego will cross the walkway including crosswalk to enter or exit the private area.
"},{"location":"planning/costmap_generator/","title":"costmap_generator","text":""},{"location":"planning/costmap_generator/#costmap_generator","title":"costmap_generator","text":""},{"location":"planning/costmap_generator/#costmap_generator_node","title":"costmap_generator_node","text":"This node reads PointCloud
and/or DynamicObjectArray
and creates an OccupancyGrid
and GridMap
. VectorMap(Lanelet2)
is optional.
~input/objects
autoware_auto_perception_msgs::PredictedObjects predicted objects, for obstacles areas ~input/points_no_ground
sensor_msgs::PointCloud2 ground-removed points, for obstacle areas which can't be detected as objects ~input/vector_map
autoware_auto_mapping_msgs::HADMapBin vector map, for drivable areas ~input/scenario
tier4_planning_msgs::Scenario scenarios to be activated, for node activation"},{"location":"planning/costmap_generator/#output-topics","title":"Output topics","text":"Name Type Description ~output/grid_map
grid_map_msgs::GridMap costmap as GridMap, values are from 0.0 to 1.0 ~output/occupancy_grid
nav_msgs::OccupancyGrid costmap as OccupancyGrid, values are from 0 to 100"},{"location":"planning/costmap_generator/#output-tfs","title":"Output TFs","text":"None
"},{"location":"planning/costmap_generator/#how-to-launch","title":"How to launch","text":"Execute the command source install/setup.bash
to setup the environment
Run ros2 launch costmap_generator costmap_generator.launch.xml
to launch the node
update_rate
double timer's update rate activate_by_scenario
bool if true, activate by scenario = parking. Otherwise, activate if vehicle is inside parking lot. use_objects
bool whether using ~input/objects
or not use_points
bool whether using ~input/points_no_ground
or not use_wayarea
bool whether using wayarea
from ~input/vector_map
or not use_parkinglot
bool whether using parkinglot
from ~input/vector_map
or not costmap_frame
string created costmap's coordinate vehicle_frame
string vehicle's coordinate map_frame
string map's coordinate grid_min_value
double minimum cost for gridmap grid_max_value
double maximum cost for gridmap grid_resolution
double resolution for gridmap grid_length_x
int size of gridmap for x direction grid_length_y
int size of gridmap for y direction grid_position_x
int offset from coordinate in x direction grid_position_y
int offset from coordinate in y direction maximum_lidar_height_thres
double maximum height threshold for pointcloud data minimum_lidar_height_thres
double minimum height threshold for pointcloud data expand_rectangle_size
double expand object's rectangle with this value size_of_expansion_kernel
int kernel size for blurring effect on object's costmap"},{"location":"planning/costmap_generator/#flowchart","title":"Flowchart","text":""},{"location":"planning/external_velocity_limit_selector/","title":"External Velocity Limit Selector","text":""},{"location":"planning/external_velocity_limit_selector/#external-velocity-limit-selector","title":"External Velocity Limit Selector","text":""},{"location":"planning/external_velocity_limit_selector/#purpose","title":"Purpose","text":"The external_velocity_limit_selector_node
is a node that keeps consistency of external velocity limits. This module subscribes
VelocityLimit.msg contains not only max velocity but also information about the acceleration/jerk constraints on deceleration. The external_velocity_limit_selector_node
integrates the lowest velocity limit and the highest jerk constraint to calculate the hardest velocity limit that protects all the deceleration points and max velocities sent by API and Autoware internal modules.
WIP
"},{"location":"planning/external_velocity_limit_selector/#inputs","title":"Inputs","text":"Name Type Description~input/velocity_limit_from_api
tier4_planning_msgs::VelocityLimit velocity limit from api ~input/velocity_limit_from_internal
tier4_planning_msgs::VelocityLimit velocity limit from autoware internal modules ~input/velocity_limit_clear_command_from_internal
tier4_planning_msgs::VelocityLimitClearCommand velocity limit clear command"},{"location":"planning/external_velocity_limit_selector/#outputs","title":"Outputs","text":"Name Type Description ~output/max_velocity
tier4_planning_msgs::VelocityLimit current information of the hardest velocity limit"},{"location":"planning/external_velocity_limit_selector/#parameters","title":"Parameters","text":"Parameter Type Description max_velocity
double default max velocity [m/s] normal.min_acc
double minimum acceleration [m/ss] normal.max_acc
double maximum acceleration [m/ss] normal.min_jerk
double minimum jerk [m/sss] normal.max_jerk
double maximum jerk [m/sss] limit.min_acc
double minimum acceleration to be observed [m/ss] limit.max_acc
double maximum acceleration to be observed [m/ss] limit.min_jerk
double minimum jerk to be observed [m/sss] limit.max_jerk
double maximum jerk to be observed [m/sss]"},{"location":"planning/external_velocity_limit_selector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/external_velocity_limit_selector/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"planning/external_velocity_limit_selector/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"planning/external_velocity_limit_selector/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"planning/external_velocity_limit_selector/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/freespace_planner/","title":"The `freespace_planner`","text":""},{"location":"planning/freespace_planner/#the-freespace_planner","title":"The freespace_planner
","text":""},{"location":"planning/freespace_planner/#freespace_planner_node","title":"freespace_planner_node","text":"freespace_planner_node
is a global path planner node that plans a trajectory in a space containing static/dynamic obstacles. This node is currently based on the Hybrid A* search algorithm in the freespace_planning_algorithms
package. Other algorithms, such as RRT*, will also be added and made selectable in the future.
Note: due to the constraints of trajectory following, the output trajectory is split so that it includes only a single-direction path. In other words, the output trajectory doesn't include both forward and backward trajectories at once.
"},{"location":"planning/freespace_planner/#input-topics","title":"Input topics","text":"Name Type Description~input/route
autoware_auto_planning_msgs::Route route and goal pose ~input/occupancy_grid
nav_msgs::OccupancyGrid costmap, for drivable areas ~input/odometry
nav_msgs::Odometry vehicle velocity, for checking whether vehicle is stopped ~input/scenario
tier4_planning_msgs::Scenario scenarios to be activated, for node activation"},{"location":"planning/freespace_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/trajectory
autoware_auto_planning_msgs::Trajectory trajectory to be followed is_completed
bool (implemented as rosparam) whether all split trajectory are published"},{"location":"planning/freespace_planner/#output-tfs","title":"Output TFs","text":"None
"},{"location":"planning/freespace_planner/#how-to-launch","title":"How to launch","text":"freespace_planner.launch
or add args when executing roslaunch
roslaunch freespace_planner freespace_planner.launch
planning_algorithms
string algorithms used in the node vehicle_shape_margin_m
float collision margin in planning algorithm update_rate
double timer's update rate waypoints_velocity
double velocity in output trajectory (currently, only constant velocity is supported) th_arrived_distance_m
double threshold distance to check if vehicle has arrived at the trajectory's endpoint th_stopped_time_sec
double threshold time to check if vehicle is stopped th_stopped_velocity_mps
double threshold velocity to check if vehicle is stopped th_course_out_distance_m
double threshold distance to check if vehicle is out of course vehicle_shape_margin_m
double vehicle margin replan_when_obstacle_found
bool whether replanning when obstacle has found on the trajectory replan_when_course_out
bool whether replanning when vehicle is out of course"},{"location":"planning/freespace_planner/#planner-common-parameters","title":"Planner common parameters","text":"Parameter Type Description time_limit
double time limit of planning minimum_turning_radius
double minimum turning radius of robot maximum_turning_radius
double maximum turning radius of robot theta_size
double the number of angle's discretization lateral_goal_range
double goal range of lateral position longitudinal_goal_range
double goal range of longitudinal position angle_goal_range
double goal range of angle curve_weight
double additional cost factor for curve actions reverse_weight
double additional cost factor for reverse actions obstacle_threshold
double threshold for regarding a certain grid as obstacle"},{"location":"planning/freespace_planner/#a-search-parameters","title":"A* search parameters","text":"Parameter Type Description only_behind_solutions
bool whether restricting the solutions to be behind the goal use_back
bool whether using backward trajectory distance_heuristic_weight
double heuristic weight for estimating node's cost"},{"location":"planning/freespace_planner/#rrt-search-parameters","title":"RRT* search parameters","text":"Parameter Type Description max planning time
double maximum planning time [msec] (used only when enable_update
is set true
) enable_update
bool whether update after feasible solution found until max_planning time
elapse use_informed_sampling
bool Use informed RRT* (of Gammell et al.) neighbor_radius
double neighbor radius of RRT* algorithm margin
double safety margin ensured in path's collision checking in RRT* algorithm"},{"location":"planning/freespace_planner/#flowchart","title":"Flowchart","text":""},{"location":"planning/freespace_planning_algorithms/","title":"freespace planning algorithms","text":""},{"location":"planning/freespace_planning_algorithms/#freespace-planning-algorithms","title":"freespace planning algorithms","text":""},{"location":"planning/freespace_planning_algorithms/#role","title":"Role","text":"This package is for development of path planning algorithms in free space.
"},{"location":"planning/freespace_planning_algorithms/#implemented-algorithms","title":"Implemented algorithms","text":"Please see rrtstar.md for a note on the implementation for informed-RRT*.
NOTE: As for RRT*, one can choose whether update after feasible solution found in RRT*. If not doing so, the algorithm is the almost (but exactly because of rewiring procedure) same as vanilla RRT. If you choose update, then you have option if the sampling after feasible solution found is \"informed\". If set true, then the algorithm is equivalent to informed RRT\\* of Gammell et al. 2014
.
There is a trade-off between algorithm speed and resulting solution quality. When we sort the algorithms by the spectrum of (high quality solution/ slow) -> (low quality solution / fast) it would be A* -> informed RRT* -> RRT. Note that in almost all case informed RRT* is better than RRT* for solution quality given the same computational time budget. So, RRT* is omitted in the comparison.
Some selection criteria would be:
AbstractPlanningAlgorithm
class. If necessary, please overwrite the virtual functions.nav_msgs::OccupancyGrid
-typed costmap. Thus, AbstractPlanningAlgorithm
class mainly implements the collision checking using the costmap, grid-based indexing, and coordinate transformation related to costmap.PlannerCommonParam
-typed and algorithm-specific- type structs as inputs of the constructor. For example, AstarSearch
class's constructor takes both PlannerCommonParam
and AstarParam
.Building the package with ros-test and run tests:
colcon build --packages-select freespace_planning_algorithms\ncolcon test --packages-select freespace_planning_algorithms\n
Inside the test, simulation results are stored in /tmp/fpalgos-{algorithm_type}-case{scenario_number}
as a rosbag. Loading these resulting files, by using test/debug_plot.py, one can create plots visualizing the path and obstacles as shown in the figures below. The created figures are then again saved in /tmp
.
The black cells, green box, and red box, respectively, indicate obstacles, start configuration, and goal configuration. The sequence of the blue boxes indicate the solution path.
"},{"location":"planning/freespace_planning_algorithms/#license-notice","title":"License notice","text":"Files src/reeds_shepp.cpp
and include/astar_search/reeds_shepp.h
are fetched from pyReedsShepp. Note that the implementation in pyReedsShepp
is also heavily based on the code in ompl. Both pyReedsShepp
and ompl
are distributed under 3-clause BSD license.
Let us define \\(f(x)\\) as minimum cost of the path when path is constrained to pass through \\(x\\) (so path will be \\(x_{\\mathrm{start}} \\to \\mathrm{x} \\to \\mathrm{x_{\\mathrm{goal}}}\\)). Also, let us define \\(c_{\\mathrm{best}}\\) as the current minimum cost of the feasible paths. Let us define a set $ X(f) = \\left{ x \\in X | f(x) < c*{\\mathrm{best}} \\right} $. If we could sample a new point from \\(X_f\\) instead of \\(X\\) as in vanilla RRT*, chance that \\(c*{\\mathrm{best}}\\) is updated is increased, thus the convergence rate is improved.
In most case, \\(f(x)\\) is unknown, thus it is straightforward to approximate the function \\(f\\) by a heuristic function \\(\\hat{f}\\). A heuristic function is admissible if \\(\\forall x \\in X, \\hat{f}(x) < f(x)\\), which is sufficient condition of conversion to optimal path. The good heuristic function \\(\\hat{f}\\) has two properties: 1) it is an admissible tight lower bound of \\(f\\) and 2) sampling from \\(X(\\hat{f})\\) is easy.
According to Gammell et al [1], a good heuristic function when path is always straight is \\(\\hat{f}(x) = ||x_{\\mathrm{start}} - x|| + ||x - x_{\\mathrm{goal}}||\\). If we don't assume any obstacle information the heuristic is tightest. Also, \\(X(\\hat{f})\\) is hyper-ellipsoid, and hence sampling from it can be done analytically.
"},{"location":"planning/freespace_planning_algorithms/rrtstar/#modification-to-fit-reeds-sheep-path-case","title":"Modification to fit reeds-sheep path case","text":"In the vehicle case, state is \\(x = (x_{1}, x_{2}, \\theta)\\). Unlike normal informed-RRT* where we can connect path by a straight line, here we connect the vehicle path by a reeds-sheep path. So, we need some modification of the original algorithm a bit. To this end, one might first consider a heuristic function \\(\\hat{f}_{\\mathrm{RS}}(x) = \\mathrm{RS}(x_{\\mathrm{start}}, x) + \\mathrm{RS}(x, x_{\\mathrm{goal}}) < f(x)\\) where \\(\\mathrm{RS}\\) computes reeds-sheep distance. Though it is good in the sense of tightness, however, sampling from \\(X(\\hat{f}_{RS})\\) is really difficult. Therefore, we use \\(\\hat{f}_{euc} = ||\\mathrm{pos}(x_{\\mathrm{start}}) - \\mathrm{pos}(x)|| + ||\\mathrm{pos}(x)- \\mathrm{pos}(x_{\\mathrm{goal}})||\\), which is admissible because \\(\\forall x \\in X, \\hat{f}_{euc}(x) < \\hat{f}_{\\mathrm{RS}}(x) < f(x)\\). Here, \\(\\mathrm{pos}\\) function returns position \\((x_{1}, x_{2})\\) of the vehicle.
Sampling from \\(X(\\hat{f}_{\\mathrm{euc}})\\) is easy because \\(X(\\hat{f}_{\\mathrm{euc}}) = \\mathrm{Ellipse} \\times (-\\pi, \\pi]\\). Here \\(\\mathrm{Ellipse}\\)'s focal points are \\(x_{\\mathrm{start}}\\) and \\(x_{\\mathrm{goal}}\\) and conjugate diameters is $\\sqrt{c^{2}{\\mathrm{best}} - ||\\mathrm{pos}(x}) - \\mathrm{pos}(x_{\\mathrm{goal}}))|| } $ (similar to normal informed-rrtstar's ellipsoid). Please notice that \\(\\theta\\) can be arbitrary because \\(\\hat{f}_{\\mathrm{euc}}\\) is independent of \\(\\theta\\).
[1] Gammell et al., \"Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic.\" IROS (2014)
"},{"location":"planning/mission_planner/","title":"Mission Planner","text":""},{"location":"planning/mission_planner/#mission-planner","title":"Mission Planner","text":""},{"location":"planning/mission_planner/#purpose","title":"Purpose","text":"Mission Planner
calculates a route that navigates from the current ego pose to the goal pose following the given check points. The route is made of a sequence of lanes on a static map. Dynamic objects (e.g. pedestrians and other vehicles) and dynamic map information (e.g. road construction which blocks some lanes) are not considered during route planning. Therefore, the output topic is only published when the goal pose or check points are given and will be latched until the new goal pose or check points are given.
The core implementation does not depend on a map format. In current Autoware.universe, only Lanelet2 map format is supported.
"},{"location":"planning/mission_planner/#interfaces","title":"Interfaces","text":""},{"location":"planning/mission_planner/#parameters","title":"Parameters","text":"Name Type Descriptionmap_frame
string The frame name for map arrival_check_angle_deg
double Angle threshold for goal check arrival_check_distance
double Distance threshold for goal check arrival_check_duration
double Duration threshold for goal check goal_angle_threshold
double Max goal pose angle for goal approve enable_correct_goal_pose
bool Enabling correction of goal pose according to the closest lanelet orientation reroute_time_threshold
double If the time to the rerouting point at the current velocity is greater than this threshold, rerouting is possible minimum_reroute_length
double Minimum Length for publishing a new route consider_no_drivable_lanes
bool This flag is for considering no_drivable_lanes in planning or not."},{"location":"planning/mission_planner/#services","title":"Services","text":"Name Type Description /planning/mission_planning/clear_route
autoware_adapi_v1_msgs/srv/ClearRoute route clear request /planning/mission_planning/set_route_points
autoware_adapi_v1_msgs/srv/SetRoutePoints route request with pose waypoints. Assumed the vehicle is stopped. /planning/mission_planning/set_route
autoware_adapi_v1_msgs/srv/SetRoute route request with lanelet waypoints. Assumed the vehicle is stopped. /planning/mission_planning/change_route_points
autoware_adapi_v1_msgs/srv/SetRoutePoints route change request with pose waypoints. This can be called when the vehicle is moving. /planning/mission_planning/change_route
autoware_adapi_v1_msgs/srv/SetRoute route change request with lanelet waypoints. This can be called when the vehicle is moving. ~/srv/set_mrm_route
autoware_adapi_v1_msgs/srv/SetRoutePoints set emergency route. This can be called when the vehicle is moving. ~/srv/clear_mrm_route
std_srvs/srv/Trigger clear emergency route."},{"location":"planning/mission_planner/#subscriptions","title":"Subscriptions","text":"Name Type Description input/vector_map
autoware_auto_mapping_msgs/HADMapBin vector map of Lanelet2 input/modified_goal
geometry_msgs/PoseWithUuidStamped modified goal pose"},{"location":"planning/mission_planner/#publications","title":"Publications","text":"Name Type Description /planning/mission_planning/route_state
autoware_adapi_v1_msgs/msg/RouteState route state /planning/mission_planning/route
autoware_planning_msgs/LaneletRoute route debug/route_marker
visualization_msgs/msg/MarkerArray route marker for debug debug/goal_footprint
visualization_msgs/msg/MarkerArray goal footprint for debug"},{"location":"planning/mission_planner/#route-section","title":"Route section","text":"Route section, whose type is autoware_planning_msgs/LaneletSegment
, is a \"slice\" of a road that bundles lane changeable lanes. Note that the most atomic unit of route is autoware_auto_mapping_msgs/LaneletPrimitive
, which has the unique id of a lane in a vector map and its type. Therefore, route message does not contain geometric information about the lane since we did not want to have planning module\u2019s message to have dependency on map data structure.
The ROS message of route section contains following three elements for each route section.
preferred_primitive
: Preferred lane to follow towards the goal.primitives
: All neighbor lanes in the same direction including the preferred lane.The mission planner has control mechanism to validate the given goal pose and create a route. If goal pose angle between goal pose lanelet and goal pose' yaw is greater than goal_angle_threshold
parameter, the goal is rejected. Another control mechanism is the creation of a footprint of the goal pose according to the dimensions of the vehicle and checking whether this footprint is within the lanelets. If goal footprint exceeds lanelets, then the goal is rejected.
At the image below, there are sample goal pose validation cases.
"},{"location":"planning/mission_planner/#implementation","title":"Implementation","text":""},{"location":"planning/mission_planner/#mission-planner_1","title":"Mission Planner","text":"Two callbacks (goal and check points) are a trigger for route planning. Routing graph, which plans route in Lanelet2, must be created before those callbacks, and this routing graph is created in vector map callback.
plan route
is explained in detail in the following section.
plan route
is executed with check points including current ego pose and goal pose.
plan path between each check points
firstly calculates closest lanes to start and goal pose. Then routing graph of Lanelet2 plans the shortest path from start and goal pose.
initialize route lanelets
initializes route handler, and calculates route_lanelets
. route_lanelets
, all of which will be registered in route sections, are lanelets next to the lanelets in the planned path, and used when planning lane change. To calculate route_lanelets
,
route_lanelets
.candidate_lanelets
.candidate_lanelets
are route_lanelets
, the candidate_lanelet
is registered as route_lanelets
candidate_lanelet
(an adjacent lane) is not lane-changeable, we can pass the candidate_lanelet
without lane change if the following and previous lanelets of the candidate_lanelet
are route_lanelets
get preferred lanelets
extracts preferred_primitive
from route_lanelets
with the route handler.
create route sections
extracts primitives
from route_lanelets
for each route section with the route handler, and creates route sections.
Reroute here means changing the route while driving. Unlike route setting, it is required to keep a certain distance from vehicle to the point where the route is changed.
And there are three use cases that require reroute.
change_route_points
change_route
This is route change that the application makes using the API. It is used when changing the destination while driving or when driving a divided loop route. When the vehicle is driving on a MRM route, normal rerouting by this interface is not allowed.
"},{"location":"planning/mission_planner/#emergency-route","title":"Emergency route","text":"set_mrm_route
clear_mrm_route
This interface for the MRM that pulls over the road shoulder. It has to be stopped as soon as possible, so a reroute is required. The MRM route has priority over the normal route. And if MRM route is cleared, try to return to the normal route also with a rerouting safety check.
"},{"location":"planning/mission_planner/#goal-modification","title":"Goal modification","text":"modified_goal
This is a goal change to pull over, avoid parked vehicles, and so on by a planning component. If the modified goal is outside the calculated route, a reroute is required. This goal modification is executed by checking the local environment and path safety as the vehicle actually approaches the destination. And this modification is allowed for both normal_route and mrm_route. The new route generated here is sent to the AD API so that it can also be referenced by the application. Note, however, that the specifications here are subject to change in the future.
"},{"location":"planning/mission_planner/#rerouting-limitations","title":"Rerouting Limitations","text":"modified_goal
needs to be guaranteed by the behavior_path_planner, e.g., that it is not placed in the wrong lane, that it can be safely rerouted, etc.motion_velocity_smoother
outputs a desired velocity profile on a reference trajectory. This module plans a velocity profile within the limitations of the velocity, the acceleration and the jerk to realize both the maximization of velocity and the ride quality. We call this module motion_velocity_smoother
because the limitations of the acceleration and the jerk means the smoothness of the velocity profile.
For the point on the reference trajectory closest to the center of the rear wheel axle of the vehicle, it extracts the reference path between extract_behind_dist
behind and extract_ahead_dist
ahead.
It applies the velocity limit input from the external of motion_velocity_smoother
. Remark that the external velocity limit is different from the velocity limit already set on the map and the reference trajectory. The external velocity is applied at the position that it is able to reach the velocity limit with the deceleration and the jerk constraints set as the parameter.
It applies the velocity limit near the stopping point. This function is used to approach near the obstacle or improve the accuracy of stopping.
"},{"location":"planning/motion_velocity_smoother/#apply-lateral-acceleration-limit","title":"Apply lateral acceleration limit","text":"It applies the velocity limit to decelerate at the curve. It calculates the velocity limit from the curvature of the reference trajectory and the maximum lateral acceleration max_lateral_accel
. The velocity limit is set as not to fall under min_curve_velocity
.
Note: velocity limit that requests larger than nominal.jerk
is not applied. In other words, even if a sharp curve is planned just in front of the ego, no deceleration is performed.
It calculates the desired steering angles of trajectory points. and it applies the steering rate limit. If the (steering_angle_rate
> max_steering_angle_rate
), it decreases the velocity of the trajectory point to acceptable velocity.
It resamples the points on the reference trajectory with designated time interval. Note that the range of the length of the trajectory is set between min_trajectory_length
and max_trajectory_length
, and the distance between two points is longer than min_trajectory_interval_distance
. It samples densely up to the distance traveled between resample_time
with the current velocity, then samples sparsely after that. By sampling according to the velocity, both calculation load and accuracy are achieved since it samples finely at low velocity and coarsely at high velocity.
Calculate initial values for velocity planning. The initial values are calculated according to the situation as shown in the following table.
Situation Initial velocity Initial acceleration First calculation Current velocity 0.0 Engagingengage_velocity
engage_acceleration
Deviate between the planned velocity and the current velocity Current velocity Previous planned value Normal Previous planned value Previous planned value"},{"location":"planning/motion_velocity_smoother/#smooth-velocity","title":"Smooth velocity","text":"It plans the velocity. The algorithm of velocity planning is chosen from JerkFiltered
, L2
and Linf
, and it is set in the launch file. In these algorithms, they use OSQP[1] as the solver of the optimization.
It minimizes the sum of the minus of the square of the velocity and the square of the violation of the velocity limit, the acceleration limit and the jerk limit.
"},{"location":"planning/motion_velocity_smoother/#l2","title":"L2","text":"It minimizes the sum of the minus of the square of the velocity, the square of the the pseudo-jerk[2] and the square of the violation of the velocity limit and the acceleration limit.
"},{"location":"planning/motion_velocity_smoother/#linf","title":"Linf","text":"It minimizes the sum of the minus of the square of the velocity, the maximum absolute value of the the pseudo-jerk[2] and the square of the violation of the velocity limit and the acceleration limit.
"},{"location":"planning/motion_velocity_smoother/#post-process","title":"Post process","text":"It performs the post-process of the planned velocity.
max_velocity
post resampling
)After the optimization, a resampling called post resampling
is performed before passing the optimized trajectory to the next node. Since the required path interval from optimization may be different from the one for the next module, post resampling
helps to fill this gap. Therefore, in post resampling
, it is necessary to check the path specification of the following module to determine the parameters. Note that if the computational load of the optimization algorithm is high and the path interval is sparser than the path specification of the following module in the first resampling, post resampling
would resample the trajectory densely. On the other hand, if the computational load of the optimization algorithm is small and the path interval is denser than the path specification of the following module in the first resampling, the path is sparsely resampled according to the specification of the following module.
~/input/trajectory
autoware_auto_planning_msgs/Trajectory
Reference trajectory /planning/scenario_planning/max_velocity
std_msgs/Float32
External velocity limit [m/s] /localization/kinematic_state
nav_msgs/Odometry
Current odometry /tf
tf2_msgs/TFMessage
TF /tf_static
tf2_msgs/TFMessage
TF static"},{"location":"planning/motion_velocity_smoother/#output","title":"Output","text":"Name Type Description ~/output/trajectory
autoware_auto_planning_msgs/Trajectory
Modified trajectory /planning/scenario_planning/current_max_velocity
std_msgs/Float32
Current external velocity limit [m/s] ~/closest_velocity
std_msgs/Float32
Planned velocity closest to ego base_link (for debug) ~/closest_acceleration
std_msgs/Float32
Planned acceleration closest to ego base_link (for debug) ~/closest_jerk
std_msgs/Float32
Planned jerk closest to ego base_link (for debug) ~/debug/trajectory_raw
autoware_auto_planning_msgs/Trajectory
Extracted trajectory (for debug) ~/debug/trajectory_external_velocity_limited
autoware_auto_planning_msgs/Trajectory
External velocity limited trajectory (for debug) ~/debug/trajectory_lateral_acc_filtered
autoware_auto_planning_msgs/Trajectory
Lateral acceleration limit filtered trajectory (for debug) ~/debug/trajectory_steering_rate_limited
autoware_auto_planning_msgs/Trajectory
Steering angle rate limit filtered trajectory (for debug) ~/debug/trajectory_time_resampled
autoware_auto_planning_msgs/Trajectory
Time resampled trajectory (for debug) ~/distance_to_stopline
std_msgs/Float32
Distance to stop line from current ego pose (max 50 m) (for debug) ~/stop_speed_exceeded
std_msgs/Bool
It publishes true
if planned velocity on the point which the maximum velocity is zero is over threshold"},{"location":"planning/motion_velocity_smoother/#parameters","title":"Parameters","text":""},{"location":"planning/motion_velocity_smoother/#constraint-parameters","title":"Constraint parameters","text":"Name Type Description Default value max_velocity
double
Max velocity limit [m/s] 20.0 max_accel
double
Max acceleration limit [m/ss] 1.0 min_decel
double
Min deceleration limit [m/ss] -0.5 stop_decel
double
Stop deceleration value at a stop point [m/ss] 0.0 max_jerk
double
Max jerk limit [m/sss] 1.0 min_jerk
double
Min jerk limit [m/sss] -0.5"},{"location":"planning/motion_velocity_smoother/#external-velocity-limit-parameter","title":"External velocity limit parameter","text":"Name Type Description Default value margin_to_insert_external_velocity_limit
double
margin distance to insert external velocity limit [m] 0.3"},{"location":"planning/motion_velocity_smoother/#curve-parameters","title":"Curve parameters","text":"Name Type Description Default value enable_lateral_acc_limit
bool
To toggle the lateral acceleration filter on and off. You can switch it dynamically at runtime. true max_lateral_accel
double
Max lateral acceleration limit [m/ss] 0.5 min_curve_velocity
double
Min velocity at lateral acceleration limit [m/ss] 2.74 decel_distance_before_curve
double
Distance to slowdown before a curve for lateral acceleration limit [m] 3.5 decel_distance_after_curve
double
Distance to slowdown after a curve for lateral acceleration limit [m] 2.0 min_decel_for_lateral_acc_lim_filter
double
Deceleration limit to avoid sudden braking by the lateral acceleration filter [m/ss]. Strong limitation degrades the deceleration response to the appearance of sharp curves due to obstacle avoidance, etc. -2.5"},{"location":"planning/motion_velocity_smoother/#engage-replan-parameters","title":"Engage & replan parameters","text":"Name Type Description Default value replan_vel_deviation
double
Velocity deviation to replan initial velocity [m/s] 5.53 engage_velocity
double
Engage velocity threshold [m/s] (if the trajectory velocity is higher than this value, use this velocity for engage vehicle speed) 0.25 engage_acceleration
double
Engage acceleration [m/ss] (use this acceleration when engagement) 0.1 engage_exit_ratio
double
Exit engage sequence to normal velocity planning when the velocity exceeds engage_exit_ratio x engage_velocity. 0.5 stop_dist_to_prohibit_engage
double
If the stop point is in this distance, the speed is set to 0 not to move the vehicle [m] 0.5"},{"location":"planning/motion_velocity_smoother/#stopping-velocity-parameters","title":"Stopping velocity parameters","text":"Name Type Description Default value stopping_velocity
double
change target velocity to this value before v=0 point [m/s] 2.778 stopping_distance
double
distance for the stopping_velocity [m]. 0 means the stopping velocity is not applied. 0.0"},{"location":"planning/motion_velocity_smoother/#extraction-parameters","title":"Extraction parameters","text":"Name Type Description Default value extract_ahead_dist
double
Forward trajectory distance used for planning [m] 200.0 extract_behind_dist
double
backward trajectory distance used for planning [m] 5.0 delta_yaw_threshold
double
Allowed delta yaw between ego pose and trajectory pose [radian] 1.0472"},{"location":"planning/motion_velocity_smoother/#resampling-parameters","title":"Resampling parameters","text":"Name Type Description Default value max_trajectory_length
double
Max trajectory length for resampling [m] 200.0 min_trajectory_length
double
Min trajectory length for resampling [m] 30.0 resample_time
double
Resample total time [s] 10.0 dense_dt
double
resample time interval for dense sampling [s] 0.1 dense_min_interval_distance
double
minimum points-interval length for dense sampling [m] 0.1 sparse_dt
double
resample time interval for sparse sampling [s] 0.5 sparse_min_interval_distance
double
minimum points-interval length for sparse sampling [m] 4.0"},{"location":"planning/motion_velocity_smoother/#resampling-parameters-for-post-process","title":"Resampling parameters for post process","text":"Name Type Description Default value post_max_trajectory_length
double
max trajectory length for resampling [m] 300.0 post_min_trajectory_length
double
min trajectory length for resampling [m] 30.0 post_resample_time
double
resample total time for dense sampling [s] 10.0 post_dense_dt
double
resample time interval for dense sampling [s] 0.1 post_dense_min_interval_distance
double
minimum points-interval length for dense sampling [m] 0.1 post_sparse_dt
double
resample time interval for sparse sampling [s] 0.1 post_sparse_min_interval_distance
double
minimum points-interval length for sparse sampling [m] 1.0"},{"location":"planning/motion_velocity_smoother/#limit-steering-angle-rate-parameters","title":"Limit steering angle rate parameters","text":"Name Type Description Default value enable_steering_rate_limit
bool
Enables the steering rate filter; it can be toggled dynamically at runtime. true max_steering_angle_rate
double
Maximum steering angle rate [degree/s] 40.0 resample_ds
double
Distance between trajectory points [m] 0.1 curvature_threshold
double
If curvature > curvature_threshold, steeringRateLimit is triggered [1/m] 0.02 curvature_calculation_distance
double
Distance between points used for the curvature calculation [m] 1.0"},{"location":"planning/motion_velocity_smoother/#weights-for-optimization","title":"Weights for optimization","text":""},{"location":"planning/motion_velocity_smoother/#jerkfiltered_1","title":"JerkFiltered","text":"Name Type Description Default value jerk_weight
double
Weight for \"smoothness\" cost for jerk 10.0 over_v_weight
double
Weight for \"over speed limit\" cost 100000.0 over_a_weight
double
Weight for \"over accel limit\" cost 5000.0 over_j_weight
double
Weight for \"over jerk limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/#l2_1","title":"L2","text":"Name Type Description Default value pseudo_jerk_weight
double
Weight for \"smoothness\" cost 100.0 over_v_weight
double
Weight for \"over speed limit\" cost 100000.0 over_a_weight
double
Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/#linf_1","title":"Linf","text":"Name Type Description Default value pseudo_jerk_weight
double
Weight for \"smoothness\" cost 100.0 over_v_weight
double
Weight for \"over speed limit\" cost 100000.0 over_a_weight
double
Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/#others","title":"Others","text":"Name Type Description Default value over_stop_velocity_warn_thr
double
Threshold to judge that the optimized velocity exceeds the input velocity on the stop point [m/s] 1.389"},{"location":"planning/motion_velocity_smoother/#assumptions-known-limits","title":"Assumptions / Known limits","text":"[1] B. Stellato, et al., \"OSQP: an operator splitting solver for quadratic programs\", Mathematical Programming Computation, 2020, 10.1007/s12532-020-00179-2.
[2] Y. Zhang, et al., \"Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization\", Sensors, vol. 18, no. 7, p. 2185, 2018, 10.3390/s18072185
"},{"location":"planning/motion_velocity_smoother/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/motion_velocity_smoother/README.ja/","title":"Motion Velocity Smoother","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#motion-velocity-smoother","title":"Motion Velocity Smoother","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#purpose","title":"Purpose","text":"motion_velocity_smoother
plans the desired velocity at each point on the reference trajectory and outputs it. To achieve both high velocity and good ride comfort, the velocity is planned within pre-specified limits on velocity, acceleration, and jerk. Since limiting the acceleration and jerk corresponds to smoothing the velocity change, this module is called the motion_velocity_smoother.
Extract the reference path over the range from extract_behind_dist behind to extract_ahead_dist ahead of the point on the reference path closest to the center of the ego vehicle's rear wheel axle.
Apply the velocity limit specified from outside the module. The external velocity limit handled here is passed via the /planning/scenario_planning/max_velocity topic, and it is separate from the velocity limits already set on the reference path, such as speed limits defined on the map. The external velocity limit is applied from a position where the vehicle can decelerate to it within the deceleration and jerk limits given by the parameters.
Set the velocity for approaching a stop point. This is used, for example, when approaching close to an obstacle or to improve the accuracy of stopping at a target position.
"},{"location":"planning/motion_velocity_smoother/README.ja/#apply-lateral-acceleration-limit","title":"Apply lateral acceleration limit","text":"\u7d4c\u8def\u306e\u66f2\u7387\u306b\u5fdc\u3058\u3066\u3001\u6307\u5b9a\u3055\u308c\u305f\u6700\u5927\u6a2a\u52a0\u901f\u5ea6max_lateral_accel
\u3092\u8d85\u3048\u306a\u3044\u901f\u5ea6\u3092\u5236\u9650\u901f\u5ea6\u3068\u3057\u3066\u8a2d\u5b9a\u3059\u308b\u3002\u305f\u3060\u3057\u3001\u5236\u9650\u901f\u5ea6\u306fmin_curve_velocity
\u3092\u4e0b\u56de\u3089\u306a\u3044\u3088\u3046\u306b\u8a2d\u5b9a\u3059\u308b\u3002
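Concretely, since the lateral acceleration at velocity \(v\) on a curve of curvature \(\kappa\) is \(v^2 \kappa\), the velocity limit can be sketched as
\[ v_{\mathrm{limit}} = \max \left( \sqrt{\frac{a_{\mathrm{lat, max}}}{\kappa}}, v_{\mathrm{min\_curve}} \right), \]
where \(a_{\mathrm{lat, max}}\) corresponds to max_lateral_accel and \(v_{\mathrm{min\_curve}}\) to min_curve_velocity (a derivation from the stated constraints; the actual implementation may differ in detail).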
Resample the trajectory points at the specified time interval. The resampling keeps the total trajectory length between min_trajectory_length and max_trajectory_length, and keeps the point interval from becoming smaller than min_trajectory_interval_distance. Points are sampled densely up to the distance traveled within resample_time at the current velocity, and sparsely beyond that. Sampling this way makes the points dense at low speed and sparse at high speed, balancing stopping accuracy against computational load.
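A minimal sketch of this time-based resampling, with hypothetical helper names (not the actual implementation):

#include <algorithm>
#include <vector>

// Sample densely (interval = v * dense_dt, but at least dense_min_interval)
// up to the distance traveled within `resample_time` at the current velocity,
// then sparsely (interval = v * sparse_dt, but at least sparse_min_interval).
std::vector<double> calcResampleArcLengths(
  const double current_vel, const double resample_time, const double dense_dt,
  const double dense_min_interval, const double sparse_dt,
  const double sparse_min_interval, const double total_length)
{
  std::vector<double> arc_lengths{0.0};
  const double dense_length = current_vel * resample_time;
  double s = 0.0;
  while (s < total_length) {
    const bool is_dense = s < dense_length;
    const double interval = std::max(
      current_vel * (is_dense ? dense_dt : sparse_dt),
      is_dense ? dense_min_interval : sparse_min_interval);
    s = std::min(s + interval, total_length);
    arc_lengths.push_back(s);
  }
  return arc_lengths;
}

At low speed the time-based interval shrinks, so the minimum-interval floor keeps the points dense; at high speed the interval grows and the points become sparse, matching the behavior described above.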
Calculate the initial values for the velocity planning. The initial values are determined by the situation, as in the following table.
Situation Initial velocity Initial acceleration First calculation Current velocity 0.0 Engaging engage_velocity engage_acceleration Deviation between current and planned velocity Current velocity Previous planned value Normal Previous planned value Previous planned value"},{"location":"planning/motion_velocity_smoother/README.ja/#smooth-velocity","title":"Smooth velocity","text":"Plan the velocity. The velocity planning algorithm is selected in the config from three types: JerkFiltered, L2, and Linf. OSQP [1] is used as the optimization solver."},{"location":"planning/motion_velocity_smoother/README.ja/#jerkfiltered","title":"JerkFiltered","text":"
Minimize the sum of the squared velocity (expressed as a negative value, since the problem is posed as a minimization), the squared deviation over the velocity limit, the squared deviation over the acceleration limit, the squared deviation over the jerk limit, and the squared jerk."},{"location":"planning/motion_velocity_smoother/README.ja/#l2","title":"L2","text":"Minimize the sum of the squared velocity (expressed as a negative value for minimization), the squared deviation over the velocity limit, the squared deviation over the acceleration limit, and the squared pseudo-jerk [2]."},{"location":"planning/motion_velocity_smoother/README.ja/#linf","title":"Linf","text":"Minimize the sum of the squared velocity (expressed as a negative value for minimization), the squared deviation over the velocity limit, and the squared deviation over the acceleration limit, plus the maximum absolute value of the pseudo-jerk [2]."},{"location":"planning/motion_velocity_smoother/README.ja/#post-process","title":"Post process","text":"Perform post-processing on the planned trajectory.
Set the velocity to be at most max_velocity.
Resample the trajectory (post resampling).
After the optimization finishes, the resampling called post resampling is performed before passing the trajectory to the next node. Resampling is needed again here because the trajectory interval required before the optimization does not necessarily match the interval required by the subsequent modules, and this step fills that gap. Accordingly, the parameters of post resampling must be chosen with the trajectory specification of the subsequent modules in mind. When the computational load of the optimization algorithm is high and the first resampling leaves the trajectory interval sparser than the subsequent modules' specification, post resampling resamples the trajectory densely; conversely, when the computational load is low and the first resampling leaves the interval denser than the subsequent specification, post resampling resamples the trajectory sparsely to match that specification.
~/input/trajectory
autoware_auto_planning_msgs/Trajectory
Reference trajectory /planning/scenario_planning/max_velocity
std_msgs/Float32
External velocity limit [m/s] /localization/kinematic_state
nav_msgs/Odometry
Current odometry /tf
tf2_msgs/TFMessage
TF /tf_static
tf2_msgs/TFMessage
TF static"},{"location":"planning/motion_velocity_smoother/README.ja/#output","title":"Output","text":"Name Type Description ~/output/trajectory
autoware_auto_planning_msgs/Trajectory
Modified trajectory /planning/scenario_planning/current_max_velocity
std_msgs/Float32
Current external velocity limit [m/s] ~/closest_velocity
std_msgs/Float32
Planned velocity closest to ego base_link (for debug) ~/closest_acceleration
std_msgs/Float32
Planned acceleration closest to ego base_link (for debug) ~/closest_jerk
std_msgs/Float32
Planned jerk closest to ego base_link (for debug) ~/debug/trajectory_raw
autoware_auto_planning_msgs/Trajectory
Extracted trajectory (for debug) ~/debug/trajectory_external_velocity_limited
autoware_auto_planning_msgs/Trajectory
External velocity limited trajectory (for debug) ~/debug/trajectory_lateral_acc_filtered
autoware_auto_planning_msgs/Trajectory
Lateral acceleration limit filtered trajectory (for debug) ~/debug/trajectory_time_resampled
autoware_auto_planning_msgs/Trajectory
Time resampled trajectory (for debug) ~/distance_to_stopline
std_msgs/Float32
Distance to stop line from current ego pose (max 50 m) (for debug) ~/stop_speed_exceeded
std_msgs/Bool
It publishes true
if the planned velocity at the point whose maximum velocity is zero exceeds the threshold"},{"location":"planning/motion_velocity_smoother/README.ja/#parameters","title":"Parameters","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#constraint-parameters","title":"Constraint parameters","text":"Name Type Description Default value max_velocity
double
Max velocity limit [m/s] 20.0 max_accel
double
Max acceleration limit [m/ss] 1.0 min_decel
double
Min deceleration limit [m/ss] -0.5 stop_decel
double
Stop deceleration value at a stop point [m/ss] 0.0 max_jerk
double
Max jerk limit [m/sss] 1.0 min_jerk
double
Min jerk limit [m/sss] -0.5"},{"location":"planning/motion_velocity_smoother/README.ja/#external-velocity-limit-parameter","title":"External velocity limit parameter","text":"Name Type Description Default value margin_to_insert_external_velocity_limit
double
margin distance to insert external velocity limit [m] 0.3"},{"location":"planning/motion_velocity_smoother/README.ja/#curve-parameters","title":"Curve parameters","text":"Name Type Description Default value max_lateral_accel
double
Max lateral acceleration limit [m/ss] 0.5 min_curve_velocity
double
Min velocity at lateral acceleration limit [m/s] 2.74 decel_distance_before_curve
double
Distance over which to slow down before a curve for the lateral acceleration limit [m] 3.5 decel_distance_after_curve
double
Distance over which to slow down after a curve for the lateral acceleration limit [m] 2.0"},{"location":"planning/motion_velocity_smoother/README.ja/#engage-replan-parameters","title":"Engage & replan parameters","text":"Name Type Description Default value replan_vel_deviation
double
Velocity deviation to replan initial velocity [m/s] 5.53 engage_velocity
double
Engage velocity threshold [m/s] (if the trajectory velocity is higher than this value, use this velocity for engage vehicle speed) 0.25 engage_acceleration
double
Engage acceleration [m/ss] (use this acceleration when engagement) 0.1 engage_exit_ratio
double
Exit engage sequence to normal velocity planning when the velocity exceeds engage_exit_ratio x engage_velocity. 0.5 stop_dist_to_prohibit_engage
double
If the stop point is within this distance, the speed is set to 0 so that the vehicle does not move [m] 0.5"},{"location":"planning/motion_velocity_smoother/README.ja/#stopping-velocity-parameters","title":"Stopping velocity parameters","text":"Name Type Description Default value stopping_velocity
double
change target velocity to this value before v=0 point [m/s] 2.778 stopping_distance
double
distance for the stopping_velocity [m]. 0 means the stopping velocity is not applied. 0.0"},{"location":"planning/motion_velocity_smoother/README.ja/#extraction-parameters","title":"Extraction parameters","text":"Name Type Description Default value extract_ahead_dist
double
Forward trajectory distance used for planning [m] 200.0 extract_behind_dist
double
backward trajectory distance used for planning [m] 5.0 delta_yaw_threshold
double
Allowed delta yaw between ego pose and trajectory pose [radian] 1.0472"},{"location":"planning/motion_velocity_smoother/README.ja/#resampling-parameters","title":"Resampling parameters","text":"Name Type Description Default value max_trajectory_length
double
Max trajectory length for resampling [m] 200.0 min_trajectory_length
double
Min trajectory length for resampling [m] 30.0 resample_time
double
Resample total time [s] 10.0 dense_resample_dt
double
resample time interval for dense sampling [s] 0.1 dense_min_interval_distance
double
minimum points-interval length for dense sampling [m] 0.1 sparse_resample_dt
double
resample time interval for sparse sampling [s] 0.5 sparse_min_interval_distance
double
minimum points-interval length for sparse sampling [m] 4.0"},{"location":"planning/motion_velocity_smoother/README.ja/#resampling-parameters-for-post-process","title":"Resampling parameters for post process","text":"Name Type Description Default value post_max_trajectory_length
double
max trajectory length for resampling [m] 300.0 post_min_trajectory_length
double
min trajectory length for resampling [m] 30.0 post_resample_time
double
resample total time for dense sampling [s] 10.0 post_dense_resample_dt
double
resample time interval for dense sampling [s] 0.1 post_dense_min_interval_distance
double
minimum points-interval length for dense sampling [m] 0.1 post_sparse_resample_dt
double
resample time interval for sparse sampling [s] 0.1 post_sparse_min_interval_distance
double
minimum points-interval length for sparse sampling [m] 1.0"},{"location":"planning/motion_velocity_smoother/README.ja/#weights-for-optimization","title":"Weights for optimization","text":""},{"location":"planning/motion_velocity_smoother/README.ja/#jerkfiltered_1","title":"JerkFiltered","text":"Name Type Description Default value jerk_weight
double
Weight for \"smoothness\" cost for jerk 10.0 over_v_weight
double
Weight for \"over speed limit\" cost 100000.0 over_a_weight
double
Weight for \"over accel limit\" cost 5000.0 over_j_weight
double
Weight for \"over jerk limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/README.ja/#l2_1","title":"L2","text":"Name Type Description Default value pseudo_jerk_weight
double
Weight for \"smoothness\" cost 100.0 over_v_weight
double
Weight for \"over speed limit\" cost 100000.0 over_a_weight
double
Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/README.ja/#linf_1","title":"Linf","text":"Name Type Description Default value pseudo_jerk_weight
double
Weight for \"smoothness\" cost 100.0 over_v_weight
double
Weight for \"over speed limit\" cost 100000.0 over_a_weight
double
Weight for \"over accel limit\" cost 1000.0"},{"location":"planning/motion_velocity_smoother/README.ja/#others","title":"Others","text":"Name Type Description Default value over_stop_velocity_warn_thr
double
Threshold to judge that the optimized velocity exceeds the input velocity on the stop point [m/s] 1.389"},{"location":"planning/motion_velocity_smoother/README.ja/#assumptions-known-limits","title":"Assumptions / Known limits","text":"[1] B. Stellato, et al., \"OSQP: an operator splitting solver for quadratic programs\", Mathematical Programming Computation, 2020, 10.1007/s12532-020-00179-2.
[2] Y. Zhang, et al., \"Toward a More Complete, Flexible, and Safer Speed Planning for Autonomous Driving via Convex Optimization\", Sensors, vol. 18, no. 7, p. 2185, 2018, 10.3390/s18072185
"},{"location":"planning/motion_velocity_smoother/README.ja/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"planning/objects_of_interest_marker_interface/","title":"Objects Of Interest Marker Interface","text":""},{"location":"planning/objects_of_interest_marker_interface/#objects-of-interest-marker-interface","title":"Objects Of Interest Marker Interface","text":"Warning
Under Construction
"},{"location":"planning/objects_of_interest_marker_interface/#purpose","title":"Purpose","text":""},{"location":"planning/objects_of_interest_marker_interface/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/objects_of_interest_marker_interface/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/objects_of_interest_marker_interface/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/objects_of_interest_marker_interface/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/obstacle_avoidance_planner/","title":"Obstacle Avoidance Planner","text":""},{"location":"planning/obstacle_avoidance_planner/#obstacle-avoidance-planner","title":"Obstacle Avoidance Planner","text":""},{"location":"planning/obstacle_avoidance_planner/#purpose","title":"Purpose","text":"This package generates a trajectory that is kinematically-feasible to drive and collision-free based on the input path, drivable area. Only position and orientation of trajectory are updated in this module, and velocity is just taken over from the one in the input path.
"},{"location":"planning/obstacle_avoidance_planner/#feature","title":"Feature","text":"This package is able to
Note that the velocity is just taken over from the input path.
"},{"location":"planning/obstacle_avoidance_planner/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/obstacle_avoidance_planner/#input","title":"input","text":"Name Type Description~/input/path
autoware_auto_planning_msgs/msg/Path Reference path and the corresponding drivable area ~/input/odometry
nav_msgs/msg/Odometry Current Velocity of ego vehicle"},{"location":"planning/obstacle_avoidance_planner/#output","title":"output","text":"Name Type Description ~/output/trajectory
autoware_auto_planning_msgs/msg/Trajectory Optimized trajectory that is feasible to drive and collision-free"},{"location":"planning/obstacle_avoidance_planner/#flowchart","title":"Flowchart","text":"Flowchart of functions is explained here.
"},{"location":"planning/obstacle_avoidance_planner/#createplannerdata","title":"createPlannerData","text":"The following data for planning is created.
struct PlannerData\n{\n// input\nHeader header;\nstd::vector<TrajectoryPoint> traj_points; // converted from the input path\nstd::vector<geometry_msgs::msg::Point> left_bound;\nstd::vector<geometry_msgs::msg::Point> right_bound;\n\n// ego\ngeometry_msgs::msg::Pose ego_pose;\ndouble ego_vel;\n};\n
"},{"location":"planning/obstacle_avoidance_planner/#check-replan","title":"check replan","text":"When one of the following conditions are met, trajectory optimization will be executed. Otherwise, previously optimized trajectory is used with updating the velocity from the latest input path.
max_path_shape_around_ego_lat_dist
replan.max_ego_moving_dist
in one cycle. (default: 3.0 [m])replan.max_goal_moving_dist
in one cycle. (default: 15.0 [ms])replan.max_path_shape_around_ego_lat_dist
in one cycle. (default: 2.0)This module makes the trajectory kinematically-feasible and collision-free. We define vehicle pose in the frenet coordinate, and minimize tracking errors by optimization. This optimization considers vehicle kinematics and collision checking with road boundary and obstacles. To decrease the computation cost, the optimization is applied to the shorter trajectory (default: 50 [m]) than the whole trajectory, and concatenate the remained trajectory with the optimized one at last.
The trajectory just in front of the ego must not be changed a lot so that the steering wheel will be stable. Therefore, we use the previously generated trajectory in front of the ego.
Optimization center on the vehicle, that tries to locate just on the trajectory, can be tuned along side the vehicle vertical axis. This parameter mpt.kinematics.optimization center offset
is defined as the signed length from the back-wheel center to the optimization center. Some examples are shown in the following figure, and it is shown that the trajectory of vehicle shape differs according to the optimization center even if the reference trajectory (green one) is the same.
More details can be seen here.
"},{"location":"planning/obstacle_avoidance_planner/#applyinputvelocity","title":"applyInputVelocity","text":"Velocity is assigned in the optimized trajectory from the velocity in the behavior path. The shapes of the optimized trajectory and the path are different, therefore the each nearest trajectory point to the path is searched and the velocity is interpolated with zero-order hold.
"},{"location":"planning/obstacle_avoidance_planner/#insertzerovelocityoutsidedrivablearea","title":"insertZeroVelocityOutsideDrivableArea","text":"Optimized trajectory is too short for velocity planning, therefore extend the trajectory by concatenating the optimized trajectory and the behavior path considering drivability. Generated trajectory is checked if it is inside the drivable area or not, and if outside drivable area, output a trajectory inside drivable area with the behavior path or the previously generated trajectory.
As described above, the behavior path is separated into two paths: one is for optimization and the other is the remain. The first path becomes optimized trajectory, and the second path just is transformed to a trajectory. Then a trajectory inside the drivable area is calculated as follows.
Optimization failure is dealt with the same as if the optimized trajectory is outside the drivable area. The output trajectory is memorized as a previously generated trajectory for the next cycle.
Rationale In the current design, since there are some modelling errors, the constraints are considered to be soft constraints. Therefore, we have to make sure that the optimized trajectory is inside the drivable area or not after optimization.
"},{"location":"planning/obstacle_avoidance_planner/#limitation","title":"Limitation","text":"behavior_path_planner
and obstacle_avoidance_planner
are not decided clearly. Both can avoid obstacles.Trajectory planning problem that satisfies kinematically-feasibility and collision-free has two main characteristics that makes hard to be solved: one is non-convex and the other is high dimension. Based on the characteristics, we investigate pros/cons of the typical planning methods: optimization-based, sampling-based, and learning-based method.
"},{"location":"planning/obstacle_avoidance_planner/#optimization-based-method","title":"Optimization-based method","text":"Based on these pros/cons, we chose the optimization-based planner first. Although it has a cons to converge to the local minima, it can get a good solution by the preprocessing to approximate the problem to convex that almost equals to the original non-convex problem.
"},{"location":"planning/obstacle_avoidance_planner/#how-to-tune-parameters","title":"How to Tune Parameters","text":""},{"location":"planning/obstacle_avoidance_planner/#drivability-in-narrow-roads","title":"Drivability in narrow roads","text":"mpt.clearance.soft_clearance_from_road
modify mpt.kinematics.optimization_center_offset
mpt.weight.steer_input_weight
or mpt.weight.steer_rate_weight
larger, which are stability of steering wheel along the trajectory.option.enable_skip_optimization
skips MPT optimization.option.enable_calculation_time_info
enables showing each calculation time for functions and total calculation time on the terminal.option.enable_outside_drivable_area_stop
enables stopping just before the generated trajectory point will be outside the drivable area.How to debug can be seen here.
"},{"location":"planning/obstacle_avoidance_planner/docs/debug/","title":"Debug","text":""},{"location":"planning/obstacle_avoidance_planner/docs/debug/#debug","title":"Debug","text":""},{"location":"planning/obstacle_avoidance_planner/docs/debug/#debug-visualization","title":"Debug visualization","text":"The visualization markers of the planning flow (Input, Model Predictive Trajectory, and Output) are explained here.
All the following markers can be visualized by
ros2 launch obstacle_avoidance_planner launch_visualiation.launch.xml vehilce_model:=sample_vehicle\n
The vehicle_model
must be specified to make footprints with vehicle's size.
behavior
planner.behavior
planner is converted to footprints.behavior
planner does not support it.behavior
planner.obstacle_avoidance_planner
will try to make the trajectory fully inside the drivable area.behavior
planner, please make sure that the drivable area is expanded correctly.obstacle_avoidance_planner
will try to make the these circles inside the above boundaries' width.The obstacle_avoidance_planner
consists of many functions such as boundaries' width calculation, collision-free planning, etc. We can see the calculation time for each function as follows.
Enable option.enable_calculation_time_info
or echo the topic as follows.
$ ros2 topic echo /planning/scenario_planning/lane_driving/motion_planning/obstacle_avoidance_planner/debug/calculation_time --field data\n---\n insertFixedPoint:= 0.008 [ms]\ngetPaddedTrajectoryPoints:= 0.002 [ms]\nupdateConstraint:= 0.741 [ms]\noptimizeTrajectory:= 0.101 [ms]\nconvertOptimizedPointsToTrajectory:= 0.014 [ms]\ngetEBTrajectory:= 0.991 [ms]\nresampleReferencePoints:= 0.058 [ms]\nupdateFixedPoint:= 0.237 [ms]\nupdateBounds:= 0.22 [ms]\nupdateVehicleBounds:= 0.509 [ms]\ncalcReferencePoints:= 1.649 [ms]\ncalcMatrix:= 0.209 [ms]\ncalcValueMatrix:= 0.015 [ms]\ncalcObjectiveMatrix:= 0.305 [ms]\ncalcConstraintMatrix:= 0.641 [ms]\ninitOsqp:= 6.896 [ms]\nsolveOsqp:= 2.796 [ms]\ncalcOptimizedSteerAngles:= 9.856 [ms]\ncalcMPTPoints:= 0.04 [ms]\ngetModelPredictiveTrajectory:= 12.782 [ms]\noptimizeTrajectory:= 12.981 [ms]\napplyInputVelocity:= 0.577 [ms]\ninsertZeroVelocityOutsideDrivableArea:= 0.81 [ms]\ngetDebugMarker:= 0.684 [ms]\npublishDebugMarker:= 4.354 [ms]\npublishDebugMarkerOfOptimization:= 5.047 [ms]\ngenerateOptimizedTrajectory:= 20.374 [ms]\nextendTrajectory:= 0.326 [ms]\npublishDebugData:= 0.008 [ms]\nonPath:= 20.737 [ms]\n
"},{"location":"planning/obstacle_avoidance_planner/docs/debug/#plot","title":"Plot","text":"With the following script, any calculation time of the above functions can be plot.
ros2 run obstacle_avoidance_planner calculation_time_plotter.py\n
You can specify functions to plot with the -f
option.
ros2 run obstacle_avoidance_planner calculation_time_plotter.py -f \"onPath, generateOptimizedTrajectory, calcReferencePoints\"\n
"},{"location":"planning/obstacle_avoidance_planner/docs/debug/#qa-for-debug","title":"Q&A for Debug","text":""},{"location":"planning/obstacle_avoidance_planner/docs/debug/#the-output-frequency-is-low","title":"The output frequency is low","text":"Check the function which is comparatively heavy according to this information.
For your information, the following functions for optimization and its initialization may be heavy in some complicated cases.
initOsqp
solveOsqp
Some of the following may have an issue. Please check if there is something weird by the visualization.
Some of the following may have an issue. Please check if there is something weird by the visualization.
Model Predictive Trajectory (MPT) calculates the trajectory that meets the following conditions.
Conditions for collision free is considered to be not hard constraints but soft constraints. When the optimization failed or the optimized trajectory is not collision free, the output trajectory will be previously generated trajectory.
Trajectory near the ego must be stable, therefore the condition where trajectory points near the ego are the same as previously generated trajectory is considered, and this is the only hard constraints in MPT.
"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#flowchart","title":"Flowchart","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#vehicle-kinematics","title":"Vehicle kinematics","text":"As the following figure, we consider the bicycle kinematics model in the frenet frame to track the reference path. At time step \\(k\\), we define lateral distance to the reference path, heading angle against the reference path, and steer angle as \\(y_k\\), \\(\\theta_k\\), and \\(\\delta_k\\) respectively.
Assuming that the commanded steer angle is \\(\\delta_{des, k}\\), the kinematics model in the frenet frame is formulated as follows. We also assume that the steer angle \\(\\delta_k\\) is first-order lag to the commanded one.
\\[ \\begin{align} y_{k+1} & = y_{k} + v \\sin \\theta_k dt \\\\ \\theta_{k+1} & = \\theta_k + \\frac{v \\tan \\delta_k}{L}dt - \\kappa_k v \\cos \\theta_k dt \\\\ \\delta_{k+1} & = \\delta_k - \\frac{\\delta_k - \\delta_{des,k}}{\\tau}dt \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#linearization","title":"Linearization","text":"Then we linearize these equations. \\(y_k\\) and \\(\\theta_k\\) are tracking errors, so we assume that those are small enough. Therefore \\(\\sin \\theta_k \\approx \\theta_k\\).
Since \\(\\delta_k\\) is a steer angle, it is not always small. By using a reference steer angle \\(\\delta_{\\mathrm{ref}, k}\\) calculated by the reference path curvature \\(\\kappa_k\\), we express \\(\\delta_k\\) with a small value \\(\\Delta \\delta_k\\).
Note that the steer angle \\(\\delta_k\\) is within the steer angle limitation \\(\\delta_{\\max}\\). When the reference steer angle \\(\\delta_{\\mathrm{ref}, k}\\) is larger than the steer angle limitation \\(\\delta_{\\max}\\), and \\(\\delta_{\\mathrm{ref}, k}\\) is used to linearize the steer angle, \\(\\Delta \\delta_k\\) is \\(\\Delta \\delta_k = \\delta - \\delta_{\\mathrm{ref}, k} = \\delta_{\\max} - \\delta_{\\mathrm{ref}, k}\\), and the absolute \\(\\Delta \\delta_k\\) gets larger. Therefore, we have to apply the steer angle limitation to \\(\\delta_{\\mathrm{ref}, k}\\) as well.
\\[ \\begin{align} \\delta_{\\mathrm{ref}, k} & = \\mathrm{clamp}(\\arctan(L \\kappa_k), -\\delta_{\\max}, \\delta_{\\max}) \\\\ \\delta_k & = \\delta_{\\mathrm{ref}, k} + \\Delta \\delta_k, \\ \\Delta \\delta_k \\ll 1 \\\\ \\end{align} \\]\\(\\mathrm{clamp}(v, v_{\\min}, v_{\\max})\\) is a function to convert \\(v\\) to be larger than \\(v_{\\min}\\) and smaller than \\(v_{\\max}\\).
Using this \\(\\delta_{\\mathrm{ref}, k}\\), \\(\\tan \\delta_k\\) is linearized as follows.
\\[ \\begin{align} \\tan \\delta_k & \\approx \\tan \\delta_{\\mathrm{ref}, k} + \\left.\\frac{d \\tan \\delta}{d \\delta}\\right|_{\\delta = \\delta_{\\mathrm{ref}, k}} \\Delta \\delta_k \\\\ & = \\tan \\delta_{\\mathrm{ref}, k} + \\left.\\frac{d \\tan \\delta}{d \\delta}\\right|_{\\delta = \\delta_{\\mathrm{ref}, k}} (\\delta_{\\mathrm{ref}, k} - \\delta_k) \\\\ & = \\tan \\delta_{\\mathrm{ref}, k} - \\frac{\\delta_{\\mathrm{ref}, k}}{\\cos^2 \\delta_{\\mathrm{ref}, k}} + \\frac{1}{\\cos^2 \\delta_{\\mathrm{ref}, k}} \\delta_k \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#one-step-state-equation","title":"One-step state equation","text":"Based on the linearization, the error kinematics is formulated with the following linear equations,
\\[ \\begin{align} \\begin{pmatrix} y_{k+1} \\\\ \\theta_{k+1} \\end{pmatrix} = \\begin{pmatrix} 1 & v dt \\\\ 0 & 1 \\\\ \\end{pmatrix} \\begin{pmatrix} y_k \\\\ \\theta_k \\\\ \\end{pmatrix} + \\begin{pmatrix} 0 \\\\ \\frac{v dt}{L \\cos^{2} \\delta_{\\mathrm{ref}, k}} \\\\ \\end{pmatrix} \\delta_{k} + \\begin{pmatrix} 0 \\\\ \\frac{v \\tan(\\delta_{\\mathrm{ref}, k}) dt}{L} - \\frac{v \\delta_{\\mathrm{ref}, k} dt}{L \\cos^{2} \\delta_{\\mathrm{ref}, k}} - \\kappa_k v dt\\\\ \\end{pmatrix} \\end{align} \\]which can be formulated as follows with the state \\(\\boldsymbol{x}\\), control input \\(u\\) and some matrices, where \\(\\boldsymbol{x} = (y_k, \\theta_k)\\)
\\[ \\begin{align} \\boldsymbol{x}_{k+1} = A_k \\boldsymbol{x}_k + \\boldsymbol{b}_k u_k + \\boldsymbol{w}_k \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#time-series-state-equation","title":"Time-series state equation","text":"Then, we formulate time-series state equation by concatenating states, control inputs and matrices respectively as
\\[ \\begin{align} \\boldsymbol{x} = A \\boldsymbol{x}_0 + B \\boldsymbol{u} + \\boldsymbol{w} \\end{align} \\]where
\\[ \\begin{align} \\boldsymbol{x} = (\\boldsymbol{x}^T_1, \\boldsymbol{x}^T_2, \\boldsymbol{x}^T_3, \\dots, \\boldsymbol{x}^T_{n-1})^T \\\\ \\boldsymbol{u} = (u_0, u_1, u_2, \\dots, u_{n-2})^T \\\\ \\boldsymbol{w} = (\\boldsymbol{w}^T_0, \\boldsymbol{w}^T_1, \\boldsymbol{w}^T_2, \\dots, \\boldsymbol{w}^T_{n-1})^T. \\\\ \\end{align} \\]In detail, each matrices are constructed as follows.
\\[ \\begin{align} \\begin{pmatrix} \\boldsymbol{x}_1 \\\\ \\boldsymbol{x}_2 \\\\ \\boldsymbol{x}_3 \\\\ \\vdots \\\\ \\boldsymbol{x}_{n-1} \\end{pmatrix} = \\begin{pmatrix} A_0 \\\\ A_1 A_0 \\\\ A_2 A_1 A_0\\\\ \\vdots \\\\ \\prod\\limits_{k=0}^{n-1} A_{k} \\end{pmatrix} \\boldsymbol{x}_0 + \\begin{pmatrix} B_0 & 0 & & \\dots & 0 \\\\ A_0 B_0 & B_1 & 0 & \\dots & 0 \\\\ A_1 A_0 B_0 & A_0 B_1 & B_2 & \\dots & 0 \\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-3} A_k B_0 & \\prod\\limits_{k=0}^{n-4} A_k B_1 & \\dots & A_0 B_{n-3} & B_{n-2} \\end{pmatrix} \\begin{pmatrix} u_0 \\\\ u_1 \\\\ u_2 \\\\ \\vdots \\\\ u_{n-2} \\end{pmatrix} + \\begin{pmatrix} I & 0 & & \\dots & 0 \\\\ A_0 & I & 0 & \\dots & 0 \\\\ A_1 A_0 & A_0 & I & \\dots & 0 \\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-3} A_k & \\prod\\limits_{k=0}^{n-4} A_k & \\dots & A_0 & I \\end{pmatrix} \\begin{pmatrix} \\boldsymbol{w}_0 \\\\ \\boldsymbol{w}_1 \\\\ \\boldsymbol{w}_2 \\\\ \\vdots \\\\ \\boldsymbol{w}_{n-2} \\end{pmatrix} \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#free-boundary-conditioned-time-series-state-equation","title":"Free-boundary-conditioned time-series state equation","text":"For path planning which does not start from the current ego pose, \\(\\boldsymbol{x}_0\\) should be the design variable of optimization. Therefore, we make \\(\\boldsymbol{u}'\\) by concatenating \\(\\boldsymbol{x}_0\\) and \\(\\boldsymbol{u}\\), and redefine \\(\\boldsymbol{x}\\) as follows.
\\[ \\begin{align} \\boldsymbol{u}' & = (\\boldsymbol{x}^T_0, \\boldsymbol{u}^T)^T \\\\ \\boldsymbol{x} & = (\\boldsymbol{x}^T_0, \\boldsymbol{x}^T_1, \\boldsymbol{x}^T_2, \\dots, \\boldsymbol{x}^T_{n-1})^T \\end{align} \\]Then we get the following state equation
\\[ \\begin{align} \\boldsymbol{x}' = B \\boldsymbol{u}' + \\boldsymbol{w}, \\end{align} \\]which is in detail
\\[ \\begin{align} \\begin{pmatrix} \\boldsymbol{x}_0 \\\\ \\boldsymbol{x}_1 \\\\ \\boldsymbol{x}_2 \\\\ \\boldsymbol{x}_3 \\\\ \\vdots \\\\ \\boldsymbol{x}_{n-1} \\end{pmatrix} = \\begin{pmatrix} I & 0 & \\dots & & & 0 \\\\ A_0 & B_0 & 0 & & \\dots & 0 \\\\ A_1 A_0 & A_0 B_0 & B_1 & 0 & \\dots & 0 \\\\ A_2 A_1 A_0 & A_1 A_0 B_0 & A_0 B_1 & B_2 & \\dots & 0 \\\\ \\vdots & \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-1} A_k & \\prod\\limits_{k=0}^{n-3} A_k B_0 & \\prod\\limits_{k=0}^{n-4} A_k B_1 & \\dots & A_0 B_{n-3} & B_{n-2} \\end{pmatrix} \\begin{pmatrix} \\boldsymbol{x}_0 \\\\ u_0 \\\\ u_1 \\\\ u_2 \\\\ \\vdots \\\\ u_{n-2} \\end{pmatrix} + \\begin{pmatrix} 0 & \\dots & & & 0 \\\\ I & 0 & & \\dots & 0 \\\\ A_0 & I & 0 & \\dots & 0 \\\\ A_1 A_0 & A_0 & I & \\dots & 0 \\\\ \\vdots & \\vdots & & \\ddots & 0 \\\\ \\prod\\limits_{k=0}^{n-3} A_k & \\prod\\limits_{k=0}^{n-4} A_k & \\dots & A_0 & I \\end{pmatrix} \\begin{pmatrix} \\boldsymbol{w}_0 \\\\ \\boldsymbol{w}_1 \\\\ \\boldsymbol{w}_2 \\\\ \\vdots \\\\ \\boldsymbol{w}_{n-2} \\end{pmatrix}. \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#objective-function","title":"Objective function","text":"The objective function for smoothing and tracking is shown as follows, which can be formulated with value function matrices \\(Q, R\\).
\\[ \\begin{align} J_1 (\\boldsymbol{x}', \\boldsymbol{u}') & = w_y \\sum_{k} y_k^2 + w_{\\theta} \\sum_{k} \\theta_k^2 + w_{\\delta} \\sum_k \\delta_k^2 + w_{\\dot{\\delta}} \\sum_k \\dot{\\delta}_k^2 + w_{\\ddot{\\delta}} \\sum_k \\ddot{\\delta}_k^2 \\\\ & = \\boldsymbol{x}'^T Q \\boldsymbol{x}' + \\boldsymbol{u}'^T R \\boldsymbol{u}' \\\\ & = \\boldsymbol{u}'^T H \\boldsymbol{u}' + \\boldsymbol{u}'^T \\boldsymbol{f} \\end{align} \\]As mentioned before, the constraints to be collision free with obstacles and road boundaries are formulated to be soft constraints. Assuming that the lateral distance to the road boundaries or obstacles from the back wheel center, front wheel center, and the point between them are \\(y_{\\mathrm{base}, k}, y_{\\mathrm{top}, k}, y_{\\mathrm{mid}, k}\\) respectively, and slack variables for each point are \\(\\lambda_{\\mathrm{base}}, \\lambda_{\\mathrm{top}}, \\lambda_{\\mathrm{mid}}\\), the soft constraints can be formulated as follows.
\\[ y_{\\mathrm{base}, k, \\min} - \\lambda_{\\mathrm{base}, k} \\leq y_{\\mathrm{base}, k} (y_k) \\leq y_{\\mathrm{base}, k, \\max} + \\lambda_{\\mathrm{base}, k}\\\\ y_{\\mathrm{top}, k, \\min} - \\lambda_{\\mathrm{top}, k} \\leq y_{\\mathrm{top}, k} (y_k) \\leq y_{\\mathrm{top}, k, \\max} + \\lambda_{\\mathrm{top}, k}\\\\ y_{\\mathrm{mid}, k, \\min} - \\lambda_{\\mathrm{mid}, k} \\leq y_{\\mathrm{mid}, k} (y_k) \\leq y_{\\mathrm{mid}, k, \\max} + \\lambda_{\\mathrm{mid}, k} \\\\ 0 \\leq \\lambda_{\\mathrm{base}, k} \\\\ 0 \\leq \\lambda_{\\mathrm{top}, k} \\\\ 0 \\leq \\lambda_{\\mathrm{mid}, k} \\]Since \\(y_{\\mathrm{base}, k}, y_{\\mathrm{top}, k}, y_{\\mathrm{mid}, k}\\) is formulated as a linear function of \\(y_k\\), the objective function for soft constraints is formulated as follows.
\\[ \\begin{align} J_2 & (\\boldsymbol{\\lambda}_\\mathrm{base}, \\boldsymbol{\\lambda}_\\mathrm{top}, \\boldsymbol {\\lambda}_\\mathrm{mid})\\\\ & = w_{\\mathrm{base}} \\sum_{k} \\lambda_{\\mathrm{base}, k} + w_{\\mathrm{mid}} \\sum_k \\lambda_{\\mathrm{mid}, k} + w_{\\mathrm{top}} \\sum_k \\lambda_{\\mathrm{top}, k} \\end{align} \\]Slack variables are also design variables for optimization. We define a vector \\(\\boldsymbol{v}\\), that concatenates all the design variables.
\\[ \\begin{align} \\boldsymbol{v} = \\begin{pmatrix} \\boldsymbol{u}'^T & \\boldsymbol{\\lambda}_\\mathrm{base}^T & \\boldsymbol{\\lambda}_\\mathrm{top}^T & \\boldsymbol{\\lambda}_\\mathrm{mid}^T \\end{pmatrix}^T \\end{align} \\]The summation of these two objective functions is the objective function for the optimization problem.
\\[ \\begin{align} \\min_{\\boldsymbol{v}} J (\\boldsymbol{v}) = \\min_{\\boldsymbol{v}} J_1 (\\boldsymbol{u}') + J_2 (\\boldsymbol{\\lambda}_\\mathrm{base}, \\boldsymbol{\\lambda}_\\mathrm{top}, \\boldsymbol{\\lambda}_\\mathrm{mid}) \\end{align} \\]As mentioned before, we use hard constraints where some trajectory points in front of the ego are the same as the previously generated trajectory points. This hard constraints is formulated as follows.
\\[ \\begin{align} \\delta_k = \\delta_{k}^{\\mathrm{prev}} (0 \\leq i \\leq N_{\\mathrm{fix}}) \\end{align} \\]Finally we transform those objective functions to the following QP problem, and solve it.
\\[ \\begin{align} \\min_{\\boldsymbol{v}} \\ & \\frac{1}{2} \\boldsymbol{v}^T \\boldsymbol{H} \\boldsymbol{v} + \\boldsymbol{f} \\boldsymbol{v} \\\\ \\mathrm{s.t.} \\ & \\boldsymbol{b}_{lower} \\leq \\boldsymbol{A} \\boldsymbol{v} \\leq \\boldsymbol{b}_{upper} \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#constraints","title":"Constraints","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#steer-angle-limitation","title":"Steer angle limitation","text":"Steer angle has a limitation \\(\\delta_{max}\\) and \\(\\delta_{min}\\). Therefore we add linear inequality equations.
\\[ \\begin{align} \\delta_{min} \\leq \\delta_i \\leq \\delta_{max} \\end{align} \\]"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#collision-free","title":"Collision free","text":"To realize collision-free trajectory planning, we have to formulate constraints that the vehicle is inside the road and also does not collide with obstacles in linear equations. For linearity, we implemented some methods to approximate the vehicle shape with a set of circles, that is reliable and easy to implement.
Now we formulate the linear constraints where a set of circles on each trajectory point is collision-free. By using the drivable area, we calculate upper and lower boundaries along reference points, which will be interpolated on any position on the trajectory. NOTE that upper and lower boundary is left and right, respectively.
Assuming that upper and lower boundaries are \\(b_l\\), \\(b_u\\) respectively, and \\(r\\) is a radius of a circle, lateral deviation of the circle center \\(y'\\) has to be
\\[ b_l + r \\leq y' \\leq b_u - r. \\]Based on the following figure, \\(y'\\) can be formulated as follows.
\\[ \\begin{align} y' & = L \\sin(\\theta + \\beta) + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\\\ & = L \\sin \\theta \\cos \\beta + L \\cos \\theta \\sin \\beta + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\\\ & \\approx L \\theta \\cos \\beta + L \\sin \\beta + y \\cos \\beta - l \\sin(\\gamma - \\phi_a) \\end{align} \\] \\[ b_l + r - \\lambda \\leq y' \\leq b_u - r + \\lambda. \\] \\[ \\begin{align} y' & = C_1 \\boldsymbol{x} + C_2 \\\\ & = C_1 (B \\boldsymbol{v} + \\boldsymbol{w}) + C_2 \\\\ & = C_1 B \\boldsymbol{v} + \\boldsymbol{w} + C_2 \\end{align} \\]Note that longitudinal position of the circle center and the trajectory point to calculate boundaries are different. But each boundaries are vertical against the trajectory, resulting in less distortion by the longitudinal position difference since road boundaries does not change so much. For example, if the boundaries are not vertical against the trajectory and there is a certain difference of longitudinal position between the circe center and the trajectory point, we can easily guess that there is much more distortion when comparing lateral deviation and boundaries.
\\[ \\begin{align} A_{blk} & = \\begin{pmatrix} C_1 B & O & \\dots & O & I_{N_{ref} \\times N_{ref}} & O \\dots & O\\\\ -C_1 B & O & \\dots & O & I & O \\dots & O\\\\ O & O & \\dots & O & I & O \\dots & O \\end{pmatrix} \\in \\boldsymbol{R}^{3 N_{ref} \\times D_v + N_{circle} N_{ref}} \\\\ \\boldsymbol{b}_{lower, blk} & = \\begin{pmatrix} \\boldsymbol{b}_{lower} - C_1 \\boldsymbol{w} - C_2 \\\\ -\\boldsymbol{b}_{upper} + C_1 \\boldsymbol{w} + C_2 \\\\ O \\end{pmatrix} \\in \\boldsymbol{R}^{3 N_{ref}} \\\\ \\boldsymbol{b}_{upper, blk} & = \\boldsymbol{\\infty} \\in \\boldsymbol{R}^{3 N_{ref}} \\end{align} \\]We will explain options for optimization.
"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#l-infinity-optimization","title":"L-infinity optimization","text":"The above formulation is called L2 norm for slack variables. Instead, if we use L-infinity norm where slack variables are shared by enabling l_inf_norm
.
In order to make the trajectory optimization problem stabler to solve, the boundary constraint which the trajectory footprints should be inside and optimization weights are modified.
"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#keep-minimum-boundary-width","title":"Keep minimum boundary width","text":"The drivable area's width is sometimes smaller than the vehicle width since the behavior module does not consider the width. To realize the stable trajectory optimization, the drivable area's width is guaranteed to be larger than the vehicle width and an additional margin in a rule-based way.
We cannot distinguish the boundary by roads from the boundary by obstacles for avoidance in the motion planner, the drivable area is modified in the following multi steps assuming that \\(l_{width}\\) is the vehicle width and \\(l_{margin}\\) is an additional margin.
"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#extend-violated-boundary","title":"Extend violated boundary","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#avoid-sudden-steering","title":"Avoid sudden steering","text":"When the obstacle suddenly appears which is determined to avoid by the behavior module, the drivable area's shape just in front of the ego will change, resulting in the sudden steering. To prevent this, the drivable area's shape close to the ego is fixed as previous drivable area's shape.
Assume that \\(v_{ego}\\) is the ego velocity, and \\(t_{fix}\\) is the time to fix the forward drivable area's shape.
"},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#calculate-avoidance-cost","title":"Calculate avoidance cost","text":""},{"location":"planning/obstacle_avoidance_planner/docs/mpt/#change-optimization-weights","title":"Change optimization weights","text":"\\[ \\begin{align} r & = \\mathrm{lerp}(w^{\\mathrm{steer}}_{\\mathrm{normal}}, w^{\\mathrm{steer}}_{\\mathrm{avoidance}}, c) \\\\ w^{\\mathrm{lat}} & = \\mathrm{lerp}(w^{\\mathrm{lat}}_{\\mathrm{normal}}, w^{\\mathrm{lat}}_{\\mathrm{avoidance}}, r) \\\\ w^{\\mathrm{yaw}} & = \\mathrm{lerp}(w^{\\mathrm{yaw}}_{\\mathrm{normal}}, w^{\\mathrm{yaw}}_{\\mathrm{avoidance}}, r) \\end{align} \\]Assume that \\(c\\) is the normalized avoidance cost, \\(w^{\\mathrm{lat}}\\) is the weight for lateral error, \\(w^{\\mathrm{yaw}}\\) is the weight for yaw error, and other variables are as follows.
Parameter Type Description \\(w^{\\mathrm{steer}}_{\\mathrm{normal}}\\) double weight for steering minimization in normal cases \\(w^{\\mathrm{steer}}_{\\mathrm{avoidance}}\\) double weight for steering minimization in avoidance cases \\(w^{\\mathrm{lat}}_{\\mathrm{normal}}\\) double weight for lateral error minimization in normal cases \\(w^{\\mathrm{lat}}_{\\mathrm{avoidance}}\\) double weight for lateral error minimization in avoidance cases \\(w^{\\mathrm{yaw}}_{\\mathrm{normal}}\\) double weight for yaw error minimization in normal cases \\(w^{\\mathrm{yaw}}_{\\mathrm{avoidance}}\\) double weight for yaw error minimization in avoidance cases"},{"location":"planning/obstacle_cruise_planner/","title":"Obstacle Cruise Planner","text":""},{"location":"planning/obstacle_cruise_planner/#obstacle-cruise-planner","title":"Obstacle Cruise Planner","text":""},{"location":"planning/obstacle_cruise_planner/#overview","title":"Overview","text":"The obstacle_cruise_planner
package has following modules.
~/input/trajectory
autoware_auto_planning_msgs::Trajectory input trajectory ~/input/objects
autoware_auto_perception_msgs::PredictedObjects dynamic objects ~/input/odometry
nav_msgs::msg::Odometry ego odometry"},{"location":"planning/obstacle_cruise_planner/#output-topics","title":"Output topics","text":"Name Type Description ~/output/trajectory
autoware_auto_planning_msgs::Trajectory output trajectory ~/output/velocity_limit
tier4_planning_msgs::VelocityLimit velocity limit for cruising ~/output/clear_velocity_limit
tier4_planning_msgs::VelocityLimitClearCommand clear command for velocity limit ~/output/stop_reasons
tier4_planning_msgs::StopReasonArray reasons that make the vehicle to stop"},{"location":"planning/obstacle_cruise_planner/#design","title":"Design","text":"Design for the following functions is defined here.
A data structure for cruise and stop planning is as follows. This planner data is created first, and then sent to the planning algorithm.
struct PlannerData\n{\nrclcpp::Time current_time;\nautoware_auto_planning_msgs::msg::Trajectory traj;\ngeometry_msgs::msg::Pose current_pose;\ndouble ego_vel;\ndouble current_acc;\nstd::vector<Obstacle> target_obstacles;\n};\n
struct Obstacle\n{\nrclcpp::Time stamp; // This is not the current stamp, but when the object was observed.\ngeometry_msgs::msg::Pose pose; // interpolated with the current stamp\nbool orientation_reliable;\nTwist twist;\nbool twist_reliable;\nObjectClassification classification;\nstd::string uuid;\nShape shape;\nstd::vector<PredictedPath> predicted_paths;\n};\n
"},{"location":"planning/obstacle_cruise_planner/#behavior-determination-against-obstacles","title":"Behavior determination against obstacles","text":"Obstacles for cruising, stopping and slowing down are selected in this order based on their pose and velocity. The obstacles not in front of the ego will be ignored.
"},{"location":"planning/obstacle_cruise_planner/#determine-cruise-vehicles","title":"Determine cruise vehicles","text":"The obstacles meeting the following condition are determined as obstacles for cruising.
behavior_determination.cruise.max_lat_margin
.common.cruise_obstacle_type.*
.common.cruise_obstacle_type.inside.*
.behavior_determination.obstacle_velocity_threshold_from_cruise_to_stop
.common.cruise_obstacle_type.outside.*
.behavior_determination.cruise.outside_obstacle.obstacle_velocity_threshold
.behavior_determination.cruise.outside_obstacle.ego_obstacle_overlap_time_threshold
.common.cruise_obstacle_type.inside.unknown
bool flag to consider unknown objects for cruising common.cruise_obstacle_type.inside.car
bool flag to consider unknown objects for cruising common.cruise_obstacle_type.inside.truck
bool flag to consider car objects for cruising common.cruise_obstacle_type.inside.truck
bool flag to consider truck objects for cruising ... bool ... common.cruise_obstacle_type.outside.unknown
bool flag to consider unknown objects for cruising common.cruise_obstacle_type.outside.truck
bool flag to consider car objects for cruising common.cruise_obstacle_type.outside.truck
double maximum lateral margin for cruise obstacles behavior_determination.obstacle_velocity_threshold_from_cruise_to_stop
double maximum obstacle velocity for cruise obstacle inside the trajectory behavior_determination.cruise.outside_obstacle.obstacle_velocity_threshold
double maximum obstacle velocity for cruise obstacle outside the trajectory behavior_determination.cruise.outside_obstacle.ego_obstacle_overlap_time_threshold
double maximum overlap time of the collision between the ego and obstacle"},{"location":"planning/obstacle_cruise_planner/#determine-stop-vehicles","title":"Determine stop vehicles","text":"Among obstacles which are not for cruising, the obstacles meeting the following condition are determined as obstacles for stopping.
common.stop_obstacle_type.*
.behavior_determination.stop.max_lat_margin
.behavior_determination.obstacle_velocity_threshold_from_stop_to_cruise
.behavior_determination.crossing_obstacle.obstacle_velocity_threshold
common.stop_obstacle_type.unknown
bool flag to consider unknown objects for stopping common.stop_obstacle_type.car
bool flag to consider unknown objects for stopping common.stop_obstacle_type.truck
bool flag to consider car objects for stopping common.stop_obstacle_type.truck
double maximum lateral margin for stop obstacles behavior_determination.crossing_obstacle.obstacle_velocity_threshold
double maximum crossing obstacle velocity to ignore behavior_determination.obstacle_velocity_threshold_from_stop_to_cruise
double maximum obstacle velocity for stop"},{"location":"planning/obstacle_cruise_planner/#determine-slow-down-vehicles","title":"Determine slow down vehicles","text":"Among obstacles which are not for cruising and stopping, the obstacles meeting the following condition are determined as obstacles for slowing down.
common.slow_down_obstacle_type.*
.behavior_determination.slow_down.max_lat_margin
.common.slow_down_obstacle_type.unknown
bool flag to consider unknown objects for slowing down common.slow_down_obstacle_type.car
bool flag to consider unknown objects for slowing down common.slow_down_obstacle_type.truck
bool flag to consider car objects for slowing down common.slow_down_obstacle_type.truck
double maximum lateral margin for slow down obstacles"},{"location":"planning/obstacle_cruise_planner/#note","title":"NOTE","text":""},{"location":"planning/obstacle_cruise_planner/#1-crossing-obstacles","title":"*1: Crossing obstacles","text":"Crossing obstacle is the object whose orientation's yaw angle against the ego's trajectory is smaller than behavior_determination.crossing_obstacle.obstacle_traj_angle_threshold
.
behavior_determination.crossing_obstacle.obstacle_traj_angle_threshold
double maximum angle against the ego's trajectory to judge the obstacle is crossing the trajectory [rad]"},{"location":"planning/obstacle_cruise_planner/#2-enough-collision-time-margin","title":"*2: Enough collision time margin","text":"We predict the collision area and its time by the ego with a constant velocity motion and the obstacle with its predicted path. Then, we calculate a collision time margin which is the difference of the time when the ego will be inside the collision area and the obstacle will be inside the collision area. When this time margin is smaller than behavior_determination.stop.crossing_obstacle.collision_time_margin
, the margin is not enough.
behavior_determination.stop.crossing_obstacle.collision_time_margin
double maximum collision time margin of the ego and obstacle"},{"location":"planning/obstacle_cruise_planner/#stop-planning","title":"Stop planning","text":"Parameter Type Description common.min_strong_accel
double ego's minimum acceleration to stop [m/ss] common.safe_distance_margin
double distance with obstacles for stop [m] common.terminal_safe_distance_margin
double terminal_distance with obstacles for stop, which cannot be exceed safe distance margin [m] The role of the stop planning is keeping a safe distance with static vehicle objects or dynamic/static non vehicle objects.
The stop planning just inserts the stop point in the trajectory to keep a distance with obstacles. The safe distance is parameterized as common.safe_distance_margin
. When it stops at the end of the trajectory, and obstacle is on the same point, the safe distance becomes terminal_safe_distance_margin
.
When inserting the stop point, the required acceleration for the ego to stop in front of the stop point is calculated. If the acceleration is less than common.min_strong_accel
, the stop planning will be cancelled since this package does not assume a strong sudden brake for emergency.
common.safe_distance_margin
double minimum distance with obstacles for cruise [m] The role of the cruise planning is keeping a safe distance with dynamic vehicle objects with smoothed velocity transition. This includes not only cruising a front vehicle, but also reacting a cut-in and cut-out vehicle.
The safe distance is calculated dynamically based on the Responsibility-Sensitive Safety (RSS) by the following equation.
\\[ d_{rss} = v_{ego} t_{idling} + \\frac{1}{2} a_{ego} t_{idling}^2 + \\frac{v_{ego}^2}{2 a_{ego}} - \\frac{v_{obstacle}^2}{2 a_{obstacle}}, \\]assuming that \\(d_{rss}\\) is the calculated safe distance, \\(t_{idling}\\) is the idling time for the ego to detect the front vehicle's deceleration, \\(v_{ego}\\) is the ego's current velocity, \\(v_{obstacle}\\) is the front obstacle's current velocity, \\(a_{ego}\\) is the ego's acceleration, and \\(a_{obstacle}\\) is the obstacle's acceleration. These values are parameterized as follows. Other common values such as ego's minimum acceleration is defined in common.param.yaml
.
common.idling_time
double idling time for the ego to detect the front vehicle starting deceleration [s] common.min_ego_accel_for_rss
double ego's acceleration for RSS [m/ss] common.min_object_accel_for_rss
double front obstacle's acceleration for RSS [m/ss] The detailed formulation is as follows.
\\[ \\begin{align} d_{error} & = d - d_{rss} \\\\ d_{normalized} & = lpf(d_{error} / d_{obstacle}) \\\\ d_{quad, normalized} & = sign(d_{normalized}) *d_{normalized}*d_{normalized} \\\\ v_{pid} & = pid(d_{quad, normalized}) \\\\ v_{add} & = v_{pid} > 0 ? v_{pid}* w_{acc} : v_{pid} \\\\ v_{target} & = max(v_{ego} + v_{add}, v_{min, cruise}) \\end{align} \\] Variable Descriptiond
actual distance to obstacle d_{rss}
ideal distance to obstacle based on RSS v_{min, cruise}
min_cruise_target_vel
w_{acc}
output_ratio_during_accel
lpf(val)
apply low-pass filter to val
pid(val)
apply pid to val
"},{"location":"planning/obstacle_cruise_planner/#slow-down-planning","title":"Slow down planning","text":"Parameter Type Description slow_down.labels
vector(string) A vector of labels for customizing obstacle-label-based slow down behavior. Each label represents an obstacle type that will be treated differently when applying slow down. The possible labels are (\"default\" (Mandatory), \"unknown\",\"car\",\"truck\",\"bus\",\"trailer\",\"motorcycle\",\"bicycle\" or \"pedestrian\") slow_down.default.static.min_lat_velocity
double minimum velocity to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.static.max_lat_velocity
double maximum velocity to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.static.min_lat_margin
double minimum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.static.max_lat_margin
double maximum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be static, or not moving slow_down.default.moving.min_lat_velocity
double minimum velocity to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving slow_down.default.moving.max_lat_velocity
double maximum velocity to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving slow_down.default.moving.min_lat_margin
double minimum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving slow_down.default.moving.max_lat_margin
double maximum lateral margin to linearly calculate slow down velocity [m]. Note: This default value will be used when the detected obstacle label does not match any of the slow_down.labels and the obstacle is considered to be moving (optional) slow_down.\"label\".(static & moving).min_lat_velocity
double minimum velocity to linearly calculate slow down velocity [m]. Note: only for obstacles specified in slow_down.labels
. Requires a static
and a moving
value (optional) slow_down.\"label\".(static & moving).max_lat_velocity
double maximum velocity to linearly calculate slow down velocity [m]. Note: only for obstacles specified in slow_down.labels
. Requires a static
and a moving
value (optional) slow_down.\"label\".(static & moving).min_lat_margin
double minimum lateral margin to linearly calculate slow down velocity [m]. Note: only for obstacles specified in slow_down.labels
. Requires a static
and a moving
value (optional) slow_down.\"label\".(static & moving).max_lat_margin
double maximum lateral margin to linearly calculate slow down velocity [m]. Note: only for obstacles specified in slow_down.labels
. Requires a static
and a moving
value The role of the slow down planning is inserting slow down velocity in the trajectory where the trajectory points are close to the obstacles. The parameters can be customized depending on the obstacle type (see slow_down.labels
), making it possible to adjust the slow down behavior depending if the obstacle is a pedestrian, bicycle, car, etc. Each obstacle type has a static
and a moving
parameter set, so it is possible to customize the slow down response of the ego vehicle according to the obstacle type and if it is moving or not. If an obstacle is determined to be moving, the corresponding moving
set of parameters will be used to compute the vehicle slow down, otherwise, the static
parameters will be used. The static
and moving
separation is useful for customizing the ego vehicle slow down behavior to, for example, slow down more significantly when passing stopped vehicles that might cause occlusion or that might suddenly open its doors.
An obstacle is classified as static
if its total speed is less than the moving_object_speed_threshold
parameter. Furthermore, a hysteresis based approach is used to avoid chattering, it uses the moving_object_hysteresis_range
parameter range and the obstacle's previous state (moving
or static
) to determine if the obstacle is moving or not. In other words, if an obstacle was previously classified as static
, it will not change its classification to moving
unless its total speed is greater than moving_object_speed_threshold
+ moving_object_hysteresis_range
. Likewise, an obstacle previously classified as moving
, will only change to static
if its speed is lower than moving_object_speed_threshold
- moving_object_hysteresis_range
.
The closest point on the obstacle to the ego's trajectory is calculated. Then, the slow down velocity is calculated by linear interpolation with the distance between the point and trajectory as follows.
Variable Descriptionv_{out}
calculated velocity for slow down v_{min}
slow_down.min_lat_velocity
v_{max}
slow_down.max_lat_velocity
l_{min}
slow_down.min_lat_margin
l_{max}
slow_down.max_lat_margin
l'_{max}
behavior_determination.slow_down.max_lat_margin
The calculated velocity is inserted in the trajectory where the obstacle is inside the area with behavior_determination.slow_down.max_lat_margin
.
Successive functions consist of obstacle_cruise_planner
as follows.
Various algorithms for stop and cruise planning will be implemented, and one of them is designated depending on the use cases. The core algorithm implementation generateTrajectory
depends on the designated algorithm.
Currently, only a PID-based planner is supported. Each planner will be explained in the following.
Parameter Type Descriptioncommon.planning_method
string cruise and stop planning algorithm, selected from \"pid_base\""},{"location":"planning/obstacle_cruise_planner/#pid-based-planner","title":"PID-based planner","text":""},{"location":"planning/obstacle_cruise_planner/#stop-planning_1","title":"Stop planning","text":"In the pid_based_planner
namespace,
obstacle_velocity_threshold_from_cruise_to_stop
double obstacle velocity threshold to be stopped from cruised [m/s] Only one obstacle is targeted for the stop planning. It is the obstacle among obstacle candidates whose velocity is less than obstacle_velocity_threshold_from_cruise_to_stop
, and which is the nearest to the ego along the trajectory. A stop point is inserted keepingcommon.safe_distance_margin
distance between the ego and obstacle.
Note that, as explained in the stop planning design, a stop planning which requires a strong acceleration (less than common.min_strong_accel
) will be canceled.
In the pid_based_planner
namespace,
kp
double p gain for pid control [-] ki
double i gain for pid control [-] kd
double d gain for pid control [-] output_ratio_during_accel
double The output velocity will be multiplied by the ratio during acceleration to follow the front vehicle. [-] vel_to_acc_weight
double target acceleration is target velocity * vel_to_acc_weight
[-] min_cruise_target_vel
double minimum target velocity during cruise [m/s] In order to keep the safe distance, the target velocity and acceleration is calculated and sent as an external velocity limit to the velocity smoothing package (motion_velocity_smoother
by default). The target velocity and acceleration is respectively calculated with the PID controller according to the error between the reference safe distance and the actual distance.
under construction
"},{"location":"planning/obstacle_cruise_planner/#minor-functions","title":"Minor functions","text":""},{"location":"planning/obstacle_cruise_planner/#prioritization-of-behavior-modules-stop-point","title":"Prioritization of behavior module's stop point","text":"When stopping for a pedestrian walking on the crosswalk, the behavior module inserts the zero velocity in the trajectory in front of the crosswalk. Also obstacle_cruise_planner
's stop planning also works, and the ego may not reach the behavior module's stop point since the safe distance defined in obstacle_cruise_planner
may be longer than the behavior module's safe distance. To resolve this non-alignment of the stop point between the behavior module and obstacle_cruise_planner
, common.min_behavior_stop_margin
is defined. In the case of the crosswalk described above, obstacle_cruise_planner
inserts the stop point with a distance common.min_behavior_stop_margin
at minimum between the ego and obstacle.
common.min_behavior_stop_margin
double minimum stop margin when stopping with the behavior module enabled [m]"},{"location":"planning/obstacle_cruise_planner/#a-function-to-keep-the-closest-stop-obstacle-in-target-obstacles","title":"A function to keep the closest stop obstacle in target obstacles","text":"In order to keep the closest stop obstacle in the target obstacles, we check whether it is disappeared or not from the target obstacles in the checkConsistency
function. If the previous closest stop obstacle is remove from the lists, we keep it in the lists for stop_obstacle_hold_time_threshold
seconds. Note that if a new stop obstacle appears and the previous closest obstacle removes from the lists, we do not add it to the target obstacles again.
behavior_determination.stop_obstacle_hold_time_threshold
double maximum time for holding closest stop obstacle [s]"},{"location":"planning/obstacle_cruise_planner/#how-to-debug","title":"How To Debug","text":"How to debug can be seen here.
"},{"location":"planning/obstacle_cruise_planner/#known-limits","title":"Known Limits","text":"rough_detection_area
a small value.motion_velocity_smoother
by default) whether or not the ego realizes the designated target speed. If the velocity smoothing package is updated, please take care of the vehicle's behavior as much as possible.Green polygons which is a detection area is visualized by detection_polygons
in the ~/debug/marker
topic. To determine each behavior (cruise, stop, and slow down), if behavior_determination.*.max_lat_margin
is not zero, the polygons are expanded with the additional width.
Red points which are collision points with obstacle are visualized by *_collision_points
for each behavior in the ~/debug/marker
topic.
Orange sphere which is an obstacle for cruise is visualized by obstacles_to_cruise
in the ~/debug/marker
topic.
Orange wall which means a safe distance to cruise if the ego's front meets the wall is visualized in the ~/debug/cruise/virtual_wall
topic.
Red sphere which is an obstacle for stop is visualized by obstacles_to_stop
in the ~/debug/marker
topic.
Red wall which means a safe distance to stop if the ego's front meets the wall is visualized in the ~/virtual_wall
topic.
Yellow sphere which is an obstacle for slow_down is visualized by obstacles_to_slow_down
in the ~/debug/marker
topic.
Yellow wall which means a safe distance to slow_down if the ego's front meets the wall is visualized in the ~/debug/slow_down/virtual_wall
topic.
obstacle_stop_planner
has following modules
~/input/pointcloud
sensor_msgs::PointCloud2 obstacle pointcloud ~/input/trajectory
autoware_auto_planning_msgs::Trajectory trajectory ~/input/vector_map
autoware_auto_mapping_msgs::HADMapBin vector map ~/input/odometry
nav_msgs::Odometry vehicle velocity ~/input/dynamic_objects
autoware_auto_perception_msgs::PredictedObjects dynamic objects ~/input/expand_stop_range
tier4_planning_msgs::msg::ExpandStopRange expand stop range"},{"location":"planning/obstacle_stop_planner/#output-topics","title":"Output topics","text":"Name Type Description ~output/trajectory
autoware_auto_planning_msgs::Trajectory trajectory to be followed ~output/stop_reasons
tier4_planning_msgs::StopReasonArray reasons that cause the vehicle to stop"},{"location":"planning/obstacle_stop_planner/#common-parameter","title":"Common Parameter","text":"Parameter Type Description enable_slow_down
bool enable slow down planner [-] max_velocity
double max velocity [m/s] chattering_threshold
double even if the obstacle disappears, the stop judgment continues for chattering_threshold [s] enable_z_axis_obstacle_filtering
bool filter obstacles in z axis (height) [-] z_axis_filtering_buffer
double additional buffer for z axis filtering [m] use_predicted_objects
bool whether to use predicted objects for collision and slowdown detection [-] predicted_object_filtering_threshold
double threshold for filtering predicted objects [valid only publish_obstacle_polygon true] [m] publish_obstacle_polygon
bool if use_predicted_objects is true, node publishes collision polygon [-]"},{"location":"planning/obstacle_stop_planner/#obstacle-stop-planner_1","title":"Obstacle Stop Planner","text":""},{"location":"planning/obstacle_stop_planner/#role","title":"Role","text":"This module inserts the stop point before the obstacle with margin. In nominal case, the margin is the sum of baselink_to_front
and max_longitudinal_margin
. The baselink_to_front
means the distance between baselink
( center of rear-wheel axis) and front of the car. The detection area is generated along the processed trajectory as following figure. (This module cut off the input trajectory behind the ego position and decimates the trajectory points for reducing computational costs.)
parameters for obstacle stop planner
target for obstacle stop planner
If another stop point has already been inserted by other modules within max_longitudinal_margin
, the margin is the sum of baselink_to_front
and min_longitudinal_margin
. This feature exists to avoid stopping unnaturally position. (For example, the ego stops unnaturally far away from stop line of crosswalk that pedestrians cross to without this feature.)
minimum longitudinal margin
The module searches the obstacle pointcloud within detection area. When the pointcloud is found, Adaptive Cruise Controller
modules starts to work. only when Adaptive Cruise Controller
modules does not insert target velocity, the stop point is inserted to the trajectory. The stop point means the point with 0 velocity.
If it needs X meters (e.g. 0.5 meters) to stop once the vehicle starts moving due to the poor vehicle control performance, the vehicle goes over the stopping position that should be strictly observed when the vehicle starts to moving in order to approach the near stop point (e.g. 0.3 meters away).
This module has parameter hold_stop_margin_distance
in order to prevent from these redundant restart. If the vehicle is stopped within hold_stop_margin_distance
meters from stop point of the module, the module judges that the vehicle has already stopped for the module's stop point and plans to keep stopping current position even if the vehicle is stopped due to other factors.
parameters
outside the hold_stop_margin_distance
inside the hold_stop_margin_distance"},{"location":"planning/obstacle_stop_planner/#parameters","title":"Parameters","text":""},{"location":"planning/obstacle_stop_planner/#stop-position","title":"Stop position","text":"Parameter Type Description
max_longitudinal_margin
double margin between obstacle and the ego's front [m] max_longitudinal_margin_behind_goal
double margin between obstacle and the ego's front when the stop point is behind the goal[m] min_longitudinal_margin
double if any obstacle exists within max_longitudinal_margin
, this module set margin as the value of stop margin to min_longitudinal_margin
[m] hold_stop_margin_distance
double parameter for restart prevention (See above section) [m]"},{"location":"planning/obstacle_stop_planner/#obstacle-detection-area","title":"Obstacle detection area","text":"Parameter Type Description lateral_margin
double lateral margin from the vehicle footprint for collision obstacle detection area [m] step_length
double step length for pointcloud search range [m] enable_stop_behind_goal_for_obstacle
bool enabling extend trajectory after goal lane for obstacle detection"},{"location":"planning/obstacle_stop_planner/#flowchart","title":"Flowchart","text":""},{"location":"planning/obstacle_stop_planner/#slow-down-planner","title":"Slow Down Planner","text":""},{"location":"planning/obstacle_stop_planner/#role_1","title":"Role","text":"This module inserts the slow down section before the obstacle with forward margin and backward margin. The forward margin is the sum of baselink_to_front
and longitudinal_forward_margin
, and the backward margin is the sum of baselink_to_front
and longitudinal_backward_margin
. The ego keeps slow down velocity in slow down section. The velocity is calculated the following equation.
\\(v_{target} = v_{min} + \\frac{l_{ld} - l_{vw}/2}{l_{margin}} (v_{max} - v_{min} )\\)
min_slow_down_velocity
[m/s]max_slow_down_velocity
[m/s]lateral_margin
[m]The above equation means that the smaller the lateral deviation of the pointcloud, the lower the velocity of the slow down section.
parameters for slow down planner
target for slow down planner"},{"location":"planning/obstacle_stop_planner/#parameters_1","title":"Parameters","text":""},{"location":"planning/obstacle_stop_planner/#slow-down-section","title":"Slow down section","text":"Parameter Type Description
longitudinal_forward_margin
double margin between obstacle and the ego's front [m] longitudinal_backward_margin
double margin between obstacle and the ego's rear [m]"},{"location":"planning/obstacle_stop_planner/#obstacle-detection-area_1","title":"Obstacle detection area","text":"Parameter Type Description lateral_margin
double lateral margin from the vehicle footprint for slow down obstacle detection area [m]"},{"location":"planning/obstacle_stop_planner/#slow-down-target-velocity","title":"Slow down target velocity","text":"Parameter Type Description max_slow_down_velocity
double max slow down velocity [m/s] min_slow_down_velocity
double min slow down velocity [m/s]"},{"location":"planning/obstacle_stop_planner/#flowchart_1","title":"Flowchart","text":""},{"location":"planning/obstacle_stop_planner/#adaptive-cruise-controller","title":"Adaptive Cruise Controller","text":""},{"location":"planning/obstacle_stop_planner/#role_2","title":"Role","text":"Adaptive Cruise Controller
module embeds maximum velocity in trajectory when there is a dynamic point cloud on the trajectory. The value of maximum velocity depends on the own velocity, the velocity of the point cloud ( = velocity of the front car), and the distance to the point cloud (= distance to the front car).
adaptive_cruise_control.use_object_to_estimate_vel
bool use dynamic objects for estimating object velocity or not (valid only if osp.use_predicted_objects false) adaptive_cruise_control.use_pcl_to_estimate_vel
bool use raw pointclouds for estimating object velocity or not (valid only if osp.use_predicted_objects false) adaptive_cruise_control.consider_obj_velocity
bool consider forward vehicle velocity to calculate target velocity in adaptive cruise or not adaptive_cruise_control.obstacle_velocity_thresh_to_start_acc
double start adaptive cruise control when the velocity of the forward obstacle exceeds this value [m/s] adaptive_cruise_control.obstacle_velocity_thresh_to_stop_acc
double stop acc when the velocity of the forward obstacle falls below this value [m/s] adaptive_cruise_control.emergency_stop_acceleration
double supposed minimum acceleration (deceleration) in emergency stop [m/ss] adaptive_cruise_control.emergency_stop_idling_time
double supposed idling time to start emergency stop [s] adaptive_cruise_control.min_dist_stop
double minimum distance of emergency stop [m] adaptive_cruise_control.obstacle_emergency_stop_acceleration
double supposed minimum acceleration (deceleration) in emergency stop [m/ss] adaptive_cruise_control.max_standard_acceleration
double supposed maximum acceleration in active cruise control [m/ss] adaptive_cruise_control.min_standard_acceleration
double supposed minimum acceleration (deceleration) in active cruise control [m/ss] adaptive_cruise_control.standard_idling_time
double supposed idling time to react object in active cruise control [s] adaptive_cruise_control.min_dist_standard
double minimum distance in active cruise control [m] adaptive_cruise_control.obstacle_min_standard_acceleration
double supposed minimum acceleration of forward obstacle [m/ss] adaptive_cruise_control.margin_rate_to_change_vel
double rate of margin distance to insert target velocity [-] adaptive_cruise_control.use_time_compensation_to_calc_distance
bool use time-compensation to calculate distance to forward vehicle adaptive_cruise_control.p_coefficient_positive
double coefficient P in PID control (used when target dist -current_dist >=0) [-] adaptive_cruise_control.p_coefficient_negative
double coefficient P in PID control (used when target dist -current_dist <0) [-] adaptive_cruise_control.d_coefficient_positive
double coefficient D in PID control (used when delta_dist >=0) [-] adaptive_cruise_control.d_coefficient_negative
double coefficient D in PID control (used when delta_dist <0) [-] adaptive_cruise_control.object_polygon_length_margin
double The distance to extend the polygon length the object in pointcloud-object matching [m] adaptive_cruise_control.object_polygon_width_margin
double The distance to extend the polygon width the object in pointcloud-object matching [m] adaptive_cruise_control.valid_estimated_vel_diff_time
double Maximum time difference treated as continuous points in speed estimation using a point cloud [s] adaptive_cruise_control.valid_vel_que_time
double Time width of information used for speed estimation in speed estimation using a point cloud [s] adaptive_cruise_control.valid_estimated_vel_max
double Maximum value of valid speed estimation results in speed estimation using a point cloud [m/s] adaptive_cruise_control.valid_estimated_vel_min
double Minimum value of valid speed estimation results in speed estimation using a point cloud [m/s] adaptive_cruise_control.thresh_vel_to_stop
double Embed a stop line if the maximum speed calculated by ACC is lower than this speed [m/s] adaptive_cruise_control.lowpass_gain_of_upper_velocity
double Lowpass-gain of target velocity adaptive_cruise_control.use_rough_velocity_estimation:
bool Use rough estimated velocity if the velocity estimation is failed (valid only if osp.use_predicted_objects false) adaptive_cruise_control.rough_velocity_rate
double In the rough velocity estimation, the velocity of front car is estimated as self current velocity * this value"},{"location":"planning/obstacle_stop_planner/#flowchart_2","title":"Flowchart","text":"(*1) The target vehicle point is calculated as a closest obstacle PointCloud from ego along the trajectory.
(*2) The sources of velocity estimation can be changed by the following ROS parameters.
adaptive_cruise_control.use_object_to_estimate_vel
adaptive_cruise_control.use_pcl_to_estimate_vel
This module works only when the target point is found in the detection area of the Obstacle stop planner
module.
The first process of this module is to estimate the velocity of the target vehicle point. The velocity estimation uses the velocity information of dynamic objects or the travel distance of the target vehicle point from the previous step. The dynamic object information is primal, and the travel distance estimation is used as a backup in case of the perception failure. If the target vehicle point is contained in the bounding box of a dynamic object geometrically, the velocity of the dynamic object is used as the target point velocity. Otherwise, the target point velocity is calculated by the travel distance of the target point from the previous step; that is (current_position - previous_position) / dt
. Note that this travel distance based estimation fails when the target point is detected in the first time (it mainly happens in the cut-in situation). To improve the stability of the estimation, the median of the calculation result for several steps is used.
If the calculated velocity is within the threshold range, it is used as the target point velocity.
Only when the estimation is succeeded and the estimated velocity exceeds the value of obstacle_stop_velocity_thresh_*
, the distance to the pointcloud from self-position is calculated. For prevent chattering in the mode transition, obstacle_velocity_thresh_to_start_acc
is used for the threshold to start adaptive cruise, and obstacle_velocity_thresh_to_stop_acc
is used for the threshold to stop adaptive cruise. When the calculated distance value exceeds the emergency distance \\(d\\_{emergency}\\) calculated by emergency_stop parameters, target velocity to insert is calculated.
The emergency distance \\(d\\_{emergency}\\) is calculated as follows.
\\(d_{emergency} = d_{margin_{emergency}} + t_{idling_{emergency}} \\cdot v_{ego} + (-\\frac{v_{ego}^2}{2 \\cdot a_{ego_ {emergency}}}) - (-\\frac{v_{obj}^2}{2 \\cdot a_{obj_{emergency}}})\\)
min_dist_stop
emergency_stop_idling_time
emergency_stop_acceleration
obstacle_emergency_stop_acceleration
The target velocity is determined to keep the distance to the obstacle pointcloud from own vehicle at the standard distance \\(d\\_{standard}\\) calculated as following. Therefore, if the distance to the obstacle pointcloud is longer than standard distance, The target velocity becomes higher than the current velocity, and vice versa. For keeping the distance, a PID controller is used.
\\(d_{standard} = d_{margin_{standard}} + t_{idling_{standard}} \\cdot v_{ego} + (-\\frac{v_{ego}^2}{2 \\cdot a_{ego_ {standard}}}) - (-\\frac{v_{obj}^2}{2 \\cdot a_{obj_{standard}}})\\)
min_dist_stop
standard_stop_idling_time
min_standard_acceleration
obstacle_min_standard_acceleration
If the target velocity exceeds the value of thresh_vel_to_stop
, the target velocity is embedded in the trajectory.
Adaptive Cruise Controller
module. If the velocity planning module is updated, please take care of the vehicle's behavior as much as possible and always be ready for overriding.Adaptive Cruise Controller
is depend on object tracking module. Please note that if the object-tracking fails or the tracking result is incorrect, it the possibility that the vehicle behaves dangerously.This node limits the velocity when driving in the direction of an obstacle. For example, it allows to reduce the velocity when driving close to a guard rail in a curve.
Without this node With this node"},{"location":"planning/obstacle_velocity_limiter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Using a parameter min_ttc
(minimum time to collision), the node set velocity limits such that no collision with an obstacle would occur, even without new control inputs for a duration of min_ttc
.
To achieve this, the motion of the ego vehicle is simulated forward in time at each point of the trajectory to create a corresponding footprint. If the footprint collides with some obstacle, the velocity at the trajectory point is reduced such that the new simulated footprint do not have any collision.
"},{"location":"planning/obstacle_velocity_limiter/#simulated-motion-footprint-and-collision-distance","title":"Simulated Motion, Footprint, and Collision Distance","text":"The motion of the ego vehicle is simulated at each trajectory point using the heading
, velocity
, and steering
defined at the point. Footprints are then constructed from these simulations and checked for collision. If a collision is found, the distance from the trajectory point is used to calculate the adjusted velocity that would produce a collision-free footprint. Parameter simulation.distance_method
allow to switch between an exact distance calculation and a less expensive approximation using a simple euclidean distance.
Two models can be selected with parameter simulation.model
for simulating the motion of the vehicle: a simple particle model and a more complicated bicycle model.
The particle model uses the constant heading and velocity of the vehicle at a trajectory point to simulate the future motion. The simulated forward motion corresponds to a straight line and the footprint to a rectangle.
"},{"location":"planning/obstacle_velocity_limiter/#footprint","title":"Footprint","text":"The rectangle footprint is built from 2 lines parallel to the simulated forward motion and at a distance of half the vehicle width.
"},{"location":"planning/obstacle_velocity_limiter/#distance","title":"Distance","text":"When a collision point is found within the footprint, the distance is calculated as described in the following figure.
"},{"location":"planning/obstacle_velocity_limiter/#bicycle-model","title":"Bicycle Model","text":"The bicycle model uses the constant heading, velocity, and steering of the vehicle at a trajectory point to simulate the future motion. The simulated forward motion corresponds to an arc around the circle of curvature associated with the steering. Uncertainty in the steering can be introduced with the simulation.steering_offset
parameter which will generate a range of motion from a left-most to a right-most steering. This results in 3 curved lines starting from the same trajectory point. A parameter simulation.nb_points
is used to adjust the precision of these lines, with a minimum of 2
resulting in straight lines and higher values increasing the precision of the curves.
By default, the steering values contained in the trajectory message are used. Parameter trajectory_preprocessing.calculate_steering_angles
allows to recalculate these values when set to true
.
The footprint of the bicycle model is created from lines parallel to the left and right simulated motion at a distance of half the vehicle width. In addition, the two points on the left and right of the end point of the central simulated motion are used to complete the polygon.
"},{"location":"planning/obstacle_velocity_limiter/#distance_1","title":"Distance","text":"The distance to a collision point is calculated by finding the curvature circle passing through the trajectory point and the collision point.
"},{"location":"planning/obstacle_velocity_limiter/#obstacle-detection","title":"Obstacle Detection","text":"Obstacles are represented as points or linestrings (i.e., sequence of points) around the obstacles and are constructed from an occupancy grid, a pointcloud, or the lanelet map. The lanelet map is always checked for obstacles but the other source is switched using parameter obstacles.dynamic_source
.
To efficiently find obstacles intersecting with a footprint, they are stored in a R-tree. Two trees are used, one for the obstacle points, and one for the obstacle linestrings (which are decomposed into segments to simplify the R-tree).
"},{"location":"planning/obstacle_velocity_limiter/#obstacle-masks","title":"Obstacle masks","text":""},{"location":"planning/obstacle_velocity_limiter/#dynamic-obstacles","title":"Dynamic obstacles","text":"Moving obstacles such as other cars should not be considered by this module. These obstacles are detected by the perception modules and represented as polygons. Obstacles inside these polygons are ignored.
Only dynamic obstacles with a velocity above parameter obstacles.dynamic_obstacles_min_vel
are removed.
To deal with delays and precision errors, the polygons can be enlarged with parameter obstacles.dynamic_obstacles_buffer
.
Obstacles that are not inside any forward simulated footprint are ignored if parameter obstacles.filter_envelope
is set to true. The safety envelope polygon is built from all the footprints and used as a positive mask on the occupancy grid or pointcloud.
This option can reduce the total number of obstacles which reduces the cost of collision detection. However, the cost of masking the envelope is usually too high to be interesting.
"},{"location":"planning/obstacle_velocity_limiter/#obstacles-on-the-ego-path","title":"Obstacles on the ego path","text":"If parameter obstacles.ignore_obstacles_on_path
is set to true
, a polygon mask is built from the trajectory and the vehicle dimension. Any obstacle in this polygon is ignored.
The size of the polygon can be increased using parameter obstacles.ignore_extra_distance
which is added to the vehicle lateral offset.
This option is a bit expensive and should only be used in case of noisy dynamic obstacles where obstacles are wrongly detected on the ego path, causing unwanted velocity limits.
"},{"location":"planning/obstacle_velocity_limiter/#lanelet-map","title":"Lanelet Map","text":"Information about static obstacles can be stored in the Lanelet map using the value of the type
tag of linestrings. If any linestring has a type
with one of the value from parameter obstacles.static_map_tags
, then it will be used as an obstacle.
Obstacles from the lanelet map are not impacted by the masks.
"},{"location":"planning/obstacle_velocity_limiter/#occupancy-grid","title":"Occupancy Grid","text":"Masking is performed by iterating through the cells inside each polygon mask using the grid_map_utils::PolygonIterator
function. A threshold is then applied to only keep cells with an occupancy value above parameter obstacles.occupancy_grid_threshold
. Finally, the image is converted to an image and obstacle linestrings are extracted using the opencv function findContour
.
Masking is performed using the pcl::CropHull
function. Points from the pointcloud are then directly used as obstacles.
If a collision is found, the velocity at the trajectory point is adjusted such that the resulting footprint would no longer collide with an obstacle: \\(velocity = \\frac{dist\\_to\\_collision}{min\\_ttc}\\)
To prevent sudden deceleration of the ego vehicle, the parameter max_deceleration
limits the deceleration relative to the current ego velocity. For a trajectory point occurring at a duration t
in the future (calculated from the original velocity profile), the adjusted velocity cannot be set lower than \\(v_{current} - t * max\\_deceleration\\).
Furthermore, a parameter min_adjusted_velocity
provides a lower bound on the modified velocity.
The node only modifies part of the input trajectory, starting from the current ego position. Parameter trajectory_preprocessing.start_distance
is used to adjust how far ahead of the ego position the velocities will start being modified. Parameters trajectory_preprocessing.max_length
and trajectory_preprocessing.max_duration
are used to control how much of the trajectory will see its velocity adjusted.
To reduce computation cost at the cost of precision, the trajectory can be downsampled using parameter trajectory_preprocessing.downsample_factor
. For example a value of 1
means all trajectory points will be evaluated while a value of 10
means only 1/10th of the points will be evaluated.
~/input/trajectory
autoware_auto_planning_msgs/Trajectory
Reference trajectory ~/input/occupancy_grid
nav_msgs/OccupancyGrid
Occupancy grid with obstacle information ~/input/obstacle_pointcloud
sensor_msgs/PointCloud2
Pointcloud containing only obstacle points ~/input/dynamic_obstacles
autoware_auto_perception_msgs/PredictedObjects
Dynamic objects ~/input/odometry
nav_msgs/Odometry
Odometry used to retrieve the current ego velocity ~/input/map
autoware_auto_mapping_msgs/HADMapBin
Vector map used to retrieve static obstacles"},{"location":"planning/obstacle_velocity_limiter/#outputs","title":"Outputs","text":"Name Type Description ~/output/trajectory
autoware_auto_planning_msgs/Trajectory
Trajectory with adjusted velocities ~/output/debug_markers
visualization_msgs/MarkerArray
Debug markers (envelopes, obstacle polygons) ~/output/runtime_microseconds
tier4_debug_msgs/Float64
Time taken to calculate the trajectory (in microseconds)"},{"location":"planning/obstacle_velocity_limiter/#parameters","title":"Parameters","text":"Name Type Description min_ttc
float [s] required minimum time with no collision at each point of the trajectory assuming constant heading and velocity. distance_buffer
float [m] required distance buffer with the obstacles. min_adjusted_velocity
float [m/s] minimum adjusted velocity this node can set. max_deceleration
float [m/s\u00b2] maximum deceleration an adjusted velocity can cause. trajectory_preprocessing.start_distance
float [m] controls from which part of the trajectory (relative to the current ego pose) the velocity is adjusted. trajectory_preprocessing.max_length
float [m] controls the maximum length (starting from the start_distance
) where the velocity is adjusted. trajectory_preprocessing.max_distance
float [s] controls the maximum duration (measured from the start_distance
) where the velocity is adjusted. trajectory_preprocessing.downsample_factor
int trajectory downsampling factor to allow tradeoff between precision and performance. trajectory_preprocessing.calculate_steering_angle
bool if true, the steering angles of the trajectory message are not used but are recalculated. simulation.model
string model to use for forward simulation. Either \"particle\" or \"bicycle\". simulation.distance_method
string method to use for calculating distance to collision. Either \"exact\" or \"approximation\". simulation.steering_offset
float offset around the steering used by the bicycle model. simulation.nb_points
int number of points used to simulate motion with the bicycle model. obstacles.dynamic_source
string source of dynamic obstacle used for collision checking. Can be \"occupancy_grid\", \"point_cloud\", or \"static_only\" (no dynamic obstacle). obstacles.occupancy_grid_threshold
int value in the occupancy grid above which a cell is considered an obstacle. obstacles.dynamic_obstacles_buffer
float buffer around dynamic obstacles used when masking an obstacle in order to prevent noise. obstacles.dynamic_obstacles_min_vel
float velocity above which to mask a dynamic obstacle. obstacles.static_map_tags
string list linestring of the lanelet map with this tags are used as obstacles. obstacles.filter_envelope
bool wether to use the safety envelope to filter the dynamic obstacles source."},{"location":"planning/obstacle_velocity_limiter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The velocity profile produced by this node is not meant to be a realistic velocity profile and can contain sudden jumps of velocity with no regard for acceleration and jerk. This velocity profile is meant to be used as an upper bound on the actual velocity of the vehicle.
"},{"location":"planning/obstacle_velocity_limiter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":"The critical case for this node is when an obstacle is falsely detected very close to the trajectory such that the corresponding velocity suddenly becomes very low. This can cause a sudden brake and two mechanisms can be used to mitigate these errors.
Parameter min_adjusted_velocity
allow to set a minimum to the adjusted velocity, preventing the node to slow down the vehicle too much. Parameter max_deceleration
allow to set a maximum deceleration (relative to the current ego velocity) that the adjusted velocity would incur.
This package contains code to smooth a path or trajectory.
"},{"location":"planning/path_smoother/#features","title":"Features","text":""},{"location":"planning/path_smoother/#elastic-band","title":"Elastic Band","text":"More details about the elastic band can be found here.
"},{"location":"planning/path_smoother/docs/eb/","title":"Elastic band","text":""},{"location":"planning/path_smoother/docs/eb/#elastic-band","title":"Elastic band","text":""},{"location":"planning/path_smoother/docs/eb/#abstract","title":"Abstract","text":"Elastic band smooths the input path. Since the latter optimization (model predictive trajectory) is calculated on the frenet frame, path smoothing is applied here so that the latter optimization will be stable.
Note that this smoothing process does not consider collision checking. Therefore the output path may have a collision with road boundaries or obstacles.
"},{"location":"planning/path_smoother/docs/eb/#flowchart","title":"Flowchart","text":""},{"location":"planning/path_smoother/docs/eb/#general-parameters","title":"General parameters","text":"Parameter Type Descriptioneb.common.num_points
int points for elastic band optimization eb.common.delta_arc_length
double delta arc length for elastic band optimization"},{"location":"planning/path_smoother/docs/eb/#parameters-for-optimization","title":"Parameters for optimization","text":"Parameter Type Description eb.option.enable_warm_start
bool flag to use warm start eb.weight.smooth_weight
double weight for smoothing eb.weight.lat_error_weight
double weight for minimizing the lateral error"},{"location":"planning/path_smoother/docs/eb/#parameters-for-validation","title":"Parameters for validation","text":"Parameter Type Description eb.option.enable_optimization_validation
bool flag to validate optimization eb.validation.max_error
double max lateral error by optimization"},{"location":"planning/path_smoother/docs/eb/#formulation","title":"Formulation","text":""},{"location":"planning/path_smoother/docs/eb/#objective-function","title":"Objective function","text":"We formulate a quadratic problem minimizing the diagonal length of the rhombus on each point generated by the current point and its previous and next points, shown as the red vector's length.
Assuming that \\(k\\)'th point is \\(\\boldsymbol{p}_k = (x_k, y_k)\\), the objective function is as follows.
\\[ \\begin{align} \\ J & = \\min \\sum_{k=1}^{n-2} ||(\\boldsymbol{p}_{k+1} - \\boldsymbol{p}_{k}) - (\\boldsymbol{p}_{k} - \\boldsymbol{p}_{k-1})||^2 \\\\ \\ & = \\min \\sum_{k=1}^{n-2} ||\\boldsymbol{p}_{k+1} - 2 \\boldsymbol{p}_{k} + \\boldsymbol{p}_{k-1}||^2 \\\\ \\ & = \\min \\sum_{k=1}^{n-2} \\{(x_{k+1} - x_k + x_{k-1})^2 + (y_{k+1} - y_k + y_{k-1})^2\\} \\\\ \\ & = \\min \\begin{pmatrix} \\ x_0 \\\\ \\ x_1 \\\\ \\ x_2 \\\\ \\vdots \\\\ \\ x_{n-3}\\\\ \\ x_{n-2} \\\\ \\ x_{n-1} \\\\ \\ y_0 \\\\ \\ y_1 \\\\ \\ y_2 \\\\ \\vdots \\\\ \\ y_{n-3}\\\\ \\ y_{n-2} \\\\ \\ y_{n-1} \\\\ \\end{pmatrix}^T \\begin{pmatrix} 1 & -2 & 1 & 0 & \\dots& \\\\ -2 & 5 & -4 & 1 & 0 &\\dots \\\\ 1 & -4 & 6 & -4 & 1 & \\\\ 0 & 1 & -4 & 6 & -4 & \\\\ \\vdots & 0 & \\ddots&\\ddots& \\ddots \\\\ & \\vdots & & & \\\\ & & & 1 & -4 & 6 & -4 & 1 \\\\ & & & & 1 & -4 & 5 & -2 \\\\ & & & & & 1 & -2 & 1& \\\\ & & & & & & & &1 & -2 & 1 & 0 & \\dots& \\\\ & & & & & & & &-2 & 5 & -4 & 1 & 0 &\\dots \\\\ & & & & & & & &1 & -4 & 6 & -4 & 1 & \\\\ & & & & & & & &0 & 1 & -4 & 6 & -4 & \\\\ & & & & & & & &\\vdots & 0 & \\ddots&\\ddots& \\ddots \\\\ & & & & & & & & & \\vdots & & & \\\\ & & & & & & & & & & & 1 & -4 & 6 & -4 & 1 \\\\ & & & & & & & & & & & & 1 & -4 & 5 & -2 \\\\ & & & & & & & & & & & & & 1 & -2 & 1& \\\\ \\end{pmatrix} \\begin{pmatrix} \\ x_0 \\\\ \\ x_1 \\\\ \\ x_2 \\\\ \\vdots \\\\ \\ x_{n-3}\\\\ \\ x_{n-2} \\\\ \\ x_{n-1} \\\\ \\ y_0 \\\\ \\ y_1 \\\\ \\ y_2 \\\\ \\vdots \\\\ \\ y_{n-3}\\\\ \\ y_{n-2} \\\\ \\ y_{n-1} \\\\ \\end{pmatrix} \\end{align} \\]"},{"location":"planning/path_smoother/docs/eb/#constraint","title":"Constraint","text":"The distance that each point can move is limited so that the path will not changed a lot but will be smoother. In detail, the longitudinal distance that each point can move is zero, and the lateral distance is parameterized as eb.clearance.clearance_for_fix
, eb.clearance.clearance_for_joint
and eb.clearance.clearance_for_smooth
.
The following figure describes how to constrain the lateral distance to move. The red line is where the point can move. The points for the upper and lower bound are described as \\((x_k^u, y_k^u)\\) and \\((x_k^l, y_k^l)\\), respectively.
Based on the line equation whose slope angle is \\(\\theta_k\\) and that passes through \\((x_k, y_k)\\), \\((x_k^u, y_k^u)\\) and \\((x_k^l, y_k^l)\\), the lateral constraint is formulated as follows.
\\[ C_k^l \\leq C_k \\leq C_k^u \\]In addition, the beginning point is fixed and the end point as well if the end point is considered as the goal. This constraint can be applied with the upper equation by changing the distance that each point can move.
"},{"location":"planning/path_smoother/docs/eb/#debug","title":"Debug","text":"This package contains several planning-related debug tools.
The trajectory_analyzer
visualizes the information (speed, curvature, yaw, etc) along the trajectory. This feature would be helpful for purposes such as \"investigating the reason why the vehicle decelerates here\". This feature employs the OSS PlotJuggler.
This is to visualize stop factor and reason. see the details
"},{"location":"planning/planning_debug_tools/#how-to-use","title":"How to use","text":"please launch the analyzer node
ros2 launch planning_debug_tools trajectory_analyzer.launch.xml\n
and visualize the analyzed data on the plot juggler following below.
"},{"location":"planning/planning_debug_tools/#setup-plotjuggler","title":"setup PlotJuggler","text":"For the first time, please add the following code to reactive script and save it as the picture below! (Looking for the way to automatically load the configuration file...)
You can customize what you plot by editing this code.
in Global code
behavior_path = '/planning/scenario_planning/lane_driving/behavior_planning/path_with_lane_id/debug_info'\nbehavior_velocity = '/planning/scenario_planning/lane_driving/behavior_planning/path/debug_info'\nmotion_avoid = '/planning/scenario_planning/lane_driving/motion_planning/obstacle_avoidance_planner/trajectory/debug_info'\nmotion_smoother_latacc = '/planning/scenario_planning/motion_velocity_smoother/debug/trajectory_lateral_acc_filtered/debug_info'\nmotion_smoother = '/planning/scenario_planning/trajectory/debug_info'\n
in function(tracker_time)
PlotCurvatureOverArclength('k_behavior_path', behavior_path, tracker_time)\nPlotCurvatureOverArclength('k_behavior_velocity', behavior_velocity, tracker_time)\nPlotCurvatureOverArclength('k_motion_avoid', motion_avoid, tracker_time)\nPlotCurvatureOverArclength('k_motion_smoother', motion_smoother, tracker_time)\n\nPlotVelocityOverArclength('v_behavior_path', behavior_path, tracker_time)\nPlotVelocityOverArclength('v_behavior_velocity', behavior_velocity, tracker_time)\nPlotVelocityOverArclength('v_motion_avoid', motion_avoid, tracker_time)\nPlotVelocityOverArclength('v_motion_smoother_latacc', motion_smoother_latacc, tracker_time)\nPlotVelocityOverArclength('v_motion_smoother', motion_smoother, tracker_time)\n\nPlotAccelerationOverArclength('a_behavior_path', behavior_path, tracker_time)\nPlotAccelerationOverArclength('a_behavior_velocity', behavior_velocity, tracker_time)\nPlotAccelerationOverArclength('a_motion_avoid', motion_avoid, tracker_time)\nPlotAccelerationOverArclength('a_motion_smoother_latacc', motion_smoother_latacc, tracker_time)\nPlotAccelerationOverArclength('a_motion_smoother', motion_smoother, tracker_time)\n\nPlotYawOverArclength('yaw_behavior_path', behavior_path, tracker_time)\nPlotYawOverArclength('yaw_behavior_velocity', behavior_velocity, tracker_time)\nPlotYawOverArclength('yaw_motion_avoid', motion_avoid, tracker_time)\nPlotYawOverArclength('yaw_motion_smoother_latacc', motion_smoother_latacc, tracker_time)\nPlotYawOverArclength('yaw_motion_smoother', motion_smoother, tracker_time)\n\nPlotCurrentVelocity('localization_kinematic_state', '/localization/kinematic_state', tracker_time)\n
in Function Library
function PlotValue(name, path, timestamp, value)\n new_series = ScatterXY.new(name)\n index = 0\n while(true) do\n series_k = TimeseriesView.find( string.format( \"%s/\"..value..\".%d\", path, index) )\n series_s = TimeseriesView.find( string.format( \"%s/arclength.%d\", path, index) )\n series_size = TimeseriesView.find( string.format( \"%s/size\", path) )\n\n if series_k == nil or series_s == nil then break end\n\n k = series_k:atTime(timestamp)\n s = series_s:atTime(timestamp)\n size = series_size:atTime(timestamp)\n\n if index >= size then break end\n\n new_series:push_back(s,k)\n index = index+1\n end\nend\n\nfunction PlotCurvatureOverArclength(name, path, timestamp)\n PlotValue(name, path, timestamp,\"curvature\")\nend\n\nfunction PlotVelocityOverArclength(name, path, timestamp)\n PlotValue(name, path, timestamp,\"velocity\")\nend\n\nfunction PlotAccelerationOverArclength(name, path, timestamp)\n PlotValue(name, path, timestamp,\"acceleration\")\nend\n\nfunction PlotYawOverArclength(name, path, timestamp)\n PlotValue(name, path, timestamp,\"yaw\")\nend\n\nfunction PlotCurrentVelocity(name, kinematics_name, timestamp)\n new_series = ScatterXY.new(name)\n series_v = TimeseriesView.find( string.format( \"%s/twist/twist/linear/x\", kinematics_name))\n if series_v == nil then\n print(\"error\")\n return\n end\n v = series_v:atTime(timestamp)\n new_series:push_back(0.0, v)\nend\n
Then, run the plot juggler.
"},{"location":"planning/planning_debug_tools/#how-to-customize-the-plot","title":"How to customize the plot","text":"Add Path/PathWithLaneIds/Trajectory topics you want to plot in the trajectory_analyzer.launch.xml
, then the analyzed topics for these messages will be published with TrajectoryDebugINfo.msg
type. You can then visualize these data by editing the reactive script on the PlotJuggler.
The version of the plotJuggler must be > 3.5.0
This node prints the velocity information indicated by planning/control modules on a terminal. For trajectories calculated by planning modules, the target velocity on the trajectory point which is closest to the ego vehicle is printed. For control commands calculated by control modules, the target velocity and acceleration is directly printed. This feature would be helpful for purposes such as \"investigating the reason why the vehicle does not move\".
You can launch by
ros2 run planning_debug_tools closest_velocity_checker.py\n
"},{"location":"planning/planning_debug_tools/#trajectory-visualizer","title":"Trajectory visualizer","text":"The old version of the trajectory analyzer. It is written in Python and more flexible, but very slow.
"},{"location":"planning/planning_debug_tools/#for-other-use-case-experimental","title":"For other use case (experimental)","text":"To see behavior velocity planner's internal plath with lane id add below example value to behavior velocity analyzer and set is_publish_debug_path: true
crosswalk ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/crosswalk/debug_info'\nintersection ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/intersection/debug_info'\ntraffic_light ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/traffic_light/debug_info'\nmerge_from_private ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/merge_from_private/debug_info'\nocclusion_spot ='/planning/scenario_planning/lane_driving/behavior_planning/behavior_velocity_planner/debug/path_with_lane_id/occlusion_spot/debug_info'\n
PlotVelocityOverArclength('v_crosswalk', crosswalk, tracker_time)\nPlotVelocityOverArclength('v_intersection', intersection, tracker_time)\nPlotVelocityOverArclength('v_merge_from_private', merge_from_private, tracker_time)\nPlotVelocityOverArclength('v_traffic_light', traffic_light, tracker_time)\nPlotVelocityOverArclength('v_occlusion', occlusion_spot, tracker_time)\n\nPlotYawOverArclength('yaw_crosswalk', crosswalk, tracker_time)\nPlotYawOverArclength('yaw_intersection', intersection, tracker_time)\nPlotYawOverArclength('yaw_merge_from_private', merge_from_private, tracker_time)\nPlotYawOverArclength('yaw_traffic_light', traffic_light, tracker_time)\nPlotYawOverArclength('yaw_occlusion', occlusion_spot, tracker_time)\n\nPlotCurrentVelocity('localization_kinematic_state', '/localization/kinematic_state', tracker_time)\n
"},{"location":"planning/planning_debug_tools/#perception-reproducer","title":"Perception reproducer","text":"This script can overlay the perception results from the rosbag on the planning simulator synchronized with the simulator's ego pose.
In detail, the ego pose in the rosbag which is closest to the current ego pose in the simulator is calculated. The perception results at the timestamp of the closest ego pose is extracted, and published.
"},{"location":"planning/planning_debug_tools/#how-to-use_1","title":"How to use","text":"First, launch the planning simulator, and put the ego pose. Then, run the script according to the following command.
By designating a rosbag, perception reproducer can be launched.
ros2 run planning_debug_tools perception_reproducer.py -b <bag-file>\n
You can designate multiple rosbags in the directory.
ros2 run planning_debug_tools perception_reproducer.py -b <dir-to-bag-files>\n
Instead of publishing predicted objects, you can publish detected/tracked objects by designating -d
or -t
, respectively.
A part of the feature is under development.
This script can overlay the perception results from the rosbag on the planning simulator.
In detail, this script publishes the data at a certain timestamp from the rosbag. The timestamp will increase according to the real time without any operation. By using the GUI, you can modify the timestamp by pausing, changing the rate or going back into the past.
"},{"location":"planning/planning_debug_tools/#how-to-use_2","title":"How to use","text":"First, launch the planning simulator, and put the ego pose. Then, run the script according to the following command.
By designating a rosbag, perception replayer can be launched. The GUI is launched as well with which a timestamp of rosbag can be managed.
ros2 run planning_debug_tools perception_replayer.py -b <bag-file>\n
You can designate multiple rosbags in the directory.
ros2 run planning_debug_tools perception_replayer.py -b <dir-to-bag-files>\n
Instead of publishing predicted objects, you can publish detected/tracked objects by designating -d
or -t
, respectively.
The purpose of the Processing Time Subscriber is to monitor and visualize the processing times of various ROS 2 topics in a system. By providing a real-time terminal-based visualization, users can easily confirm the processing time performance as in the picture below.
You can run the program by the following command.
ros2 run planning_debug_tools processing_time_checker.py -f <update-hz> -m <max-bar-time>\n
This program subscribes to ROS 2 topics that have a suffix of processing_time_ms
.
The program allows users to customize two parameters via command-line arguments: the update rate (-f) and the maximum bar time (-m).
By adjusting these parameters, users can tailor the display to their specific monitoring needs.
"},{"location":"planning/planning_debug_tools/#logging-level-updater","title":"Logging Level Updater","text":"The purpose of the Logging Level Updater is to update the logging level of the planning modules via ROS 2 service. Users can easily update the logging level for debugging.
ros2 run planning_debug_tools update_logger_level.sh <module-name> <logger-level>\n
<logger-level>
will be DEBUG
, INFO
, WARN
, or ERROR
.
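For example, to set a module's logger to DEBUG (behavior_path_planner is used here as an assumed module name):
ros2 run planning_debug_tools update_logger_level.sh behavior_path_planner DEBUG\n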
If you misspell the name of the planning module, the script will show the available modules.
"},{"location":"planning/planning_debug_tools/doc-stop-reason-visualizer/","title":"Doc stop reason visualizer","text":""},{"location":"planning/planning_debug_tools/doc-stop-reason-visualizer/#stop_reason_visualizer","title":"stop_reason_visualizer","text":"This module is to visualize stop factor quickly without selecting correct debug markers. This is supposed to use with virtual wall marker like below.
"},{"location":"planning/planning_debug_tools/doc-stop-reason-visualizer/#how-to-use","title":"How to use","text":"Run this node.
ros2 run planning_debug_tools stop_reason_visualizer_exe\n
Add stop reason debug marker from rviz.
Note: the ros2 process can sometimes only be terminated with killall stop_reason_visualizer_exe
Reference
"},{"location":"planning/planning_test_utils/","title":"Planning Interface Test Manager","text":""},{"location":"planning/planning_test_utils/#planning-interface-test-manager","title":"Planning Interface Test Manager","text":""},{"location":"planning/planning_test_utils/#background","title":"Background","text":"In each node of the planning module, when exceptional input, such as unusual routes or significantly deviated ego-position, is given, the node may not be prepared for such input and could crash. As a result, debugging node crashes can be time-consuming. For example, if an empty trajectory is given as input and it was not anticipated during implementation, the node might crash due to the unaddressed exceptional input when changes are merged, during scenario testing or while the system is running on an actual vehicle.
"},{"location":"planning/planning_test_utils/#purpose","title":"Purpose","text":"The purpose is to provide a utility for implementing tests to ensure that node operates correctly when receiving exceptional input. By utilizing this utility and implementing tests for exceptional input, the purpose is to reduce bugs that are only discovered when actually running the system, by requiring measures for exceptional input before merging PRs.
"},{"location":"planning/planning_test_utils/#features","title":"Features","text":""},{"location":"planning/planning_test_utils/#confirmation-of-normal-operation","title":"Confirmation of normal operation","text":"For the test target node, confirm that the node operates correctly and publishes the required messages for subsequent nodes. To do this, test_node publish the necessary messages and confirm that the node's output is being published.
"},{"location":"planning/planning_test_utils/#robustness-confirmation-for-special-inputs","title":"Robustness confirmation for special inputs","text":"After confirming normal operation, ensure that the test target node does not crash when given exceptional input. To do this, provide exceptional input from the test_node and confirm that the node does not crash.
(WIP)
"},{"location":"planning/planning_test_utils/#usage","title":"Usage","text":"TEST(PlanningModuleInterfaceTest, NodeTestWithExceptionTrajectory)\n{\nrclcpp::init(0, nullptr);\n\n// instantiate test_manager with PlanningInterfaceTestManager type\nauto test_manager = std::make_shared<planning_test_utils::PlanningInterfaceTestManager>();\n\n// get package directories for necessary configuration files\nconst auto planning_test_utils_dir =\nament_index_cpp::get_package_share_directory(\"planning_test_utils\");\nconst auto target_node_dir =\nament_index_cpp::get_package_share_directory(\"target_node\");\n\n// set arguments to get the config file\nnode_options.arguments(\n{\"--ros-args\", \"--params-file\",\nplanning_test_utils_dir + \"/config/test_vehicle_info.param.yaml\", \"--params-file\",\nplanning_validator_dir + \"/config/planning_validator.param.yaml\"});\n\n// instantiate the TargetNode with node_options\nauto test_target_node = std::make_shared<TargetNode>(node_options);\n\n// publish the necessary topics from test_manager second argument is topic name\ntest_manager->publishOdometry(test_target_node, \"/localization/kinematic_state\");\ntest_manager->publishMaxVelocity(\ntest_target_node, \"motion_velocity_smoother/input/external_velocity_limit_mps\");\n\n// set scenario_selector's input topic name(this topic is changed to test node)\ntest_manager->setTrajectoryInputTopicName(\"input/parking/trajectory\");\n\n// test with normal trajectory\nASSERT_NO_THROW(test_manager->testWithNominalTrajectory(test_target_node));\n\n// make sure target_node is running\nEXPECT_GE(test_manager->getReceivedTopicNum(), 1);\n\n// test with trajectory input with empty/one point/overlapping point\nASSERT_NO_THROW(test_manager->testWithAbnormalTrajectory(test_target_node));\n\n// shutdown ROS context\nrclcpp::shutdown();\n}\n
"},{"location":"planning/planning_test_utils/#implemented-tests","title":"Implemented tests","text":"Node Test name exceptional input output Exceptional input pattern planning_validator NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points motion_velocity_smoother NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_cruise_planner NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_stop_planner NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_velocity_limiter NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points obstacle_avoidance_planner NodeTestWithExceptionTrajectory trajectory trajectory Empty, single point, path with duplicate points scenario_selector NodeTestWithExceptionTrajectoryLaneDrivingMode NodeTestWithExceptionTrajectoryParkingMode trajectory scenario Empty, single point, path with duplicate points for scenarios:LANEDRIVING and PARKING freespace_planner NodeTestWithExceptionRoute route trajectory Empty route behavior_path_planner NodeTestWithExceptionRoute NodeTestWithOffTrackEgoPose route route odometry Empty route Off-lane ego-position behavior_velocity_planner NodeTestWithExceptionPathWithLaneID path_with_lane_id path Empty path"},{"location":"planning/planning_test_utils/#important-notes","title":"Important Notes","text":"During test execution, when launching a node, parameters are loaded from the parameter file within each package. Therefore, when adding parameters, it is necessary to add the required parameters to the parameter file in the target node package. This is to prevent the node from being unable to launch if there are missing parameters when retrieving them from the parameter file during node launch.
"},{"location":"planning/planning_test_utils/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"(WIP)
"},{"location":"planning/planning_topic_converter/","title":"Planning Topic Converter","text":""},{"location":"planning/planning_topic_converter/#planning-topic-converter","title":"Planning Topic Converter","text":""},{"location":"planning/planning_topic_converter/#purpose","title":"Purpose","text":"This package provides tools that convert topic type among types are defined in https://github.com/tier4/autoware_auto_msgs.
"},{"location":"planning/planning_topic_converter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/planning_topic_converter/#usage-example","title":"Usage example","text":"The tools in this package are provided as composable ROS 2 component nodes, so that they can be spawned into an existing process, launched from launch files, or invoked from the command line.
<load_composable_node target=\"container_name\">\n<composable_node pkg=\"planning_topic_converter\" plugin=\"planning_topic_converter::PathToTrajectory\" name=\"path_to_trajectory_converter\" namespace=\"\">\n<!-- params -->\n<param name=\"input_topic\" value=\"foo\"/>\n<param name=\"output_topic\" value=\"bar\"/>\n<!-- composable node config -->\n<extra_arg name=\"use_intra_process_comms\" value=\"false\"/>\n</composable_node>\n</load_composable_node>\n
"},{"location":"planning/planning_topic_converter/#parameters","title":"Parameters","text":"Name Type Description input_topic
string input topic name. output_topic
string output topic name."},{"location":"planning/planning_topic_converter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"planning/planning_topic_converter/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":""},{"location":"planning/planning_validator/","title":"Planning Validator","text":""},{"location":"planning/planning_validator/#planning-validator","title":"Planning Validator","text":"The planning_validator
is a module that checks the validity of a trajectory before it is published. The status of the validation can be viewed in the /diagnostics
and /validation_status
topics. When an invalid trajectory is detected, the planning_validator
will process the trajectory following the selected option: \"0. publish the trajectory as it is\", \"1. stop publishing the trajectory\", \"2. publish the last validated trajectory\".
The following features are supported for trajectory validation and can have thresholds set by parameters:
The following features are to be implemented.
The planning_validator
takes in the following inputs:
~/input/kinematics
nav_msgs/Odometry ego pose and twist ~/input/trajectory
autoware_auto_planning_msgs/Trajectory target trajectory to be validated in this node"},{"location":"planning/planning_validator/#outputs","title":"Outputs","text":"It outputs the following:
Name Type Description~/output/trajectory
autoware_auto_planning_msgs/Trajectory validated trajectory ~/output/validation_status
planning_validator/PlanningValidatorStatus validator status to inform the reason why the trajectory is valid/invalid /diagnostics
diagnostic_msgs/DiagnosticStatus diagnostics to report errors"},{"location":"planning/planning_validator/#parameters","title":"Parameters","text":"The following parameters can be set for the planning_validator
:
invalid_trajectory_handling_type
int set the operation when the invalid trajectory is detected. 0: publish the trajectory even if it is invalid, 1: stop publishing the trajectory, 2: publish the last validated trajectory. 0 publish_diag
bool the Diag will be set to ERROR when the number of consecutive invalid trajectory exceeds this threshold. (For example, threshold = 1 means, even if the trajectory is invalid, the Diag will not be ERROR if the next trajectory is valid.) true diag_error_count_threshold
int if true, diagnostics msg is published. true display_on_terminal
bool show error msg on terminal true"},{"location":"planning/planning_validator/#algorithm-parameters","title":"Algorithm parameters","text":""},{"location":"planning/planning_validator/#thresholds","title":"Thresholds","text":"The input trajectory is detected as invalid if the index exceeds the following thresholds.
Name Type Description Default valuethresholds.interval
double invalid threshold of the distance of two neighboring trajectory points [m] 100.0 thresholds.relative_angle
double invalid threshold of the relative angle of two neighboring trajectory points [rad] 2.0 thresholds.curvature
double invalid threshold of the curvature in each trajectory point [1/m] 1.0 thresholds.lateral_acc
double invalid threshold of the lateral acceleration in each trajectory point [m/ss] 9.8 thresholds.longitudinal_max_acc
double invalid threshold of the maximum longitudinal acceleration in each trajectory point [m/ss] 9.8 thresholds.longitudinal_min_acc
double invalid threshold of the minimum longitudinal deceleration in each trajectory point [m/ss] -9.8 thresholds.steering
double invalid threshold of the steering angle in each trajectory point [rad] 1.414 thresholds.steering_rate
double invalid threshold of the steering angle rate in each trajectory point [rad/s] 10.0 thresholds.velocity_deviation
double invalid threshold of the velocity deviation between the ego velocity and the trajectory point closest to ego [m/s] 100.0 thresholds.distance_deviation
double invalid threshold of the distance deviation between the ego position and the trajectory point closest to ego [m] 100.0"},{"location":"planning/route_handler/","title":"route handler","text":""},{"location":"planning/route_handler/#route-handler","title":"route handler","text":"route_handler
is a library for calculating the driving route on the lanelet map.
RTC Interface is an interface to publish the decision status of behavior planning modules and receive execution commands from outside the autonomous driving system.
"},{"location":"planning/rtc_interface/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/rtc_interface/#usage-example","title":"Usage example","text":"// Generate instance (in this example, \"intersection\" is selected)\nrtc_interface::RTCInterface rtc_interface(node, \"intersection\");\n\n// Generate UUID\nconst unique_identifier_msgs::msg::UUID uuid = generateUUID(getModuleId());\n\n// Repeat while module is running\nwhile (...) {\n// Get safety status of the module corresponding to the module id\nconst bool safe = ...\n\n// Get distance to the object corresponding to the module id\nconst double start_distance = ...\nconst double finish_distance = ...\n\n// Get time stamp\nconst rclcpp::Time stamp = ...\n\n// Update status\nrtc_interface.updateCooperateStatus(uuid, safe, start_distance, finish_distance, stamp);\n\nif (rtc_interface.isActivated(uuid)) {\n// Execute planning\n} else {\n// Stop planning\n}\n// Get time stamp\nconst rclcpp::Time stamp = ...\n\n// Publish status topic\nrtc_interface.publishCooperateStatus(stamp);\n}\n\n// Remove the status from array\nrtc_interface.removeCooperateStatus(uuid);\n
"},{"location":"planning/rtc_interface/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/rtc_interface/#rtcinterface-constructor","title":"RTCInterface (Constructor)","text":"rtc_interface::RTCInterface(rclcpp::Node & node, const std::string & name);\n
"},{"location":"planning/rtc_interface/#description","title":"Description","text":"A constructor for rtc_interface::RTCInterface
.
node
: Node calling this interfacename
: Name of cooperate status array topic and cooperate commands service~/{name}/cooperate_status
~/{name}/cooperate_commands
An instance of RTCInterface
rtc_interface::publishCooperateStatus(const rclcpp::Time & stamp)\n
"},{"location":"planning/rtc_interface/#description_1","title":"Description","text":"Publish registered cooperate status.
"},{"location":"planning/rtc_interface/#input_1","title":"Input","text":"stamp
: Time stampNothing
"},{"location":"planning/rtc_interface/#updatecooperatestatus","title":"updateCooperateStatus","text":"rtc_interface::updateCooperateStatus(const unique_identifier_msgs::msg::UUID & uuid, const bool safe, const double start_distance, const double finish_distance, const rclcpp::Time & stamp)\n
"},{"location":"planning/rtc_interface/#description_2","title":"Description","text":"Update cooperate status corresponding to uuid
. If cooperate status corresponding to uuid
is not registered yet, add new cooperate status.
uuid
: UUID for requesting modulesafe
: Safety status of requesting modulestart_distance
: Distance to the start object from ego vehiclefinish_distance
: Distance to the finish object from ego vehiclestamp
: Time stampNothing
"},{"location":"planning/rtc_interface/#removecooperatestatus","title":"removeCooperateStatus","text":"rtc_interface::removeCooperateStatus(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/rtc_interface/#description_3","title":"Description","text":"Remove cooperate status corresponding to uuid
from registered statuses.
uuid
: UUID for expired moduleNothing
"},{"location":"planning/rtc_interface/#clearcooperatestatus","title":"clearCooperateStatus","text":"rtc_interface::clearCooperateStatus()\n
"},{"location":"planning/rtc_interface/#description_4","title":"Description","text":"Remove all cooperate statuses.
"},{"location":"planning/rtc_interface/#input_4","title":"Input","text":"Nothing
"},{"location":"planning/rtc_interface/#output_4","title":"Output","text":"Nothing
"},{"location":"planning/rtc_interface/#isactivated","title":"isActivated","text":"rtc_interface::isActivated(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/rtc_interface/#description_5","title":"Description","text":"Return received command status corresponding to uuid
.
uuid
: UUID for checking moduleIf auto mode is enabled, return based on the safety status. If not, if received command is ACTIVATED
, return true
. If not, return false
.
rtc_interface::isRegistered(const unique_identifier_msgs::msg::UUID & uuid)\n
"},{"location":"planning/rtc_interface/#description_6","title":"Description","text":"Return true
if uuid
is registered.
uuid
: UUID for checking moduleIf uuid
is registered, return true
. If not, return false
.
The current issue with RTC commands is that service calls are not recorded in rosbags, so it is very hard to analyze exactly what happened. This package resolves that issue by making it possible to replay the RTC commands service from the RTC status topic recorded in a rosbag.
"},{"location":"planning/rtc_replayer/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/rtc_replayer/#input","title":"Input","text":"Name Type Description/debug/rtc_status
tier4_rtc_msgs::msg::CooperateStatusArray CooperateStatusArray that is recorded in rosbag"},{"location":"planning/rtc_replayer/#output","title":"Output","text":"Name Type Description /api/external/set/rtc_commands
tier4_rtc_msgs::msg::CooperateCommands CooperateCommands that is replayed by this package"},{"location":"planning/rtc_replayer/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/rtc_replayer/#assumptions-known-limits","title":"Assumptions / Known limits","text":"This package can't replay CooperateCommands correctly if CooperateStatusArray is not stable. And this replay is always later one step than actual however it will not affect much for behavior.
"},{"location":"planning/rtc_replayer/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"tbd.
"},{"location":"planning/sampling_based_planner/bezier_sampler/","title":"B\u00e9zier sampler","text":""},{"location":"planning/sampling_based_planner/bezier_sampler/#bezier-sampler","title":"B\u00e9zier sampler","text":"Implementation of b\u00e9zier curves and their generation following the sampling strategy from https://ieeexplore.ieee.org/document/8932495
"},{"location":"planning/sampling_based_planner/frenet_planner/","title":"Frenet planner","text":""},{"location":"planning/sampling_based_planner/frenet_planner/#frenet-planner","title":"Frenet planner","text":"Trajectory generation in Frenet frame.
"},{"location":"planning/sampling_based_planner/frenet_planner/#description","title":"Description","text":"Original paper
"},{"location":"planning/sampling_based_planner/path_sampler/","title":"Path Sampler","text":""},{"location":"planning/sampling_based_planner/path_sampler/#path-sampler","title":"Path Sampler","text":""},{"location":"planning/sampling_based_planner/path_sampler/#purpose","title":"Purpose","text":"This package implements a node that uses sampling based planning to generate a drivable trajectory.
"},{"location":"planning/sampling_based_planner/path_sampler/#feature","title":"Feature","text":"This package is able to:
Note that the velocity is just taken over from the input path.
"},{"location":"planning/sampling_based_planner/path_sampler/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"planning/sampling_based_planner/path_sampler/#input","title":"input","text":"Name Type Description~/input/path
autoware_auto_planning_msgs/msg/Path Reference path and the corresponding drivable area ~/input/odometry
nav_msgs/msg/Odometry Current state of the ego vehicle ~/input/objects
autoware_auto_perception_msgs/msg/PredictedObjects objects to avoid"},{"location":"planning/sampling_based_planner/path_sampler/#output","title":"output","text":"Name Type Description ~/output/trajectory
autoware_auto_planning_msgs/msg/Trajectory generated trajectory that is feasible to drive and collision-free"},{"location":"planning/sampling_based_planner/path_sampler/#algorithm","title":"Algorithm","text":"Sampling based planning is decomposed into 3 successive steps:
Candidate trajectories are generated based on the current ego state and some target state. 2 sampling algorithms are currently implemented: sampling with b\u00e9zier curves or with polynomials in the frenet frame.
"},{"location":"planning/sampling_based_planner/path_sampler/#pruning","title":"Pruning","text":"The validity of each candidate trajectory is checked using a set of hard constraints.
Among the valid candidate trajectories, the best one is determined using a set of soft constraints (i.e., objective functions).
Each soft constraint is associated with a weight to allow tuning of the preferences.
"},{"location":"planning/sampling_based_planner/path_sampler/#limitations","title":"Limitations","text":"The quality of the candidates generated with polynomials in frenet frame greatly depend on the reference path. If the reference path is not smooth, the resulting candidates will probably be undriveable.
Failure to find a valid trajectory current results in a suddenly stopping trajectory.
"},{"location":"planning/sampling_based_planner/path_sampler/#comparison-with-the-obstacle_avoidance_planner","title":"Comparison with theobstacle_avoidance_planner
","text":"The obstacle_avoidance_planner
uses an optimization based approach, finding the optimal solution of a mathematical problem if it exists. When no solution can be found, it is often hard to identify the issue due to the intermediate mathematical representation of the problem.
In comparison, the sampling based approach cannot guarantee an optimal solution but is much more straightforward, making it easier to debug and tune.
"},{"location":"planning/sampling_based_planner/path_sampler/#how-to-tune-parameters","title":"How to Tune Parameters","text":"The sampling based planner mostly offers a trade-off between the consistent quality of the trajectory and the computation time. To guarantee that a good trajectory is found requires generating many candidates which linearly increases the computation time.
TODO
"},{"location":"planning/sampling_based_planner/path_sampler/#drivability-in-narrow-roads","title":"Drivability in narrow roads","text":""},{"location":"planning/sampling_based_planner/path_sampler/#computation-time","title":"Computation time","text":""},{"location":"planning/sampling_based_planner/path_sampler/#robustness","title":"Robustness","text":""},{"location":"planning/sampling_based_planner/path_sampler/#other-options","title":"Other options","text":""},{"location":"planning/sampling_based_planner/path_sampler/#how-to-debug","title":"How To Debug","text":"TODO
"},{"location":"planning/sampling_based_planner/sampler_common/","title":"Sampler Common","text":""},{"location":"planning/sampling_based_planner/sampler_common/#sampler-common","title":"Sampler Common","text":"Common functions for sampling based planners. This includes classes for representing paths and trajectories, hard and soft constraints, conversion between cartesian and frenet frames, ...
"},{"location":"planning/scenario_selector/","title":"scenario_selector","text":""},{"location":"planning/scenario_selector/#scenario_selector","title":"scenario_selector","text":""},{"location":"planning/scenario_selector/#scenario_selector_node","title":"scenario_selector_node","text":"scenario_selector_node
is a node that switches trajectories from each scenario.
~input/lane_driving/trajectory
autoware_auto_planning_msgs::Trajectory trajectory of LaneDriving scenario ~input/parking/trajectory
autoware_auto_planning_msgs::Trajectory trajectory of Parking scenario ~input/lanelet_map
autoware_auto_mapping_msgs::HADMapBin ~input/route
autoware_planning_msgs::LaneletRoute route and goal pose ~input/odometry
nav_msgs::Odometry for checking whether vehicle is stopped is_parking_completed
bool (implemented as rosparam) whether all split trajectory of Parking are published"},{"location":"planning/scenario_selector/#output-topics","title":"Output topics","text":"Name Type Description ~output/scenario
tier4_planning_msgs::Scenario current scenario and scenarios to be activated ~output/trajectory
autoware_auto_planning_msgs::Trajectory trajectory to be followed"},{"location":"planning/scenario_selector/#output-tfs","title":"Output TFs","text":"None
"},{"location":"planning/scenario_selector/#how-to-launch","title":"How to launch","text":"scenario_selector.launch
or add args when executing roslaunch
roslaunch scenario_selector scenario_selector.launch
roslaunch scenario_selector dummy_scenario_selector_{scenario_name}.launch
This package statically calculates the centerline satisfying path footprints inside the drivable area.
On narrow-road driving, the default centerline, which is the middle line between lanelets' right and left boundaries, often causes path footprints outside the drivable area. To make path footprints inside the drivable area, we use online path shape optimization by the obstacle_avoidance_planner package.
Instead of online path shape optimization, we introduce static centerline optimization. With this static centerline optimization, we have following advantages.
There are two interfaces to communicate with the centerline optimizer.
"},{"location":"planning/static_centerline_optimizer/#vector-map-builder-interface","title":"Vector Map Builder Interface","text":"Note: This function of Vector Map Builder has not been released. Please wait for a while. Currently there is no documentation about Vector Map Builder's operation for this function.
The optimized centerline can be generated from Vector Map Builder's operation.
We can run
with the following command by designating <vehicle_model>
ros2 launch static_centerline_optimizer run_planning_server.launch.xml vehicle_model:=<vehicle-model>\n
FYI, port ID of the http server is 4010 by default.
"},{"location":"planning/static_centerline_optimizer/#command-line-interface","title":"Command Line Interface","text":"The optimized centerline can be generated from the command line interface by designating
<input-osm-path>
<output-osm-path>
(not mandatory)<start-lanelet-id>
<end-lanelet-id>
<vehicle-model>
ros2 launch static_centerline_optimizer static_centerline_optimizer.launch.xml run_backgrond:=false lanelet2_input_file_path:=<input-osm-path> lanelet2_output_file_path:=<output-osm-path> start_lanelet_id:=<start-lane-id> end_lanelet_id:=<end-lane-id> vehicle_model:=<vehicle-model>\n
The default output map path containing the optimized centerline locates /tmp/lanelet2_map.osm
. If you want to change the output map path, you can remap the path by designating <output-osm-path>
.
When launching the path planning server, rviz is launched as well as follows.
Sometimes the optimized centerline footprints are close to the lanes' boundaries. We can check how close they are with unsafe footprints
marker as follows.
Footprints' color depends on its distance to the boundaries, and text expresses its distance.
By default, footprints' color is
This module subscribes required data (ego-pose, obstacles, etc), and publishes zero velocity limit to keep stopping if any of stop conditions are satisfied.
"},{"location":"planning/surround_obstacle_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"planning/surround_obstacle_checker/#flow-chart","title":"Flow chart","text":""},{"location":"planning/surround_obstacle_checker/#algorithms","title":"Algorithms","text":""},{"location":"planning/surround_obstacle_checker/#check-data","title":"Check data","text":"Check that surround_obstacle_checker
receives no ground pointcloud, dynamic objects and current velocity data.
Calculate distance between ego vehicle and the nearest object. In this function, it calculates the minimum distance between the polygon of ego vehicle and all points in pointclouds and the polygons of dynamic objects.
"},{"location":"planning/surround_obstacle_checker/#stop-requirement","title":"Stop requirement","text":"If it satisfies all following conditions, it plans stopping.
State::PASS
, the distance is less than surround_check_distance
State::STOP
, the distance is less than surround_check_recover_distance
state_clear_time
To prevent chattering, surround_obstacle_checker
manages two states. As mentioned in stop condition section, it prevents chattering by changing threshold to find surround obstacle depending on the states.
State::PASS
: Stop planning is releasedState::STOP
\uff1aWhile stop planning/perception/obstacle_segmentation/pointcloud
sensor_msgs::msg::PointCloud2
Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects
autoware_auto_perception_msgs::msg::PredictedObjects
Dynamic objects /localization/kinematic_state
nav_msgs::msg::Odometry
Current twist /tf
tf2_msgs::msg::TFMessage
TF /tf_static
tf2_msgs::msg::TFMessage
TF static"},{"location":"planning/surround_obstacle_checker/#output","title":"Output","text":"Name Type Description ~/output/velocity_limit_clear_command
tier4_planning_msgs::msg::VelocityLimitClearCommand
Velocity limit clear command ~/output/max_velocity
tier4_planning_msgs::msg::VelocityLimit
Velocity limit command ~/output/no_start_reason
diagnostic_msgs::msg::DiagnosticStatus
No start reason ~/output/stop_reasons
tier4_planning_msgs::msg::StopReasonArray
Stop reasons ~/debug/marker
visualization_msgs::msg::MarkerArray
Marker for visualization ~/debug/footprint
geometry_msgs::msg::PolygonStamped
Ego vehicle base footprint for visualization ~/debug/footprint_offset
geometry_msgs::msg::PolygonStamped
Ego vehicle footprint with surround_check_distance
offset for visualization ~/debug/footprint_recover_offset
geometry_msgs::msg::PolygonStamped
Ego vehicle footprint with surround_check_recover_distance
offset for visualization"},{"location":"planning/surround_obstacle_checker/#parameters","title":"Parameters","text":"Name Type Description Default Range pointcloud.enable_check boolean enable to check surrounding pointcloud false N/A pointcloud.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pointcloud.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pointcloud.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 unknown.enable_check boolean enable to check surrounding unknown objects true N/A unknown.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 unknown.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 unknown.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 car.enable_check boolean enable to check surrounding car true N/A car.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 car.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 car.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 truck.enable_check boolean enable to check surrounding truck true N/A truck.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 truck.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 truck.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bus.enable_check boolean enable to check surrounding bus true N/A bus.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bus.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bus.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 trailer.enable_check boolean enable to check surrounding trailer true N/A trailer.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 trailer.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 trailer.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 motorcycle.enable_check boolean enable to check surrounding motorcycle true N/A motorcycle.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. 
[m] 0.5 \u22650.0 motorcycle.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 motorcycle.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bicycle.enable_check boolean enable to check surrounding bicycle true N/A bicycle.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bicycle.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 bicycle.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pedestrian.enable_check boolean enable to check surrounding pedestrian true N/A pedestrian.surround_check_front_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pedestrian.surround_check_side_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 pedestrian.surround_check_back_distance float If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status. [m] 0.5 \u22650.0 surround_check_hysteresis_distance float If no object exists in this hysteresis distance added to the above distance, transit to \"non-surrounding-obstacle\" status [m] 0.3 \u22650.0 state_clear_time float Threshold to clear stop state [s] 2.0 \u22650.0 stop_state_ego_speed float Threshold to check ego vehicle stopped [m/s] 0.1 \u22650.0 publish_debug_footprints boolean Publish vehicle footprint & footprints with surround_check_distance and surround_check_recover_distance offsets. true N/A debug_footprint_label string select the label for debug footprint car ['pointcloud', 'unknown', 'car', 'truck', 'bus', 'trailer', 'motorcycle', 'bicycle', 'pedestrian'] Name Type Description Default value
bool
Indicates whether each object is considered in the obstacle check target. true
for objects; false
for point clouds surround_check_front_distance
bool
If there are objects or point clouds within this distance in front, transition to the \"exist-surrounding-obstacle\" status [m]. 0.5 surround_check_side_distance
double
If there are objects or point clouds within this side distance, transition to the \"exist-surrounding-obstacle\" status [m]. 0.5 surround_check_back_distance
double
If there are objects or point clouds within this back distance, transition to the \"exist-surrounding-obstacle\" status [m]. 0.5 surround_check_hysteresis_distance
double
If no object exists within surround_check_xxx_distance
plus this additional distance, transition to the \"non-surrounding-obstacle\" status [m]. 0.3 state_clear_time
double
Threshold to clear stop state [s] 2.0 stop_state_ego_speed
double
Threshold to check ego vehicle stopped [m/s] 0.1 stop_state_entry_duration_time
double
Threshold to check ego vehicle stopped [s] 0.1 publish_debug_footprints
bool
Publish vehicle footprint with/without offsets true
"},{"location":"planning/surround_obstacle_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"To perform stop planning, it is necessary to get obstacle pointclouds data. Hence, it does not plan stopping if the obstacle is in blind spot.
"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/","title":"Surround Obstacle Checker","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#surround-obstacle-checker","title":"Surround Obstacle Checker","text":""},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#purpose","title":"Purpose","text":"surround_obstacle_checker
\u306f\u3001\u81ea\u8eca\u304c\u505c\u8eca\u4e2d\u3001\u81ea\u8eca\u306e\u5468\u56f2\u306b\u969c\u5bb3\u7269\u304c\u5b58\u5728\u3059\u308b\u5834\u5408\u306b\u767a\u9032\u3057\u306a\u3044\u3088\u3046\u306b\u505c\u6b62\u8a08\u753b\u3092\u884c\u3046\u30e2\u30b8\u30e5\u30fc\u30eb\u3067\u3042\u308b\u3002
\u70b9\u7fa4\u3001\u52d5\u7684\u7269\u4f53\u3001\u81ea\u8eca\u901f\u5ea6\u306e\u30c7\u30fc\u30bf\u304c\u53d6\u5f97\u3067\u304d\u3066\u3044\u308b\u304b\u3069\u3046\u304b\u3092\u78ba\u8a8d\u3059\u308b\u3002
"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#get-distance-to-nearest-object","title":"Get distance to nearest object","text":"\u81ea\u8eca\u3068\u6700\u8fd1\u508d\u306e\u969c\u5bb3\u7269\u3068\u306e\u8ddd\u96e2\u3092\u8a08\u7b97\u3059\u308b\u3002 \u3053\u3053\u3067\u306f\u3001\u81ea\u8eca\u306e\u30dd\u30ea\u30b4\u30f3\u3092\u8a08\u7b97\u3057\u3001\u70b9\u7fa4\u306e\u5404\u70b9\u304a\u3088\u3073\u5404\u52d5\u7684\u7269\u4f53\u306e\u30dd\u30ea\u30b4\u30f3\u3068\u306e\u8ddd\u96e2\u3092\u305d\u308c\u305e\u308c\u8a08\u7b97\u3059\u308b\u3053\u3068\u3067\u6700\u8fd1\u508d\u306e\u969c\u5bb3\u7269\u3068\u306e\u8ddd\u96e2\u3092\u6c42\u3081\u308b\u3002
"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#stop-condition","title":"Stop condition","text":"\u6b21\u306e\u6761\u4ef6\u3092\u3059\u3079\u3066\u6e80\u305f\u3059\u3068\u304d\u3001\u81ea\u8eca\u306f\u505c\u6b62\u8a08\u753b\u3092\u884c\u3046\u3002
State::PASS
\u306e\u3068\u304d\u3001surround_check_distance
\u672a\u6e80\u3067\u3042\u308bState::STOP
\u306e\u3068\u304d\u3001surround_check_recover_distance
\u4ee5\u4e0b\u3067\u3042\u308bstate_clear_time
\u4ee5\u4e0b\u3067\u3042\u308b\u3053\u3068\u30c1\u30e3\u30bf\u30ea\u30f3\u30b0\u9632\u6b62\u306e\u305f\u3081\u3001surround_obstacle_checker
\u3067\u306f\u72b6\u614b\u3092\u7ba1\u7406\u3057\u3066\u3044\u308b\u3002 Stop condition \u306e\u9805\u3067\u8ff0\u3079\u305f\u3088\u3046\u306b\u3001\u72b6\u614b\u306b\u3088\u3063\u3066\u969c\u5bb3\u7269\u5224\u5b9a\u306e\u3057\u304d\u3044\u5024\u3092\u5909\u66f4\u3059\u308b\u3053\u3068\u3067\u30c1\u30e3\u30bf\u30ea\u30f3\u30b0\u3092\u9632\u6b62\u3057\u3066\u3044\u308b\u3002
State::PASS
\uff1a\u505c\u6b62\u8a08\u753b\u89e3\u9664\u4e2dState::STOP
\uff1a\u505c\u6b62\u8a08\u753b\u4e2d/perception/obstacle_segmentation/pointcloud
sensor_msgs::msg::PointCloud2
Pointcloud of obstacles which the ego-vehicle should stop or avoid /perception/object_recognition/objects
autoware_auto_perception_msgs::msg::PredictedObjects
Dynamic objects /localization/kinematic_state
nav_msgs::msg::Odometry
Current twist /tf
tf2_msgs::msg::TFMessage
TF /tf_static
tf2_msgs::msg::TFMessage
TF static"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#output","title":"Output","text":"Name Type Description ~/output/velocity_limit_clear_command
tier4_planning_msgs::msg::VelocityLimitClearCommand
Velocity limit clear command ~/output/max_velocity
tier4_planning_msgs::msg::VelocityLimit
Velocity limit command ~/output/no_start_reason
diagnostic_msgs::msg::DiagnosticStatus
No start reason ~/output/stop_reasons
tier4_planning_msgs::msg::StopReasonArray
Stop reasons ~/debug/marker
visualization_msgs::msg::MarkerArray
Marker for visualization"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#parameters","title":"Parameters","text":"Name Type Description Default value use_pointcloud
bool
Use pointcloud as obstacle check true
use_dynamic_object
bool
Use dynamic object as obstacle check true
surround_check_distance
double
If objects exist in this distance, transit to \"exist-surrounding-obstacle\" status [m] 0.5 surround_check_recover_distance
double
If no object exists in this distance, transit to \"non-surrounding-obstacle\" status [m] 0.8 state_clear_time
double
Threshold to clear stop state [s] 2.0 stop_state_ego_speed
double
Threshold to check ego vehicle stopped [m/s] 0.1 stop_state_entry_duration_time
double
Threshold to check ego vehicle stopped [s] 0.1"},{"location":"planning/surround_obstacle_checker/surround_obstacle_checker-design.ja/#assumptions-known-limits","title":"Assumptions / Known limits","text":"\u3053\u306e\u6a5f\u80fd\u304c\u52d5\u4f5c\u3059\u308b\u305f\u3081\u306b\u306f\u969c\u5bb3\u7269\u70b9\u7fa4\u306e\u89b3\u6e2c\u304c\u5fc5\u8981\u306a\u305f\u3081\u3001\u969c\u5bb3\u7269\u304c\u6b7b\u89d2\u306b\u5165\u3063\u3066\u3044\u308b\u5834\u5408\u306f\u505c\u6b62\u8a08\u753b\u3092\u884c\u308f\u306a\u3044\u3002
"},{"location":"sensing/gnss_poser/","title":"gnss_poser","text":""},{"location":"sensing/gnss_poser/#gnss_poser","title":"gnss_poser","text":""},{"location":"sensing/gnss_poser/#purpose","title":"Purpose","text":"The gnss_poser
is a node that subscribes gnss sensing messages and calculates vehicle pose with covariance.
This node subscribes to NavSatFix to publish the pose of base_link. The data in NavSatFix represents the antenna's position. Therefore, it performs a coordinate transformation using the tf from base_link
to the antenna's position. The frame_id of the antenna's position refers to NavSatFix's header.frame_id
. (Note that header.frame_id
in NavSatFix indicates the antenna's frame_id, not the Earth or reference ellipsoid. See also NavSatFix definition.)
If the transformation from base_link
to the antenna cannot be obtained, it outputs the pose of the antenna position without performing coordinate transformation.
/map/map_projector_info
tier4_map_msgs::msg::MapProjectorInfo
map projection info ~/input/fix
sensor_msgs::msg::NavSatFix
gnss status message ~/input/autoware_orientation
autoware_sensing_msgs::msg::GnssInsOrientationStamped
orientation click here for more details"},{"location":"sensing/gnss_poser/#output","title":"Output","text":"Name Type Description ~/output/pose
geometry_msgs::msg::PoseStamped
vehicle pose calculated from gnss sensing data ~/output/gnss_pose_cov
geometry_msgs::msg::PoseWithCovarianceStamped
vehicle pose with covariance calculated from gnss sensing data ~/output/gnss_fixed
tier4_debug_msgs::msg::BoolStamped
gnss fix status"},{"location":"sensing/gnss_poser/#parameters","title":"Parameters","text":""},{"location":"sensing/gnss_poser/#core-parameters","title":"Core Parameters","text":"Name Type Description Default Range base_frame string frame id for base_frame base_link N/A gnss_base_frame string frame id for gnss_base_frame gnss_base_link N/A map_frame string frame id for map_frame map N/A use_gnss_ins_orientation boolean use Gnss-Ins orientation true N/A gnss_pose_pub_method integer 0: Instant Value 1: Average Value 2: Median Value. If 0 is chosen buffer_epoch parameter loses affect. 0 \u22650\u22642 buff_epoch integer Buffer epoch 1 \u22650"},{"location":"sensing/gnss_poser/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/gnss_poser/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/gnss_poser/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/gnss_poser/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/gnss_poser/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/image_diagnostics/","title":"image_diagnostics","text":""},{"location":"sensing/image_diagnostics/#image_diagnostics","title":"image_diagnostics","text":""},{"location":"sensing/image_diagnostics/#purpose","title":"Purpose","text":"The image_diagnostics
is a node that check the status of the input raw image.
Below figure shows the flowchart of image diagnostics node. Each image is divided into small blocks for block state assessment.
Each small image block state is assessed as below figure.
After all image's blocks state are evaluated, the whole image status is summarized as below.
"},{"location":"sensing/image_diagnostics/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/image_diagnostics/#input","title":"Input","text":"Name Type Descriptioninput/raw_image
sensor_msgs::msg::Image
raw image"},{"location":"sensing/image_diagnostics/#output","title":"Output","text":"Name Type Description image_diag/debug/gray_image
sensor_msgs::msg::Image
gray image image_diag/debug/dft_image
sensor_msgs::msg::Image
discrete Fourier transformation image image_diag/debug/diag_block_image
sensor_msgs::msg::Image
each block state colorization image_diag/image_state_diag
tier4_debug_msgs::msg::Int32Stamped
image diagnostics status value /diagnostics
diagnostic_msgs::msg::DiagnosticArray
diagnostics"},{"location":"sensing/image_diagnostics/#parameters","title":"Parameters","text":""},{"location":"sensing/image_diagnostics/#assumptions-known-limits","title":"Assumptions / Known limits","text":"The image_transport_decompressor
is a node that decompresses images.
~/input/compressed_image
sensor_msgs::msg::CompressedImage
compressed image"},{"location":"sensing/image_transport_decompressor/#output","title":"Output","text":"Name Type Description ~/output/raw_image
sensor_msgs::msg::Image
decompressed image"},{"location":"sensing/image_transport_decompressor/#parameters","title":"Parameters","text":""},{"location":"sensing/image_transport_decompressor/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/image_transport_decompressor/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/image_transport_decompressor/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/image_transport_decompressor/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/image_transport_decompressor/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/imu_corrector/","title":"imu_corrector","text":""},{"location":"sensing/imu_corrector/#imu_corrector","title":"imu_corrector","text":""},{"location":"sensing/imu_corrector/#imu_corrector_1","title":"imu_corrector","text":"imu_corrector_node
is a node that correct imu data.
Mathematically, we assume the following equation:
\\[ \\tilde{\\omega}(t) = \\omega(t) + b(t) + n(t) \\]where \\(\\tilde{\\omega}\\) denotes observed angular velocity, \\(\\omega\\) denotes true angular velocity, \\(b\\) denotes an offset, and \\(n\\) denotes a gaussian noise. We also assume that \\(n\\sim\\mathcal{N}(0, \\sigma^2)\\).
"},{"location":"sensing/imu_corrector/#input","title":"Input","text":"Name Type Description~input
sensor_msgs::msg::Imu
raw imu data"},{"location":"sensing/imu_corrector/#output","title":"Output","text":"Name Type Description ~output
sensor_msgs::msg::Imu
corrected imu data"},{"location":"sensing/imu_corrector/#parameters","title":"Parameters","text":"Name Type Description angular_velocity_offset_x
double roll rate offset in imu_link [rad/s] angular_velocity_offset_y
double pitch rate offset imu_link [rad/s] angular_velocity_offset_z
double yaw rate offset imu_link [rad/s] angular_velocity_stddev_xx
double roll rate standard deviation imu_link [rad/s] angular_velocity_stddev_yy
double pitch rate standard deviation imu_link [rad/s] angular_velocity_stddev_zz
double yaw rate standard deviation imu_link [rad/s] acceleration_stddev
double acceleration standard deviation imu_link [m/s^2]"},{"location":"sensing/imu_corrector/#gyro_bias_estimator","title":"gyro_bias_estimator","text":"gyro_bias_validator
is a node that validates the bias of the gyroscope. It subscribes to the sensor_msgs::msg::Imu
topic and validate if the bias of the gyroscope is within the specified range.
Note that the node calculates bias from the gyroscope data by averaging the data only when the vehicle is stopped.
"},{"location":"sensing/imu_corrector/#input_1","title":"Input","text":"Name Type Description~/input/imu_raw
sensor_msgs::msg::Imu
raw imu data ~/input/pose
geometry_msgs::msg::PoseWithCovarianceStamped
ndt pose Note that the input pose is assumed to be accurate enough. For example when using NDT, we assume that the NDT is appropriately converged.
Currently, it is possible to use methods other than NDT as a pose_source
for Autoware, but less accurate methods are not suitable for IMU bias estimation.
In the future, with careful implementation for pose errors, the IMU bias estimated by NDT could potentially be used not only for validation but also for online calibration.
"},{"location":"sensing/imu_corrector/#output_1","title":"Output","text":"Name Type Description~/output/gyro_bias
geometry_msgs::msg::Vector3Stamped
bias of the gyroscope [rad/s]"},{"location":"sensing/imu_corrector/#parameters_1","title":"Parameters","text":"Note that this node also uses angular_velocity_offset_x
, angular_velocity_offset_y
, angular_velocity_offset_z
parameters from imu_corrector.param.yaml
.
gyro_bias_threshold
double threshold of the bias of the gyroscope [rad/s] timer_callback_interval_sec
double seconds about the timer callback function [sec] diagnostics_updater_interval_sec
double period of the diagnostics updater [sec] straight_motion_ang_vel_upper_limit
double upper limit of yaw angular velocity, beyond which motion is not considered straight [rad/s]"},{"location":"sensing/livox/livox_tag_filter/","title":"livox_tag_filter","text":""},{"location":"sensing/livox/livox_tag_filter/#livox_tag_filter","title":"livox_tag_filter","text":""},{"location":"sensing/livox/livox_tag_filter/#purpose","title":"Purpose","text":"The livox_tag_filter
is a node that removes noise from pointcloud by using the following tags:
~/input
sensor_msgs::msg::PointCloud2
reference points"},{"location":"sensing/livox/livox_tag_filter/#output","title":"Output","text":"Name Type Description ~/output
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/livox/livox_tag_filter/#parameters","title":"Parameters","text":""},{"location":"sensing/livox/livox_tag_filter/#node-parameters","title":"Node Parameters","text":"Name Type Description ignore_tags
vector ignored tags (See the following table)"},{"location":"sensing/livox/livox_tag_filter/#tag-parameters","title":"Tag Parameters","text":"Bit Description Options 0~1 Point property based on spatial position 00: Normal 01: High confidence level of the noise 10: Moderate confidence level of the noise 11: Low confidence level of the noise 2~3 Point property based on intensity 00: Normal 01: High confidence level of the noise 10: Moderate confidence level of the noise 11: Reserved 4~5 Return number 00: return 0 01: return 1 10: return 2 11: return 3 6~7 Reserved You can download more detail description about the livox from external link [1].
"},{"location":"sensing/livox/livox_tag_filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/livox/livox_tag_filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/livox/livox_tag_filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/livox/livox_tag_filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":"[1] https://www.livoxtech.com/downloads
"},{"location":"sensing/livox/livox_tag_filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/","title":"pointcloud_preprocessor","text":""},{"location":"sensing/pointcloud_preprocessor/#pointcloud_preprocessor","title":"pointcloud_preprocessor","text":""},{"location":"sensing/pointcloud_preprocessor/#purpose","title":"Purpose","text":"The pointcloud_preprocessor
is a package that includes the following filters:
Detail description of each filter's algorithm is in the following links.
Filter Name Description Detail concatenate_data subscribe multiple pointclouds and concatenate them into a pointcloud link crop_box_filter remove points within a given box link distortion_corrector compensate pointcloud distortion caused by ego vehicle's movement during 1 scan link downsample_filter downsampling input pointcloud link outlier_filter remove points caused by hardware problems, rain drops and small insects as a noise link passthrough_filter remove points on the outside of a range in given field (e.g. x, y, z, intensity) link pointcloud_accumulator accumulate pointclouds for a given amount of time link vector_map_filter remove points on the outside of lane by using vector map link vector_map_inside_area_filter remove points inside of vector map area that has given type by parameter link"},{"location":"sensing/pointcloud_preprocessor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/#input","title":"Input","text":"Name Type Description~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/indices
pcl_msgs::msg::Indices
reference indices"},{"location":"sensing/pointcloud_preprocessor/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/pointcloud_preprocessor/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description input_frame
string \" \" input frame id output_frame
string \" \" output frame id max_queue_size
int 5 max queue size of input/output topics use_indices
bool false flag to use pointcloud indices latched_indices
bool false flag to latch pointcloud indices approximate_sync
bool false flag to use approximate sync option"},{"location":"sensing/pointcloud_preprocessor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"pointcloud_preprocessor::Filter
is implemented based on pcl_perception [1] because of this issue.
[1] https://github.com/ros-perception/perception_pcl/blob/ros2/pcl_ros/src/pcl_ros/filters/filter.cpp
"},{"location":"sensing/pointcloud_preprocessor/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/","title":"blockage_diag","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#blockage_diag","title":"blockage_diag","text":""},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#purpose","title":"Purpose","text":"To ensure the performance of LiDAR and safety for autonomous driving, the abnormal condition diagnostics feature is needed. LiDAR blockage is abnormal condition of LiDAR when some unwanted objects stitch to and block the light pulses and return signal. This node's purpose is to detect the existing of blockage on LiDAR and its related size and location.
"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#inner-workings-algorithmsblockage-detection","title":"Inner-workings / Algorithms(Blockage detection)","text":"This node bases on the no-return region and its location to decide if it is a blockage.
The logic is showed as below
"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#inner-workings-algorithmsdust-detection","title":"Inner-workings /Algorithms(Dust detection)","text":"About dust detection, morphological processing is implemented. If the lidar's ray cannot be acquired due to dust in the lidar area where the point cloud is considered to return from the ground, black pixels appear as noise in the depth image. The area of noise is found by erosion and dilation these black pixels.
"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#inputs-outputs","title":"Inputs / Outputs","text":"This implementation inherits pointcloud_preprocessor::Filter
class; please refer to the README.
~/input/pointcloud_raw_ex
sensor_msgs::msg::PointCloud2
The raw point cloud data is used to detect the no-return region"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#output","title":"Output","text":"Name Type Description ~/output/blockage_diag/debug/blockage_mask_image
sensor_msgs::msg::Image
The mask image of detected blockage ~/output/blockage_diag/debug/ground_blockage_ratio
tier4_debug_msgs::msg::Float32Stamped
The area ratio of blockage region in ground region ~/output/blockage_diag/debug/sky_blockage_ratio
tier4_debug_msgs::msg::Float32Stamped
The area ratio of blockage region in sky region ~/output/blockage_diag/debug/lidar_depth_map
sensor_msgs::msg::Image
The depth map image of input point cloud ~/output/blockage_diag/debug/single_frame_dust_mask
sensor_msgs::msg::Image
The mask image of detected dusty area in latest single frame ~/output/blockage_diag/debug/multi_frame_dust_mask
sensor_msgs::msg::Image
The mask image of continuous detected dusty area ~/output/blockage_diag/debug/blockage_dust_merged_image
sensor_msgs::msg::Image
The merged image of blockage detection(red) and multi frame dusty area detection(yellow) results ~/output/blockage_diag/debug/ground_dust_ratio
tier4_debug_msgs::msg::Float32Stamped
The ratio of dusty area divided by area where ray usually returns from the ground."},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#parameters","title":"Parameters","text":"Name Type Description blockage_ratio_threshold
float The threshold of the blockage area ratio. If the blockage value exceeds this threshold, the diagnostic state will be set to ERROR. blockage_count_threshold
float The threshold for the number of continuous blockage frames horizontal_ring_id
int The id of horizontal ring of the LiDAR angle_range
vector The effective range of LiDAR vertical_bins
int The LiDAR channel number model
string The LiDAR model blockage_buffering_frames
int The number of buffering about blockage detection [range:1-200] blockage_buffering_interval
int The interval of buffering about blockage detection dust_ratio_threshold
float The threshold of dusty area ratio dust_count_threshold
int The threshold for the number of continuous frames that include a dusty area dust_kernel_size
int The kernel size of morphology processing in dusty area detection dust_buffering_frames
int The number of buffering about dusty area detection [range:1-200] dust_buffering_interval
int The interval of buffering about dusty area detection"},{"location":"sensing/pointcloud_preprocessor/docs/blockage_diag/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Many self-driving cars combine multiple LiDARs to expand the sensing range. Therefore, a function to combine a plurality of point clouds is required.
To combine multiple sensor data with similar timestamps, message_filters is often used in ROS-based systems, but it requires the assumption that all inputs can be received. Since safety must be strongly considered in autonomous driving, the pointcloud concatenation node must be designed so that even if one sensor fails, the information from the remaining sensors can still be output.
"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The figure below represents the reception time of each sensor data and how it is combined in the case.
"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#input","title":"Input","text":"Name Type Description~/input/twist
geometry_msgs::msg::TwistWithCovarianceStamped
The vehicle odometry is used to interpolate the timestamp of each sensor data"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
concatenated point clouds"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#parameters","title":"Parameters","text":"Name Type Default Value Description input/points
vector of string [] input topic names; the type must be sensor_msgs::msg::PointCloud2
input_frame
string \"\" input frame id output_frame
string \"\" output frame id max_queue_size
int 5 max queue size of input/output topics"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description timeout_sec
double 0.1 tolerance of time to publish the next pointcloud [s]. When this time limit is exceeded, the filter concatenates and publishes the pointcloud even if not all the point clouds have been received. input_offset
vector of double [] This parameter controls the waiting time for each input sensor pointcloud [s]. You must set the number of offsets equal to the number of input pointclouds. For tuning, please see the actual usage page. publish_synchronized_pointcloud
bool false If true, publish the time synchronized pointclouds. All input pointclouds are transformed and then re-published as message named <original_msg_name>_synchronized
. input_twist_topic_type
std::string twist Topic type for twist. Currently supports twist
or odom
."},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#actual-usage","title":"Actual Usage","text":"For the example of actual usage of this node, please refer to the preprocessor.launch.py file.
"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#how-to-tuning-timeout_sec-and-input_offset","title":"How to tuning timeout_sec and input_offset","text":"The values in timeout_sec
and input_offset
are used in the timer_callback to control concatenation timings.
timeout_sec
timeout sec for the default timer. To avoid mis-concatenation, this value must be shorter than the sampling time of the inputs. input_offset
timeout extension once a pointcloud arrives in the buffer. The amount of waiting time will be timeout_sec
- input_offset
. So, you will need to set a larger offset for the last-arriving pointcloud and smaller offsets for earlier-arriving ones."},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#node-separation-options-for-future","title":"Node separation options for future","text":"Since the pointcloud concatenation consists of two processes, \"time synchronization\" and \"pointcloud concatenation\", it is possible to separate them.
In the future, the nodes will be completely separated in order to achieve loose coupling, but currently either configuration can be selected for backward compatibility (see this PR).
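To make the timeout_sec / input_offset arithmetic from the tuning section above concrete, here is an illustrative sketch (the values and names are assumptions, not the node's real code):
#include <cstdio>
#include <vector>

int main()
{
  const double timeout_sec = 0.1;                        // default timer period [s]
  const std::vector<double> input_offset = {0.02, 0.04, 0.06};

  for (size_t i = 0; i < input_offset.size(); ++i) {
    // After a pointcloud arrives, the node keeps waiting for the remaining
    // topics for (timeout_sec - input_offset), so later-arriving topics
    // should be given a larger offset, i.e. a smaller remaining wait.
    std::printf("input %zu: waits up to %.3f s\n", i, timeout_sec - input_offset[i]);
  }
  return 0;
}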
"},{"location":"sensing/pointcloud_preprocessor/docs/concatenate-data/#assumptions-known-limits","title":"Assumptions / Known limits","text":"It is necessary to assume that the vehicle odometry value exists, the sensor data and odometry timestamp are correct, and the TF from base_link
to sensor_frame
is also correct.
The crop_box_filter
is a node that removes points within a given box region. This filter is used to remove the points that hit the vehicle itself.
pcl::CropBox
is used, which filters all points inside a given box.
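A minimal sketch of the pcl::CropBox usage described above (illustrative only; the box bounds mirror the min_x ... max_z parameters listed below, and setNegative(true) keeps the points outside the box, i.e. removes self-hits):
#include <pcl/filters/crop_box.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr remove_self_hits(
  const pcl::PointCloud<pcl::PointXYZ>::Ptr & input)
{
  pcl::CropBox<pcl::PointXYZ> crop;
  crop.setInputCloud(input);
  crop.setMin(Eigen::Vector4f(-1.0f, -1.0f, -1.0f, 1.0f));  // min_x, min_y, min_z
  crop.setMax(Eigen::Vector4f(1.0f, 1.0f, 1.0f, 1.0f));     // max_x, max_y, max_z
  crop.setNegative(true);  // keep points outside the box
  pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
  crop.filter(*output);
  return output;
}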
This implementation inherits pointcloud_preprocessor::Filter
class; please refer to the README.
This implementation inherits pointcloud_preprocessor::Filter
class; please refer to the README.
min_x
double -1.0 x-coordinate minimum value for crop range max_x
double 1.0 x-coordinate maximum value for crop range min_y
double -1.0 y-coordinate minimum value for crop range max_y
double 1.0 y-coordinate maximum value for crop range min_z
double -1.0 z-coordinate minimum value for crop range max_z
double 1.0 z-coordinate maximum value for crop range"},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/crop-box-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/","title":"distortion_corrector","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#distortion_corrector","title":"distortion_corrector","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#purpose","title":"Purpose","text":"The distortion_corrector
is a node that compensates pointcloud distortion caused by ego vehicle's movement during 1 scan.
Since the LiDAR sensor scans by rotating an internal laser, the resulting point cloud will be distorted if the ego-vehicle moves during a single scan (as shown by the figure below). The node corrects this by interpolating sensor data using odometry of ego-vehicle.
"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The offset equation is given by $ TimeOffset = (55.296 \\mu s SequenceIndex) + (2.304 \\mu s DataPointIndex) $
To calculate the exact point time, add the TimeOffset to the timestamp. $ ExactPointTime = TimeStamp + TimeOffset $
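As a worked example of the two equations above (a sketch; the constants come directly from the formula, the function name is an assumption):
#include <cstdint>

double exact_point_time_sec(
  double timestamp_sec, uint32_t sequence_index, uint32_t data_point_index)
{
  // TimeOffset = 55.296 us * SequenceIndex + 2.304 us * DataPointIndex
  const double time_offset_sec = 55.296e-6 * sequence_index + 2.304e-6 * data_point_index;
  // ExactPointTime = TimeStamp + TimeOffset
  return timestamp_sec + time_offset_sec;
}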
"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#input","title":"Input","text":"Name Type Description~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/twist
geometry_msgs::msg::TwistWithCovarianceStamped
twist ~/input/imu
sensor_msgs::msg::Imu
imu data"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description timestamp_field_name
string \"time_stamp\" time stamp field name use_imu
bool true use gyroscope for yaw rate if true, else use vehicle status"},{"location":"sensing/pointcloud_preprocessor/docs/distortion-corrector/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/","title":"downsample_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#downsample_filter","title":"downsample_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#purpose","title":"Purpose","text":"The downsample_filter
is a node that reduces the number of points.
pcl::VoxelGridNearestCentroid
is used. The algorithm is described in tier4_pcl_extensions
pcl::RandomSample
is used; points are sampled with uniform probability.
pcl::VoxelGrid
is used; the points in each voxel are approximated by their centroid.
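A minimal sketch of the pcl::VoxelGrid usage described above (illustrative; the leaf sizes mirror the voxel_size_x/y/z defaults in the tables below):
#include <pcl/filters/voxel_grid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void voxel_downsample(
  const pcl::PointCloud<pcl::PointXYZ>::ConstPtr & input,
  pcl::PointCloud<pcl::PointXYZ> & output)
{
  pcl::VoxelGrid<pcl::PointXYZ> voxel_grid;
  voxel_grid.setInputCloud(input);
  voxel_grid.setLeafSize(0.3f, 0.3f, 0.1f);  // voxel_size_x, voxel_size_y, voxel_size_z [m]
  voxel_grid.filter(output);  // one representative point per occupied voxel
}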
These implementations inherit pointcloud_preprocessor::Filter
class; please refer to the README.
These implementations inherit pointcloud_preprocessor::Filter
class; please refer to the README.
voxel_size_x
double 0.3 voxel size x [m] voxel_size_y
double 0.3 voxel size y [m] voxel_size_z
double 0.1 voxel size z [m]"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#random-downsample-filter_1","title":"Random Downsample Filter","text":"Name Type Default Value Description sample_num
int 1500 number of indices to be sampled"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#voxel-grid-downsample-filter_1","title":"Voxel Grid Downsample Filter","text":"Name Type Default Value Description voxel_size_x
double 0.3 voxel size x [m] voxel_size_y
double 0.3 voxel size y [m] voxel_size_z
double 0.1 voxel size z [m]"},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/downsample-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/","title":"dual_return_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#dual_return_outlier_filter","title":"dual_return_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#purpose","title":"Purpose","text":"The purpose is to remove point cloud noise such as fog and rain and publish visibility as a diagnostic topic.
"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"This node can remove rain and fog by considering the light reflected from the object in two stages according to the attenuation factor. The dual_return_outlier_filter
is named because it removes noise using data that contains two types of return values separated by attenuation factor, as shown in the figure below.
Therefore, in order to use this node, the sensor driver must publish custom data including return_type
. Please refer to the PointXYZIRADRT data structure.
Another feature of this node is that it publishes visibility as a diagnostic topic. With this function, for example, in heavy rain, the sensing module can notify that the processing performance has reached its limit, which can lead to ensuring the safety of the vehicle.
In some complicated road scenes where normal objects also reflect the light in two stages (for instance plants, leaves, or plastic netting), the visibility value drops even in fine weather. To deal with this, optional settings of a region of interest (ROI) are added.
Fixed_xyz_ROI
mode: Visibility estimation based on the weak points in a fixed cuboid surrounding region of ego-vehicle, defined by x, y, z in base_link perspective.Fixed_azimuth_ROI
mode: Visibility estimation based on the weak points in a fixed surrounding region of the ego-vehicle, defined by azimuth and distance from the LiDAR's perspective. When selecting one of the two fixed ROI modes, the range of weak points shrinks and the sensitivity of the visibility value decreases, so a trade-off between weak_first_local_noise_threshold
and visibility_threshold
is needed.
The figure below describes how the node works.
The picture below shows the ROI options.
"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"This implementation inherits pointcloud_preprocessor::Filter
class; please refer to the README.
/dual_return_outlier_filter/frequency_image
sensor_msgs::msg::Image
The histogram image that represents visibility /dual_return_outlier_filter/visibility
tier4_debug_msgs::msg::Float32Stamped
A representation of visibility with a value from 0 to 1 /dual_return_outlier_filter/pointcloud_noise
sensor_msgs::msg::PointCloud2
The pointcloud removed as noise"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#node-parameters","title":"Node Parameters","text":"This implementation inherits pointcloud_preprocessor::Filter
class; please refer to the README.
vertical_bins
int The number of vertical bins for the visibility histogram max_azimuth_diff
float Threshold for ring_outlier_filter weak_first_distance_ratio
double Threshold for ring_outlier_filter general_distance_ratio
double Threshold for ring_outlier_filter weak_first_local_noise_threshold
int The parameter for determining whether it is noise visibility_error_threshold
float When the percentage of white pixels in the binary histogram falls below this parameter, the diagnostic status becomes ERR visibility_warn_threshold
float When the percentage of white pixels in the binary histogram falls below this parameter, the diagnostic status becomes WARN roi_mode
string The name of ROI mode for switching min_azimuth_deg
float The left limit of azimuth for Fixed_azimuth_ROI
mode max_azimuth_deg
float The right limit of azimuth for Fixed_azimuth_ROI
mode max_distance
float The limit distance for Fixed_azimuth_ROI
mode x_max
float Maximum of x for Fixed_xyz_ROI
mode x_min
float Minimum of x for Fixed_xyz_ROI
mode y_max
float Maximum of y for Fixed_xyz_ROI
mode y_min
float Minimum of y for Fixed_xyz_ROI
mode z_max
float Maximum of z for Fixed_xyz_ROI
mode z_min
float Minimum of z for Fixed_xyz_ROI
mode"},{"location":"sensing/pointcloud_preprocessor/docs/dual-return-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Not recommended for use as it is under development. Input data must be PointXYZIRADRT
type data including return_type
.
The outlier_filter
is a package for filtering outlier points.
The passthrough_filter
is a node that removes points on the outside of a range in a given field (e.g. x, y, z, intensity, ring, etc).
~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/indices
pcl_msgs::msg::Indices
reference indices"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description filter_limit_min
int 0 minimum allowed field value filter_limit_max
int 127 maximum allowed field value filter_field_name
string \"ring\" filtering field name keep_organized
bool false flag to keep indices structure filter_limit_negative
bool false flag to return whether the data is inside limit or not"},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/passthrough-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/","title":"pointcloud_accumulator","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#pointcloud_accumulator","title":"pointcloud_accumulator","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#purpose","title":"Purpose","text":"The pointcloud_accumulator
is a node that accumulates pointclouds for a given amount of time.
~/input/points
sensor_msgs::msg::PointCloud2
reference points"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description accumulation_time_sec
double 2.0 accumulation period [s] pointcloud_buffer_size
int 50 buffer size"},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/pointcloud-accumulator/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/","title":"radius_search_2d_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#radius_search_2d_outlier_filter","title":"radius_search_2d_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#purpose","title":"Purpose","text":"The purpose is to remove point cloud noise such as insects and rain.
"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"RadiusOutlierRemoval filter which removes all indices in its input cloud that don\u2019t have at least some number of neighbors within a certain range.
The description above is quoted from [1]. pcl::search::KdTree
[2] is used to implement this package.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
min_neighbors
int If points in the circle centered on reference point is less than min_neighbors
, a reference point is judged as outlier search_radius
double Searching number of points included in search_radius
"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"Since the method is to count the number of points contained in the cylinder with the direction of gravity as the direction of the cylinder axis, it is a prerequisite that the ground has been removed.
"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#referencesexternal-links","title":"References/External links","text":"[1] https://pcl.readthedocs.io/projects/tutorials/en/latest/remove_outliers.html
[2] https://pcl.readthedocs.io/projects/tutorials/en/latest/kdtree_search.html#kdtree-search
"},{"location":"sensing/pointcloud_preprocessor/docs/radius-search-2d-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/","title":"ring_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#ring_outlier_filter","title":"ring_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#purpose","title":"Purpose","text":"The purpose is to remove point cloud noise such as insects and rain.
"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"A method of operating scan in chronological order and removing noise based on the rate of change in the distance between points
"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
distance_ratio
double 1.03 object_length_threshold
double 0.1 num_points_threshold
int 4 max_rings_num
uint_16 128 max_points_num_per_ring
size_t 4000 Set this value large enough such that HFoV / resolution < max_points_num_per_ring
"},{"location":"sensing/pointcloud_preprocessor/docs/ring-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":"It is a prerequisite to input a scan point cloud in chronological order. In this repository it is defined as blow structure (please refer to PointXYZIRADT).
The vector_map_filter
is a node that removes points on the outside of lane by using vector map.
~/input/points
sensor_msgs::msg::PointCloud2
reference points ~/input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#output","title":"Output","text":"Name Type Description ~/output/points
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#parameters","title":"Parameters","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description voxel_size_x
double 0.04 voxel size voxel_size_y
double 0.04 voxel size"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/","title":"vector_map_inside_area_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#vector_map_inside_area_filter","title":"vector_map_inside_area_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#purpose","title":"Purpose","text":"The vector_map_inside_area_filter
is a node that removes points inside the vector map area that has given type by parameter.
polygon_type
This implementation inherits pointcloud_preprocessor::Filter
class, so please see also README.
~/input
sensor_msgs::msg::PointCloud2
input points ~/input/vector_map
autoware_auto_mapping_msgs::msg::HADMapBin
vector map used for filtering points"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#output","title":"Output","text":"Name Type Description ~/output
sensor_msgs::msg::PointCloud2
filtered points"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#core-parameters","title":"Core Parameters","text":"Name Type Description polygon_type
string polygon type to be filtered"},{"location":"sensing/pointcloud_preprocessor/docs/vector-map-inside-area-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/","title":"voxel_grid_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#voxel_grid_outlier_filter","title":"voxel_grid_outlier_filter","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#purpose","title":"Purpose","text":"The purpose is to remove point cloud noise such as insects and rain.
"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Removing point cloud noise based on the number of points existing within a voxel. The radius_search_2d_outlier_filter is better for accuracy, but this method has the advantage of low calculation cost.
"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#inputs-outputs","title":"Inputs / Outputs","text":"This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
This implementation inherits pointcloud_preprocessor::Filter
class, please refer README.
voxel_size_x
double 0.3 the voxel size along x-axis [m] voxel_size_y
double 0.3 the voxel size along y-axis [m] voxel_size_z
double 0.1 the voxel size along z-axis [m] voxel_points_threshold
int 2 the minimum number of points in each voxel"},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#assumptions-known-limits","title":"Assumptions / Known limits","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-error-detection-and-handling","title":"(Optional) Error detection and handling","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-performance-characterization","title":"(Optional) Performance characterization","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-referencesexternal-links","title":"(Optional) References/External links","text":""},{"location":"sensing/pointcloud_preprocessor/docs/voxel-grid-outlier-filter/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/radar_scan_to_pointcloud2/","title":"radar_scan_to_pointcloud2","text":""},{"location":"sensing/radar_scan_to_pointcloud2/#radar_scan_to_pointcloud2","title":"radar_scan_to_pointcloud2","text":""},{"location":"sensing/radar_scan_to_pointcloud2/#radar_scan_to_pointcloud2_node","title":"radar_scan_to_pointcloud2_node","text":"radar_msgs::msg::RadarScan
to sensor_msgs::msg::PointCloud2
true
. publish_doppler_pointcloud bool Whether publish radar pointcloud whose intensity is doppler velocity. Default is false
."},{"location":"sensing/radar_scan_to_pointcloud2/#how-to-launch","title":"How to launch","text":"ros2 launch radar_scan_to_pointcloud2 radar_scan_to_pointcloud2.launch.xml\n
"},{"location":"sensing/radar_static_pointcloud_filter/","title":"radar_static_pointcloud_filter","text":""},{"location":"sensing/radar_static_pointcloud_filter/#radar_static_pointcloud_filter","title":"radar_static_pointcloud_filter","text":""},{"location":"sensing/radar_static_pointcloud_filter/#radar_static_pointcloud_filter_node","title":"radar_static_pointcloud_filter_node","text":"Extract static/dynamic radar pointcloud by using doppler velocity and ego motion. Calculation cost is O(n). n
is the number of radar pointcloud.
ros2 launch radar_static_pointcloud_filter radar_static_pointcloud_filter.launch\n
"},{"location":"sensing/radar_static_pointcloud_filter/#algorithm","title":"Algorithm","text":""},{"location":"sensing/radar_threshold_filter/","title":"radar_threshold_filter","text":""},{"location":"sensing/radar_threshold_filter/#radar_threshold_filter","title":"radar_threshold_filter","text":""},{"location":"sensing/radar_threshold_filter/#radar_threshold_filter_node","title":"radar_threshold_filter_node","text":"Remove noise from radar return by threshold.
Calculation cost is O(n). n
is the number of radar return.
ros2 launch radar_threshold_filter radar_threshold_filter.launch.xml\n
"},{"location":"sensing/radar_tracks_noise_filter/","title":"radar_tracks_noise_filter","text":""},{"location":"sensing/radar_tracks_noise_filter/#radar_tracks_noise_filter","title":"radar_tracks_noise_filter","text":"This package contains a radar object filter module for radar_msgs/msg/RadarTrack
. This package can filter noise objects in RadarTracks.
The core algorithm of this package is RadarTrackCrossingNoiseFilterNode::isNoise()
function. See the function and the parameters for details.
Radar can detect x-axis velocity as doppler velocity, but cannot detect y-axis velocity. Some radar can estimate y-axis velocity inside the device, but it sometimes lack precision. In y-axis threshold filter, if y-axis velocity of RadarTrack is more than velocity_y_threshold
, it treats as noise objects.
~/input/tracks
radar_msgs/msg/RadarTracks.msg 3D detected tracks."},{"location":"sensing/radar_tracks_noise_filter/#output","title":"Output","text":"Name Type Description ~/output/noise_tracks
radar_msgs/msg/RadarTracks.msg Noise objects ~/output/filtered_tracks
radar_msgs/msg/RadarTracks.msg Filtered objects"},{"location":"sensing/radar_tracks_noise_filter/#parameters","title":"Parameters","text":"Name Type Description Default value velocity_y_threshold
double Y-axis velocity threshold [m/s]. If y-axis velocity of RadarTrack is more than velocity_y_threshold
, it treats as noise objects. 7.0"},{"location":"sensing/tier4_pcl_extensions/","title":"tier4_pcl_extensions","text":""},{"location":"sensing/tier4_pcl_extensions/#tier4_pcl_extensions","title":"tier4_pcl_extensions","text":""},{"location":"sensing/tier4_pcl_extensions/#purpose","title":"Purpose","text":"The tier4_pcl_extensions
is a pcl extension library. The voxel grid filter in this package works with a different algorithm than the original one.
[1] https://pointclouds.org/documentation/tutorials/voxel_grid.html
"},{"location":"sensing/tier4_pcl_extensions/#optional-future-extensions-unimplemented-parts","title":"(Optional) Future extensions / Unimplemented parts","text":""},{"location":"sensing/vehicle_velocity_converter/","title":"vehicle_velocity_converter","text":""},{"location":"sensing/vehicle_velocity_converter/#vehicle_velocity_converter","title":"vehicle_velocity_converter","text":""},{"location":"sensing/vehicle_velocity_converter/#purpose","title":"Purpose","text":"This package converts autoware_auto_vehicle_msgs::msg::VehicleReport message to geometry_msgs::msg::TwistWithCovarianceStamped for gyro odometer node.
"},{"location":"sensing/vehicle_velocity_converter/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"sensing/vehicle_velocity_converter/#input","title":"Input","text":"Name Type Descriptionvelocity_status
autoware_auto_vehicle_msgs::msg::VehicleReport
vehicle velocity"},{"location":"sensing/vehicle_velocity_converter/#output","title":"Output","text":"Name Type Description twist_with_covariance
geometry_msgs::msg::TwistWithCovarianceStamped
twist with covariance converted from VehicleReport"},{"location":"sensing/vehicle_velocity_converter/#parameters","title":"Parameters","text":"Name Type Description speed_scale_factor
double speed scale factor (ideal value is 1.0) frame_id
string frame id for output message velocity_stddev_xx
double standard deviation for vx angular_velocity_stddev_zz
double standard deviation for yaw rate"},{"location":"simulator/dummy_perception_publisher/","title":"dummy_perception_publisher","text":""},{"location":"simulator/dummy_perception_publisher/#dummy_perception_publisher","title":"dummy_perception_publisher","text":""},{"location":"simulator/dummy_perception_publisher/#purpose","title":"Purpose","text":"This node publishes the result of the dummy detection with the type of perception.
"},{"location":"simulator/dummy_perception_publisher/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/dummy_perception_publisher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/dummy_perception_publisher/#input","title":"Input","text":"Name Type Description/tf
tf2_msgs/TFMessage
TF (self-pose) input/object
dummy_perception_publisher::msg::Object
dummy detection objects"},{"location":"simulator/dummy_perception_publisher/#output","title":"Output","text":"Name Type Description output/dynamic_object
tier4_perception_msgs::msg::DetectedObjectsWithFeature
dummy detection objects output/points_raw
sensor_msgs::msg::PointCloud2
point cloud of objects output/debug/ground_truth_objects
autoware_auto_perception_msgs::msg::TrackedObjects
ground truth objects"},{"location":"simulator/dummy_perception_publisher/#parameters","title":"Parameters","text":"Name Type Default Value Explanation visible_range
double 100.0 sensor visible range [m] detection_successful_rate
double 0.8 sensor detection rate. (min) 0.0 - 1.0(max) enable_ray_tracing
bool true if True, use ray tracking use_object_recognition
bool true if True, publish objects topic use_base_link_z
bool true if True, node uses z coordinate of ego base_link publish_ground_truth
bool false if True, publish ground truth objects use_fixed_random_seed
bool false if True, use fixed random seed random_seed
int 0 random seed"},{"location":"simulator/dummy_perception_publisher/#node-parameters","title":"Node Parameters","text":"None.
"},{"location":"simulator/dummy_perception_publisher/#core-parameters","title":"Core Parameters","text":"None.
"},{"location":"simulator/dummy_perception_publisher/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"simulator/fault_injection/","title":"fault_injection","text":""},{"location":"simulator/fault_injection/#fault_injection","title":"fault_injection","text":""},{"location":"simulator/fault_injection/#purpose","title":"Purpose","text":"This package is used to convert pseudo system faults from PSim to Diagnostics and notify Autoware. The component diagram is as follows:
"},{"location":"simulator/fault_injection/#test","title":"Test","text":"source install/setup.bash\ncd fault_injection\nlaunch_test test/test_fault_injection_node.test.py\n
"},{"location":"simulator/fault_injection/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"simulator/fault_injection/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"simulator/fault_injection/#input","title":"Input","text":"Name Type Description ~/input/simulation_events
tier4_simulation_msgs::msg::SimulationEvents
simulation events"},{"location":"simulator/fault_injection/#output","title":"Output","text":"None.
"},{"location":"simulator/fault_injection/#parameters","title":"Parameters","text":"None.
"},{"location":"simulator/fault_injection/#node-parameters","title":"Node Parameters","text":"None.
"},{"location":"simulator/fault_injection/#core-parameters","title":"Core Parameters","text":"None.
"},{"location":"simulator/fault_injection/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"simulator/simple_planning_simulator/","title":"simple_planning_simulator","text":""},{"location":"simulator/simple_planning_simulator/#simple_planning_simulator","title":"simple_planning_simulator","text":""},{"location":"simulator/simple_planning_simulator/#purpose-use-cases","title":"Purpose / Use cases","text":"This node simulates the vehicle motion for a vehicle command in 2D using a simple vehicle model.
"},{"location":"simulator/simple_planning_simulator/#design","title":"Design","text":"The purpose of this simulator is for the integration test of planning and control modules. This does not simulate sensing or perception, but is implemented in pure c++ only and works without GPU.
"},{"location":"simulator/simple_planning_simulator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"geometry_msgs/msg/PoseWithCovarianceStamped
] : for initial poseautoware_auto_msgs/msg/AckermannControlCommand
] : target command to drive a vehicleautoware_auto_msgs/msg/AckermannControlCommand
] : manual target command to drive a vehicle (used when control_mode_request = Manual)autoware_auto_vehicle_msgs/msg/GearCommand
] : target gear command.autoware_auto_vehicle_msgs/msg/GearCommand
] : target gear command (used when control_mode_request = Manual)autoware_auto_vehicle_msgs/msg/TurnIndicatorsCommand
] : target turn indicator commandautoware_auto_vehicle_msgs/msg/HazardLightsCommand
] : target hazard lights commandtier4_vehicle_msgs::srv::ControlModeRequest
] : mode change for Auto/Manual drivingtf2_msgs/msg/TFMessage
] : simulated vehicle pose (base_link)nav_msgs/msg/Odometry
] : simulated vehicle pose and twistautoware_auto_vehicle_msgs/msg/SteeringReport
] : simulated steering angleautoware_auto_vehicle_msgs/msg/ControlModeReport
] : current control mode (Auto/Manual)autoware_auto_vehicle_msgs/msg/ControlModeReport
] : simulated gearautoware_auto_vehicle_msgs/msg/ControlModeReport
] : simulated turn indicator statusautoware_auto_vehicle_msgs/msg/ControlModeReport
] : simulated hazard lights statusinput/initialpose
topic is published. \"INITIAL_POSE_TOPIC\" add_measurement_noise bool If true, the Gaussian noise is added to the simulated results. true pos_noise_stddev double Standard deviation for position noise 0.01 rpy_noise_stddev double Standard deviation for Euler angle noise 0.0001 vel_noise_stddev double Standard deviation for longitudinal velocity noise 0.0 angvel_noise_stddev double Standard deviation for angular velocity noise 0.0 steer_noise_stddev double Standard deviation for steering angle noise 0.0001 measurement_steer_bias double Measurement bias for steering angle 0.0"},{"location":"simulator/simple_planning_simulator/#vehicle-model-parameters","title":"Vehicle Model Parameters","text":""},{"location":"simulator/simple_planning_simulator/#vehicle_model_type-options","title":"vehicle_model_type options","text":"IDEAL_STEER_VEL
IDEAL_STEER_ACC
IDEAL_STEER_ACC_GEARED
DELAY_STEER_VEL
DELAY_STEER_ACC
DELAY_STEER_ACC_GEARED
DELAY_STEER_MAP_ACC_GEARED
: applies 1D dynamics and time delay to the steering and acceleration commands. The simulated acceleration is determined by a value converted through the provided acceleration map. This model is valuable for an accurate simulation with acceleration deviations in a real vehicle.The IDEAL
model moves ideally as commanded, while the DELAY
model moves based on a 1st-order with time delay model. The STEER
means the model receives the steer command. The VEL
means the model receives the target velocity command, while the ACC
model receives the target acceleration command. The GEARED
suffix means that the motion will consider the gear command: the vehicle moves only one direction following the gear.
The table below shows which models correspond to what parameters. The model names are written in abbreviated form (e.g. IDEAL_STEER_VEL = I_ST_V).
Name Type Description I_ST_V I_ST_A I_ST_A_G D_ST_V D_ST_A D_ST_A_G D_ST_M_ACC_G Default value unit acc_time_delay double dead time for the acceleration input x x x x o o o 0.1 [s] steer_time_delay double dead time for the steering input x x x o o o o 0.24 [s] vel_time_delay double dead time for the velocity input x x x o x x x 0.25 [s] acc_time_constant double time constant of the 1st-order acceleration dynamics x x x x o o o 0.1 [s] steer_time_constant double time constant of the 1st-order steering dynamics x x x o o o o 0.27 [s] steer_dead_band double dead band for steering angle x x x o o o x 0.0 [rad] vel_time_constant double time constant of the 1st-order velocity dynamics x x x o x x x 0.5 [s] vel_lim double limit of velocity x x x o o o o 50.0 [m/s] vel_rate_lim double limit of acceleration x x x o o o o 7.0 [m/ss] steer_lim double limit of steering angle x x x o o o o 1.0 [rad] steer_rate_lim double limit of steering angle change rate x x x o o o o 5.0 [rad/s] debug_acc_scaling_factor double scaling factor for accel command x x x x o o x 1.0 [-] debug_steer_scaling_factor double scaling factor for steer command x x x x o o x 1.0 [-] acceleration_map_path string path to csv file for acceleration map which converts velocity and ideal acceleration to actual acceleration x x x x x x o - [-]The acceleration_map
is used only for DELAY_STEER_MAP_ACC_GEARED
and it shows the acceleration command on the vertical axis and the current velocity on the horizontal axis, with each cell representing the converted acceleration command that is actually used in the simulator's motion calculation. Values in between are linearly interpolated.
Example of acceleration_map.csv
default, 0.00, 1.39, 2.78, 4.17, 5.56, 6.94, 8.33, 9.72, 11.11, 12.50, 13.89, 15.28, 16.67\n-4.0, -4.40, -4.36, -4.38, -4.12, -4.20, -3.94, -3.98, -3.80, -3.77, -3.76, -3.59, -3.50, -3.40\n-3.5, -4.00, -3.91, -3.85, -3.64, -3.68, -3.55, -3.42, -3.24, -3.25, -3.00, -3.04, -2.93, -2.80\n-3.0, -3.40, -3.37, -3.33, -3.00, -3.00, -2.90, -2.88, -2.65, -2.43, -2.44, -2.43, -2.39, -2.30\n-2.5, -2.80, -2.72, -2.72, -2.62, -2.41, -2.43, -2.26, -2.18, -2.11, -2.03, -1.96, -1.91, -1.85\n-2.0, -2.30, -2.24, -2.12, -2.02, -1.92, -1.81, -1.67, -1.58, -1.51, -1.49, -1.40, -1.35, -1.30\n-1.5, -1.70, -1.61, -1.47, -1.46, -1.40, -1.37, -1.29, -1.24, -1.10, -0.99, -0.83, -0.80, -0.78\n-1.0, -1.30, -1.28, -1.10, -1.09, -1.04, -1.02, -0.98, -0.89, -0.82, -0.61, -0.52, -0.54, -0.56\n-0.8, -0.96, -0.90, -0.82, -0.74, -0.70, -0.65, -0.63, -0.59, -0.55, -0.44, -0.39, -0.39, -0.35\n-0.6, -0.77, -0.71, -0.67, -0.65, -0.58, -0.52, -0.51, -0.50, -0.40, -0.33, -0.30, -0.31, -0.30\n-0.4, -0.45, -0.40, -0.45, -0.44, -0.38, -0.35, -0.31, -0.30, -0.26, -0.30, -0.29, -0.31, -0.25\n-0.2, -0.24, -0.24, -0.25, -0.22, -0.23, -0.25, -0.27, -0.29, -0.24, -0.22, -0.17, -0.18, -0.12\n 0.0, 0.00, 0.00, -0.05, -0.05, -0.05, -0.05, -0.08, -0.08, -0.08, -0.08, -0.10, -0.10, -0.10\n 0.2, 0.16, 0.12, 0.02, 0.02, 0.00, 0.00, -0.05, -0.05, -0.05, -0.05, -0.08, -0.08, -0.08\n 0.4, 0.38, 0.30, 0.22, 0.25, 0.24, 0.23, 0.20, 0.16, 0.16, 0.14, 0.10, 0.05, 0.05\n 0.6, 0.52, 0.52, 0.51, 0.49, 0.43, 0.40, 0.35, 0.33, 0.33, 0.33, 0.32, 0.34, 0.34\n 0.8, 0.82, 0.81, 0.78, 0.68, 0.63, 0.56, 0.53, 0.48, 0.43, 0.41, 0.37, 0.38, 0.40\n 1.0, 1.00, 1.08, 1.01, 0.88, 0.76, 0.69, 0.66, 0.58, 0.54, 0.49, 0.45, 0.40, 0.40\n 1.5, 1.52, 1.50, 1.38, 1.26, 1.14, 1.03, 0.91, 0.82, 0.67, 0.61, 0.51, 0.41, 0.41\n 2.0, 1.80, 1.80, 1.64, 1.43, 1.25, 1.11, 0.96, 0.81, 0.70, 0.59, 0.51, 0.42, 0.42\n
Note: The steering/velocity/acceleration dynamics is modeled by a first order system with a deadtime in a delay model. The definition of the time constant is the time it takes for the step response to rise up to 63% of its final value. The deadtime is a delay in the response to a control input.
"},{"location":"simulator/simple_planning_simulator/#default-tf-configuration","title":"Default TF configuration","text":"Since the vehicle outputs odom
->base_link
tf, this simulator outputs the tf with the same frame_id configuration. In the simple_planning_simulator.launch.py, the node that outputs the map
->odom
tf, that usually estimated by the localization module (e.g. NDT), will be launched as well. Since the tf output by this simulator module is an ideal value, odom
->map
will always be 0.
Ego vehicle pitch angle is calculated in the following manner.
NOTE: driving against the line direction (as depicted in image's bottom row) is not supported and only shown for illustration purposes.
"},{"location":"simulator/simple_planning_simulator/#error-detection-and-handling","title":"Error detection and handling","text":"The only validation on inputs being done is testing for a valid vehicle model type.
"},{"location":"simulator/simple_planning_simulator/#security-considerations","title":"Security considerations","text":""},{"location":"simulator/simple_planning_simulator/#references-external-links","title":"References / External links","text":"This is originally developed in the Autoware.AI. See the link below.
https://github.com/Autoware-AI/simulation/tree/master/wf_simulator
"},{"location":"simulator/simple_planning_simulator/#future-extensions-unimplemented-parts","title":"Future extensions / Unimplemented parts","text":"This package is used to convert autoware_msgs
to autoware_auto_msgs
.
As we transition from autoware_auto_msgs
to autoware_msgs
, we wanted to provide flexibility and compatibility for users who are still using autoware_auto_msgs
.
This adapter package allows users to easily convert messages between the two formats.
"},{"location":"system/autoware_auto_msgs_adapter/#capabilities","title":"Capabilities","text":"The autoware_auto_msgs_adapter
package provides the following capabilities:
autoware_msgs
messages to autoware_auto_msgs
messages.Customize the adapter configuration by replicating and editing the autoware_auto_msgs_adapter_control.param.yaml
file located in the autoware_auto_msgs_adapter/config
directory. Example configuration:
/**:\nros__parameters:\nmsg_type_target: \"autoware_auto_control_msgs/msg/AckermannControlCommand\"\ntopic_name_source: \"/control/command/control_cmd\"\ntopic_name_target: \"/control/command/control_cmd_auto\"\n
Set the msg_type_target
parameter to the desired target message type from autoware_auto_msgs
.
Make sure that the msg_type_target
has the correspondence in either:
AutowareAutoMsgsAdapterNode::create_adapter_map()
method.(If this package is maintained correctly, they should match each other.)
Launch the adapter node by any of the following methods:
"},{"location":"system/autoware_auto_msgs_adapter/#ros2-launch","title":"ros2 launch
","text":"ros2 launch autoware_auto_msgs_adapter autoware_auto_msgs_adapter.launch.xml param_path:='full_path_to_param_file'\n
Make sure to set the param_path
argument to the full path of the parameter file.
Alternatively,
ros2 run
","text":"ros2 run autoware_auto_msgs_adapter autoware_auto_msgs_adapter_exe --ros-args --params-file 'full_path_to_param_file'\n
Make sure to set the param_path
argument to the full path of the parameter file.
The entry point for the adapter executable is created with RCLCPP_COMPONENTS_REGISTER_NODE
the autoware_auto_msgs_adapter_core.cpp.
This allows it to be launched as a component or as a standalone node.
In the AutowareAutoMsgsAdapterNode
constructor, the adapter is selected by the type string provided in the configuration file. The adapter is then initialized with the topic names provided.
The constructors of the adapters are responsible for creating the publisher and subscriber (which makes use of the conversion method).
"},{"location":"system/autoware_auto_msgs_adapter/#adding-a-new-message-pair","title":"Adding a new message pair","text":"To add a new message pair,
AutowareAutoMsgsAdapterNode::create_adapter_map()
method of the adapter node:definitions:autoware_auto_msgs_adapter:properties:msg_type_target:enum
section.CMakeLists.txt
file as it will automatically detect the new test file.Also make sure to test the new adapter with:
colcon test --event-handlers console_cohesion+ --packages-select autoware_auto_msgs_adapter\n
"},{"location":"system/bluetooth_monitor/","title":"bluetooth_monitor","text":""},{"location":"system/bluetooth_monitor/#macro-rendering-error","title":"Macro Rendering Error","text":"File: system/bluetooth_monitor/README.md
FileNotFoundError: [Errno 2] No such file or directory: 'system/bluetooth_monitor/schema/bluetooth_monitor.schema.json'
Traceback (most recent call last):\n File \"/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/mkdocs_macros/plugin.py\", line 527, in render\n return md_template.render(**page_variables)\n File \"/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/jinja2/environment.py\", line 1301, in render\n self.environment.handle_exception()\n File \"/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/jinja2/environment.py\", line 936, in handle_exception\n raise rewrite_traceback_stack(source=source)\n File \"<template>\", line 49, in top-level template code\n File \"/home/runner/work/autoware.universe/autoware.universe/mkdocs_macros.py\", line 70, in json_to_markdown\n with open(json_schema_file_path) as f:\nFileNotFoundError: [Errno 2] No such file or directory: 'system/bluetooth_monitor/schema/bluetooth_monitor.schema.json'\n
"},{"location":"system/component_state_monitor/","title":"component_state_monitor","text":""},{"location":"system/component_state_monitor/#component_state_monitor","title":"component_state_monitor","text":"The component state monitor checks the state of each component using topic state monitor. This is an implementation for backward compatibility with the AD service state monitor. It will be replaced in the future using a diagnostics tree.
"},{"location":"system/default_ad_api/","title":"default_ad_api","text":""},{"location":"system/default_ad_api/#default_ad_api","title":"default_ad_api","text":""},{"location":"system/default_ad_api/#features","title":"Features","text":"This package is a default implementation AD API.
This is a sample to call API using HTTP.
"},{"location":"system/default_ad_api/#guide-message-script","title":"Guide message script","text":"This is a debug script to check the conditions for transition to autonomous mode.
$ ros2 run default_ad_api guide.py\n\nThe vehicle pose is not estimated. Please set an initial pose or check GNSS.\nThe route is not set. Please set a goal pose.\nThe topic rate error is detected. Please check [control,planning] components.\nThe vehicle is ready. Please change the operation mode to autonomous.\nThe vehicle is driving autonomously.\nThe vehicle has reached the goal of the route. Please reset a route.\n
"},{"location":"system/default_ad_api/document/autoware-state/","title":"Autoware state compatibility","text":""},{"location":"system/default_ad_api/document/autoware-state/#autoware-state-compatibility","title":"Autoware state compatibility","text":""},{"location":"system/default_ad_api/document/autoware-state/#overview","title":"Overview","text":"Since /autoware/state
was so widely used, default_ad_api creates it from the states of AD API for backwards compatibility. The diagnostic checks that ad_service_state_monitor used to perform have been replaced by component_state_monitor. The service /autoware/shutdown
to change autoware state to finalizing is also supported for compatibility.
This is the correspondence between AD API states and autoware states. The launch state is the data that the default_ad_api node holds internally.
"},{"location":"system/default_ad_api/document/fail-safe/","title":"Fail-safe API","text":""},{"location":"system/default_ad_api/document/fail-safe/#fail-safe-api","title":"Fail-safe API","text":""},{"location":"system/default_ad_api/document/fail-safe/#overview","title":"Overview","text":"The fail-safe API simply relays the MRM state. See the autoware-documentation for AD API specifications.
"},{"location":"system/default_ad_api/document/interface/","title":"Interface API","text":""},{"location":"system/default_ad_api/document/interface/#interface-api","title":"Interface API","text":""},{"location":"system/default_ad_api/document/interface/#overview","title":"Overview","text":"The interface API simply returns a version number. See the autoware-documentation for AD API specifications.
"},{"location":"system/default_ad_api/document/localization/","title":"Localization API","text":""},{"location":"system/default_ad_api/document/localization/#localization-api","title":"Localization API","text":""},{"location":"system/default_ad_api/document/localization/#overview","title":"Overview","text":"Unify the location initialization method to the service. The topic /initialpose
from rviz is now only subscribed to by adapter node and converted to API call. This API call is forwarded to the pose initializer node so it can centralize the state of pose initialization. For other nodes that require initialpose, pose initializer node publishes as /initialpose3d
. See the autoware-documentation for AD API specifications.
Provides a hook for when the vehicle starts. It is typically used for announcements that call attention to the surroundings. A pause function is added to the vehicle_cmd_gate, and the API controls it based on whether the vehicle is stopped and whether a start has been requested. See the autoware-documentation for AD API specifications.
"},{"location":"system/default_ad_api/document/motion/#states","title":"States","text":"The implementation has more detailed state transitions to manage pause state synchronization. The correspondence with the AD API state is as follows.
"},{"location":"system/default_ad_api/document/operation-mode/","title":"Operation mode API","text":""},{"location":"system/default_ad_api/document/operation-mode/#operation-mode-api","title":"Operation mode API","text":""},{"location":"system/default_ad_api/document/operation-mode/#overview","title":"Overview","text":"Introduce operation mode. It handles autoware engage, gate_mode, external_cmd_selector and control_mode abstractly. When the mode is changed, it will be in-transition state, and if the transition completion condition to that mode is not satisfied, it will be returned to the previous mode. Also, currently, the condition for mode change is only WaitingForEngage
in /autoware/state
, and the engage state is shared between modes. After introducing the operation mode, each mode will have a transition available flag. See the autoware-documentation for AD API specifications.
The operation mode has the following state transitions. Disabling autoware control, or changing the operation mode while autoware control is disabled, takes effect immediately. In contrast, enabling autoware control, or changing the operation mode while autoware control is enabled, puts the system into a transition state. If the mode change completion condition is not satisfied within the timeout while in the transition state, it returns to the previous mode.
"},{"location":"system/default_ad_api/document/operation-mode/#compatibility","title":"Compatibility","text":"Ideally, vehicle_cmd_gate and external_cmd_selector should be merged so that the operation mode can be handled directly. However, currently the operation mode transition manager performs the following conversions to match the implementation. The transition manager monitors each topic in the previous interface and synchronizes the operation mode when it changes. When the operation mode is changed with the new interface, the transition manager disables synchronization and changes the operation mode using the previous interface.
"},{"location":"system/default_ad_api/document/routing/","title":"Routing API","text":""},{"location":"system/default_ad_api/document/routing/#routing-api","title":"Routing API","text":""},{"location":"system/default_ad_api/document/routing/#overview","title":"Overview","text":"Unify the route setting method to the service. This API supports two waypoint formats, poses and lanelet segments. The goal and checkpoint topics from rviz is only subscribed to by adapter node and converted to API call. This API call is forwarded to the mission planner node so it can centralize the state of routing. For other nodes that require route, mission planner node publishes as /planning/mission_planning/route
. See the autoware-documentation for AD API specifications.
This node makes it easy to use the localization AD API from RViz. When an initial pose topic is received, it calls the localization initialize API. This node depends on the map height fitter library. See here for more details.
Interface Local Name Global Name Description Subscription initialpose /initialpose The pose for localization initialization. Client - /api/localization/initialize The localization initialize API."},{"location":"system/default_ad_api_helpers/ad_api_adaptors/#routing_adaptor","title":"routing_adaptor","text":"This node makes it easy to use the routing AD API from RViz. When a goal pose topic is received, it resets the waypoints and calls the API. When a waypoint pose topic is received, it is appended to the end of the waypoints and the API is called again. The clear API is called automatically before setting the route.
Interface Local Name Global Name Description Subscription - /api/routing/state The state of the routing API. Subscription ~/input/fixed_goal /planning/mission_planning/goal The goal pose of route. Disable goal modification. Subscription ~/input/rough_goal /rviz/routing/rough_goal The goal pose of route. Enable goal modification. Subscription ~/input/reroute /rviz/routing/reroute The goal pose of reroute. Subscription ~/input/waypoint /planning/mission_planning/checkpoint The waypoint pose of route. Client - /api/routing/clear_route The route clear API. Client - /api/routing/set_route_points The route points set API. Client - /api/routing/change_route_points The route points change API."},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/","title":"automatic_pose_initializer","text":""},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/#automatic_pose_initializer","title":"automatic_pose_initializer","text":""},{"location":"system/default_ad_api_helpers/automatic_pose_initializer/#automatic_pose_initializer_1","title":"automatic_pose_initializer","text":"This node calls localization initialize API when the localization initialization state is uninitialized. Since the API uses GNSS pose when no pose is specified, initialization using GNSS can be performed automatically.
Interface Local Name Global Name Description Subscription - /api/localization/initialization_state The localization initialization state API. Client - /api/localization/initialize The localization initialize API."},{"location":"system/diagnostic_graph_aggregator/","title":"diagnostic_graph_aggregator","text":""},{"location":"system/diagnostic_graph_aggregator/#diagnostic_graph_aggregator","title":"diagnostic_graph_aggregator","text":""},{"location":"system/diagnostic_graph_aggregator/#overview","title":"Overview","text":"The diagnostic graph aggregator node subscribes to the diagnostic array and publishes an aggregated diagnostic graph. As shown in the diagram below, this node introduces extra diagnostic statuses for intermediate functional units. The diagnostic status dependencies form a directed acyclic graph (DAG).
"},{"location":"system/diagnostic_graph_aggregator/#diagnostics-graph-message","title":"Diagnostics graph message","text":"The diagnostics graph that this node outputs is a combination of diagnostic status and connections between them. This graph consists of an array of diagnostic nodes, and each node has a status and links. This link contains an index indicating the position of the node in the graph. Therefore, the graph can be reconstructed from the array of nodes using links. The following is an example of a message representing the graph in the overview section.
"},{"location":"system/diagnostic_graph_aggregator/#operation-mode-availability","title":"Operation mode availability","text":"For MRM, this node publishes the status of the top-level functional units in the dedicated message. Therefore, the diagnostic graph must contain functional units with the following names. This feature breaks the generality of the graph and may be changed to a plugin or another node in the future.
/diagnostics
diagnostic_msgs/msg/DiagnosticArray
Diagnostics input. publisher /diagnostics_graph
tier4_system_msgs/msg/DiagnosticGraph
Diagnostics graph. publisher /system/operation_mode/availability
tier4_system_msgs/msg/OperationModeAvailability
Operation mode availability."},{"location":"system/diagnostic_graph_aggregator/#parameters","title":"Parameters","text":"Parameter Name Data Type Description graph_file
string
Path of the config file. rate
double
Rate of aggregation and topic publication. input_qos_depth
uint
QoS depth of input array topic. graph_qos_depth
uint
QoS depth of output graph topic. use_operation_mode_availability
bool
Use operation mode availability publisher. use_debug_mode
bool
Use debug output to stdout."},{"location":"system/diagnostic_graph_aggregator/#examples","title":"Examples","text":"ros2 launch diagnostic_graph_aggregator example.launch.xml\n
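For reference, a graph configuration loaded by such a launch file might look like the following minimal sketch, built only from the objects described in the format reference below (all file paths and node names are illustrative):
files:\n- { package: diagnostic_graph_aggregator, path: example/example_0.yaml } # hypothetical include\nnodes:\n- type: and\nname: /functional_units/example # illustrative functional unit\nlist:\n- { type: unit, name: /functional_units/another } # reference to another unit\n- { type: diag, diag: \"example_diag_name\" } # reference to a source diagnostic\n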
"},{"location":"system/diagnostic_graph_aggregator/#graph-file-format","title":"Graph file format","text":"And is a node that is evaluated as the AND of the input nodes.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/and/#format","title":"Format","text":"Name Type Required Description type string yes Specifyand
when using this object. name string yes Name of diagnostic status. list List<Diag|Unit> yes List of input node references."},{"location":"system/diagnostic_graph_aggregator/doc/format/diag/","title":"Diag","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/diag/#diag","title":"Diag","text":"Diag is a node that refers to a source diagnostics.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/diag/#format","title":"Format","text":"Name Type Required Description type string yes Specifydiag
when using this object. diag string yes Name of diagnostic status."},{"location":"system/diagnostic_graph_aggregator/doc/format/graph-file/","title":"GraphFile","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/graph-file/#graphfile","title":"GraphFile","text":"GraphFile is the top level object that makes up the configuration file.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/graph-file/#format","title":"Format","text":"Name Type Required Description files List<Path> no Paths of the files to include. nodes List<Node> no Nodes of the diagnostic graph."},{"location":"system/diagnostic_graph_aggregator/doc/format/node/","title":"Node","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/node/#node","title":"Node","text":"Node is a base object that makes up the diagnostic graph.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/node/#format","title":"Format","text":"Name Type Required Description type string yes Node type. See derived objects for details."},{"location":"system/diagnostic_graph_aggregator/doc/format/or/","title":"Unit","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/or/#unit","title":"Unit","text":"Or is a node that is evaluated as the OR of the input nodes.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/or/#format","title":"Format","text":"Name Type Required Description type string yes Specifyor
when using this object. name string yes Name of diagnostic status. list List<Diag|Unit> yes List of input node references."},{"location":"system/diagnostic_graph_aggregator/doc/format/path/","title":"Path","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/path/#path","title":"Path","text":"Path is an object that indicates the path of the file to include.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/path/#format","title":"Format","text":"Name Type Required Description package string yes Package name. path string yes Relative path in the package."},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/","title":"Unit","text":""},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/#unit","title":"Unit","text":"Diag is a node that refers to a functional unit.
"},{"location":"system/diagnostic_graph_aggregator/doc/format/unit/#format","title":"Format","text":"Name Type Required Description type string yes Specifyunit
when using this object. name string yes Name of diagnostic status."},{"location":"system/dummy_diag_publisher/","title":"dummy_diag_publisher","text":""},{"location":"system/dummy_diag_publisher/#dummy_diag_publisher","title":"dummy_diag_publisher","text":""},{"location":"system/dummy_diag_publisher/#purpose","title":"Purpose","text":"This package outputs dummy diagnostic data for debugging and development.
"},{"location":"system/dummy_diag_publisher/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/dummy_diag_publisher/#outputs","title":"Outputs","text":"Name Type Description/diagnostics
diagnostic_msgs::msgs::DiagnosticArray
Diagnostics outputs"},{"location":"system/dummy_diag_publisher/#parameters","title":"Parameters","text":""},{"location":"system/dummy_diag_publisher/#node-parameters","title":"Node Parameters","text":"The parameter DIAGNOSTIC_NAME
must be a name that exists in the parameter YAML file. If the parameter status
is given from a command line, the parameter is_active
is automatically set to true
.
update_rate
int 10
Timer callback period [Hz] false DIAGNOSTIC_NAME.is_active
bool true
Force update or not true DIAGNOSTIC_NAME.status
string \"OK\"
diag status set by dummy diag publisher true"},{"location":"system/dummy_diag_publisher/#yaml-format-for-dummy_diag_publisher","title":"YAML format for dummy_diag_publisher","text":"If the value is default
, the default value will be set.
required_diags.DIAGNOSTIC_NAME.is_active
bool true
Force update or not required_diags.DIAGNOSTIC_NAME.status
string \"OK\"
diag status set by dummy diag publisher"},{"location":"system/dummy_diag_publisher/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"system/dummy_diag_publisher/#usage","title":"Usage","text":""},{"location":"system/dummy_diag_publisher/#launch","title":"launch","text":"ros2 launch dummy_diag_publisher dummy_diag_publisher.launch.xml\n
"},{"location":"system/dummy_diag_publisher/#reconfigure","title":"reconfigure","text":"ros2 param set /dummy_diag_publisher velodyne_connection.status \"Warn\"\nros2 param set /dummy_diag_publisher velodyne_connection.is_active true\n
"},{"location":"system/dummy_infrastructure/","title":"dummy_infrastructure","text":""},{"location":"system/dummy_infrastructure/#dummy_infrastructure","title":"dummy_infrastructure","text":"This is a debug node for infrastructure communication.
"},{"location":"system/dummy_infrastructure/#usage","title":"Usage","text":"ros2 launch dummy_infrastructure dummy_infrastructure.launch.xml\nros2 run rqt_reconfigure rqt_reconfigure\n
"},{"location":"system/dummy_infrastructure/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/dummy_infrastructure/#inputs","title":"Inputs","text":"Name Type Description ~/input/command_array
tier4_v2x_msgs::msg::InfrastructureCommandArray
Infrastructure command"},{"location":"system/dummy_infrastructure/#outputs","title":"Outputs","text":"Name Type Description ~/output/state_array
tier4_v2x_msgs::msg::VirtualTrafficLightStateArray
Virtual traffic light array"},{"location":"system/dummy_infrastructure/#parameters","title":"Parameters","text":""},{"location":"system/dummy_infrastructure/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation update_rate
int 10
Timer callback period [Hz] use_first_command
bool true
Consider instrument id or not instrument_id
string `` Used as command id approval
bool false
Set the approval field from a ROS param. is_finalized
bool false
Stop at stop_line if finalization isn't completed"},{"location":"system/dummy_infrastructure/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
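Putting the node parameters above together, a parameter file for this node might look like the following minimal sketch (values follow the defaults in the table):
/**:\nros__parameters:\nupdate_rate: 10\nuse_first_command: true\ninstrument_id: \"\"\napproval: false\nis_finalized: false\n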
"},{"location":"system/duplicated_node_checker/","title":"Duplicated Node Checker","text":""},{"location":"system/duplicated_node_checker/#duplicated-node-checker","title":"Duplicated Node Checker","text":""},{"location":"system/duplicated_node_checker/#purpose","title":"Purpose","text":"This node monitors the ROS 2 environments and detect duplication of node names in the environment. The result is published as diagnostics.
"},{"location":"system/duplicated_node_checker/#standalone-startup","title":"Standalone Startup","text":"ros2 launch duplicated_node_checker duplicated_node_checker.launch.xml\n
"},{"location":"system/duplicated_node_checker/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The types of topic status and corresponding diagnostic status are following.
Duplication status Diagnostic status DescriptionOK
OK No duplication is detected Duplicated Detected
ERROR Duplication is detected"},{"location":"system/duplicated_node_checker/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/duplicated_node_checker/#output","title":"Output","text":"Name Type Description /diagnostics
diagnostic_msgs/DiagnosticArray
Diagnostics outputs"},{"location":"system/duplicated_node_checker/#parameters","title":"Parameters","text":"Name Type Description Default Range update_rate float The scanning and update frequency of the checker. 10 >2"},{"location":"system/duplicated_node_checker/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"},{"location":"system/emergency_handler/","title":"emergency_handler","text":""},{"location":"system/emergency_handler/#emergency_handler","title":"emergency_handler","text":""},{"location":"system/emergency_handler/#purpose","title":"Purpose","text":"Emergency Handler is a node to select proper MRM from from system failure state contained in HazardStatus.
"},{"location":"system/emergency_handler/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/emergency_handler/#state-transitions","title":"State Transitions","text":""},{"location":"system/emergency_handler/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/emergency_handler/#input","title":"Input","text":"Name Type Description/system/emergency/hazard_status
autoware_auto_system_msgs::msg::HazardStatusStamped
Used to select proper MRM from system failure state contained in HazardStatus /control/vehicle_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
Used as reference when generating the Emergency Control Command /localization/kinematic_state
nav_msgs::msg::Odometry
Used to decide whether vehicle is stopped or not /vehicle/status/control_mode
autoware_auto_vehicle_msgs::msg::ControlModeReport
Used to check vehicle mode: autonomous or manual /system/api/mrm/comfortable_stop/status
tier4_system_msgs::msg::MrmBehaviorStatus
Used to check if MRM comfortable stop operation is available /system/api/mrm/emergency_stop/status
tier4_system_msgs::msg::MrmBehaviorStatus
Used to check if MRM emergency stop operation is available"},{"location":"system/emergency_handler/#output","title":"Output","text":"Name Type Description /system/emergency/shift_cmd
autoware_auto_vehicle_msgs::msg::GearCommand
Required to execute proper MRM (send gear cmd) /system/emergency/hazard_cmd
autoware_auto_vehicle_msgs::msg::HazardLightsCommand
Required to execute proper MRM (send turn signal cmd) /api/fail_safe/mrm_state
autoware_adapi_v1_msgs::msg::MrmState
Inform MRM execution state and selected MRM behavior /system/api/mrm/comfortable_stop/operate
tier4_system_msgs::srv::OperateMrm
Execution order for MRM comfortable stop /system/api/mrm/emergency_stop/operate
tier4_system_msgs::srv::OperateMrm
Execution order for MRM emergency stop"},{"location":"system/emergency_handler/#parameters","title":"Parameters","text":"Name Type Description Default Range update_rate integer Timer callback period. 10 N/A timeout_hazard_status float If the input hazard_status
topic cannot be received for more than timeout_hazard_status
, the vehicle will make an emergency stop. 0.5 N/A timeout_takeover_request float Transition to MRM_OPERATING if the time from the last takeover request exceeds timeout_takeover_request
. 10.0 N/A use_takeover_request boolean If this parameter is true, the handler will record the time and make a takeover request to the driver when an emergency state occurs. false N/A use_parking_after_stopped boolean If this parameter is true, it will publish the PARKING shift command. false N/A use_comfortable_stop boolean If this parameter is true, the comfortable stop is operated when latent faults occur. false N/A turning_hazard_on.emergency boolean If this parameter is true, hazard lamps will be turned on during the emergency state. true N/A"},{"location":"system/emergency_handler/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
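As a concrete illustration of the parameters above, a configuration might look like this minimal sketch (values follow the defaults in the table):
/**:\nros__parameters:\nupdate_rate: 10\ntimeout_hazard_status: 0.5\ntimeout_takeover_request: 10.0\nuse_takeover_request: false\nuse_parking_after_stopped: false\nuse_comfortable_stop: false\nturning_hazard_on:\nemergency: true\n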
"},{"location":"system/mrm_comfortable_stop_operator/","title":"mrm_comfortable_stop_operator","text":""},{"location":"system/mrm_comfortable_stop_operator/#mrm_comfortable_stop_operator","title":"mrm_comfortable_stop_operator","text":""},{"location":"system/mrm_comfortable_stop_operator/#purpose","title":"Purpose","text":"MRM comfortable stop operator is a node that generates comfortable stop commands according to the comfortable stop MRM order.
"},{"location":"system/mrm_comfortable_stop_operator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_comfortable_stop_operator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_comfortable_stop_operator/#input","title":"Input","text":"Name Type Description~/input/mrm/comfortable_stop/operate
tier4_system_msgs::srv::OperateMrm
MRM execution order"},{"location":"system/mrm_comfortable_stop_operator/#output","title":"Output","text":"Name Type Description ~/output/mrm/comfortable_stop/status
tier4_system_msgs::msg::MrmBehaviorStatus
MRM execution status ~/output/velocity_limit
tier4_planning_msgs::msg::VelocityLimit
Velocity limit command ~/output/velocity_limit/clear
tier4_planning_msgs::msg::VelocityLimitClearCommand
Velocity limit clear command"},{"location":"system/mrm_comfortable_stop_operator/#parameters","title":"Parameters","text":""},{"location":"system/mrm_comfortable_stop_operator/#node-parameters","title":"Node Parameters","text":"Name Type Default value Explanation update_rate int 10
Timer callback frequency [Hz]"},{"location":"system/mrm_comfortable_stop_operator/#core-parameters","title":"Core Parameters","text":"Name Type Default value Explanation min_acceleration double -1.0
Minimum acceleration for comfortable stop [m/s^2] max_jerk double 0.3
Maximum jerk for comfortable stop [m/s^3] min_jerk double -0.3
Minimum jerk for comfortable stop [m/s^3]"},{"location":"system/mrm_comfortable_stop_operator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
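For reference, the node and core parameters above might be collected in a parameter file like this minimal sketch (values follow the defaults in the tables):
/**:\nros__parameters:\nupdate_rate: 10\nmin_acceleration: -1.0\nmax_jerk: 0.3\nmin_jerk: -0.3\n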
"},{"location":"system/mrm_emergency_stop_operator/","title":"mrm_emergency_stop_operator","text":""},{"location":"system/mrm_emergency_stop_operator/#mrm_emergency_stop_operator","title":"mrm_emergency_stop_operator","text":""},{"location":"system/mrm_emergency_stop_operator/#purpose","title":"Purpose","text":"MRM emergency stop operator is a node that generates emergency stop commands according to the emergency stop MRM order.
"},{"location":"system/mrm_emergency_stop_operator/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"system/mrm_emergency_stop_operator/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/mrm_emergency_stop_operator/#input","title":"Input","text":"Name Type Description~/input/mrm/emergency_stop/operate
tier4_system_msgs::srv::OperateMrm
MRM execution order ~/input/control/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
Control command output from the last node of the control component. Used for the initial value of the emergency stop command."},{"location":"system/mrm_emergency_stop_operator/#output","title":"Output","text":"Name Type Description ~/output/mrm/emergency_stop/status
tier4_system_msgs::msg::MrmBehaviorStatus
MRM execution status ~/output/mrm/emergency_stop/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand
Emergency stop command"},{"location":"system/mrm_emergency_stop_operator/#parameters","title":"Parameters","text":""},{"location":"system/mrm_emergency_stop_operator/#node-parameters","title":"Node Parameters","text":"Name Type Default value Explanation update_rate int 30
Timer callback frequency [Hz]"},{"location":"system/mrm_emergency_stop_operator/#core-parameters","title":"Core Parameters","text":"Name Type Default value Explanation target_acceleration double -2.5
Target acceleration for emergency stop [m/s^2] target_jerk double -1.5
Target jerk for emergency stop [m/s^3]"},{"location":"system/mrm_emergency_stop_operator/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
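Similarly, a parameter file for this node might look like the following minimal sketch (values follow the defaults in the tables):
/**:\nros__parameters:\nupdate_rate: 30\ntarget_acceleration: -2.5\ntarget_jerk: -1.5\n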
"},{"location":"system/system_error_monitor/","title":"system_error_monitor","text":""},{"location":"system/system_error_monitor/#system_error_monitor","title":"system_error_monitor","text":""},{"location":"system/system_error_monitor/#purpose","title":"Purpose","text":"Autoware Error Monitor has two main functions.
/diagnostics_agg
diagnostic_msgs::msg::DiagnosticArray
Diagnostic information aggregated based on the diagnostic_aggregator setting; used to judge the system hazard status. /autoware/state
autoware_auto_system_msgs::msg::AutowareState
Required to ignore error during Route, Planning and Finalizing. /control/current_gate_mode
tier4_control_msgs::msg::GateMode
Required to select the appropriate module from autonomous_driving
or external_control
/vehicle/control_mode
autoware_auto_vehicle_msgs::msg::ControlModeReport
Required to not hold emergency during manual driving"},{"location":"system/system_error_monitor/#output","title":"Output","text":"Name Type Description /system/emergency/hazard_status
autoware_auto_system_msgs::msg::HazardStatusStamped
HazardStatus contains system hazard level, emergency hold status and failure details /diagnostics_err
diagnostic_msgs::msg::DiagnosticArray
This has the same contents as HazardStatus. This is used for visualization"},{"location":"system/system_error_monitor/#parameters","title":"Parameters","text":""},{"location":"system/system_error_monitor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Explanation ignore_missing_diagnostics
bool false
If this parameter is turned on, required modules that have not been received are ignored. add_leaf_diagnostics
bool true
Required to use children diagnostics. diag_timeout_sec
double 1.0
(sec) If required diagnostic is not received for a diag_timeout_sec
, the diagnostic state becomes STALE. data_ready_timeout
double 30.0
If input topics required for system_error_monitor are not available for data_ready_timeout
seconds, autoware_state will transition to the emergency state. data_heartbeat_timeout
double 1.0
If input topics required for system_error_monitor are no longer subscribed for data_heartbeat_timeout
seconds, autoware_state will transition to the emergency state."},{"location":"system/system_error_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Explanation hazard_recovery_timeout
double 5.0
The vehicle can recover to normal driving if emergencies disappear during hazard_recovery_timeout
. use_emergency_hold
bool false
If it is false, the vehicle will return to normal as soon as emergencies disappear. use_emergency_hold_in_manual_driving
bool false
If this parameter is turned off, emergencies will be ignored during manual driving. emergency_hazard_level
int 2
If hazard_level is higher than emergency_hazard_level, the autoware state will transition to the emergency state"},{"location":"system/system_error_monitor/#yaml-format-for-system_error_monitor","title":"YAML format for system_error_monitor","text":"The parameter key should be filled with the hierarchical diagnostics output by diagnostic_aggregator. Parameters prefixed with required_modules.autonomous_driving
are for autonomous driving. Parameters with the required_modules.remote_control
prefix are for remote control. If the value is default
, the default value will be set.
required_modules.autonomous_driving.DIAGNOSTIC_NAME.sf_at
string \"none\"
Diagnostic level where it becomes Safe Fault. Available options are \"none\"
, \"warn\"
, \"error\"
. required_modules.autonomous_driving.DIAGNOSTIC_NAME.lf_at
string \"warn\"
Diagnostic level where it becomes Latent Fault. Available options are \"none\"
, \"warn\"
, \"error\"
. required_modules.autonomous_driving.DIAGNOSTIC_NAME.spf_at
string \"error\"
Diagnostic level where it becomes Single Point Fault. Available options are \"none\"
, \"warn\"
, \"error\"
. required_modules.autonomous_driving.DIAGNOSTIC_NAME.auto_recovery
string \"true\"
Determines whether the system will automatically recover when it recovers from an error. required_modules.remote_control.DIAGNOSTIC_NAME.sf_at
string \"none\"
Diagnostic level where it becomes Safe Fault. Available options are \"none\"
, \"warn\"
, \"error\"
. required_modules.remote_control.DIAGNOSTIC_NAME.lf_at
string \"warn\"
Diagnostic level where it becomes Latent Fault. Available options are \"none\"
, \"warn\"
, \"error\"
. required_modules.remote_control.DIAGNOSTIC_NAME.spf_at
string \"error\"
Diagnostic level where it becomes Single Point Fault. Available options are \"none\"
, \"warn\"
, \"error\"
. required_modules.remote_control.DIAGNOSTIC_NAME.auto_recovery
string \"true\"
Determines whether the system will automatically recover when it recovers from an error."},{"location":"system/system_error_monitor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
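As a concrete illustration of the YAML format described above, an entry might look like this minimal sketch; DIAGNOSTIC_NAME stands for a hierarchical diagnostics key output by diagnostic_aggregator, and the values are illustrative:
required_modules:\nautonomous_driving:\nDIAGNOSTIC_NAME:\nsf_at: \"none\"\nlf_at: \"warn\"\nspf_at: \"error\"\nauto_recovery: \"true\"\n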
"},{"location":"system/system_monitor/","title":"System Monitor for Autoware","text":""},{"location":"system/system_monitor/#system-monitor-for-autoware","title":"System Monitor for Autoware","text":"Further improvement of system monitor functionality for Autoware.
"},{"location":"system/system_monitor/#description","title":"Description","text":"This package provides the following nodes for monitoring system:
Use colcon build and launch in the same way as other packages.
colcon build\nsource install/setup.bash\nros2 launch system_monitor system_monitor.launch.xml\n
The CPU and GPU monitoring methods differ depending on the platform. CMake automatically chooses the sources to be built according to the build environment. If you build this package on an Intel platform, the CPU monitor and GPU monitor that run on the Intel platform are built.
"},{"location":"system/system_monitor/#ros-topics-published-by-system-monitor","title":"ROS topics published by system monitor","text":"Every topic is published in 1 minute interval.
[Usage] ✓: Supported, -: Not supported
Node Message Intel arm64(tegra) arm64(raspi) Notes CPU Monitor CPU Temperature ✓ ✓ ✓ CPU Usage ✓ ✓ ✓ CPU Load Average ✓ ✓ ✓ CPU Thermal Throttling ✓ - ✓ CPU Frequency ✓ ✓ ✓ Notification of frequency only, normally error not generated. HDD Monitor HDD Temperature ✓ ✓ ✓ HDD PowerOnHours ✓ ✓ ✓ HDD TotalDataWritten ✓ ✓ ✓ HDD RecoveredError ✓ ✓ ✓ HDD Usage ✓ ✓ ✓ HDD ReadDataRate ✓ ✓ ✓ HDD WriteDataRate ✓ ✓ ✓ HDD ReadIOPS ✓ ✓ ✓ HDD WriteIOPS ✓ ✓ ✓ HDD Connection ✓ ✓ ✓ Memory Monitor Memory Usage ✓ ✓ ✓ Net Monitor Network Connection ✓ ✓ ✓ Network Usage ✓ ✓ ✓ Notification of usage only, normally error not generated. Network CRC Error ✓ ✓ ✓ Warning occurs when the number of CRC errors in the period reaches the threshold value. The number of CRC errors that occur is the same as the value that can be confirmed with the ip command. IP Packet Reassembles Failed ✓ ✓ ✓ NTP Monitor NTP Offset ✓ ✓ ✓ Process Monitor Tasks Summary ✓ ✓ ✓ High-load Proc[0-9] ✓ ✓ ✓ High-mem Proc[0-9] ✓ ✓ ✓ GPU Monitor GPU Temperature ✓ ✓ - GPU Usage ✓ ✓ - GPU Memory Usage ✓ - - GPU Thermal Throttling ✓ - - GPU Frequency ✓ ✓ - For Intel platform, monitor whether current GPU clock is supported by the GPU. Voltage Monitor CMOS Battery Status ✓ - - Battery Health for RTC and BIOS -"},{"location":"system/system_monitor/#ros-parameters","title":"ROS parameters","text":"See ROS parameters.
"},{"location":"system/system_monitor/#notes","title":"Notes","text":""},{"location":"system/system_monitor/#cpu-monitor-for-intel-platform","title":"CPU monitor for intel platform","text":"Thermal throttling event can be monitored by reading contents of MSR(Model Specific Register), and accessing MSR is only allowed for root by default, so this package provides the following approach to minimize security risks as much as possible:
Create a user to run 'msr_reader'.
sudo adduser <username>\n
Load kernel module 'msr' into your target system. The path '/dev/cpu/CPUNUM/msr' appears.
sudo modprobe msr\n
Allow user to access MSR with read-only privilege using the Access Control List (ACL).
sudo setfacl -m u:<username>:r /dev/cpu/*/msr\n
Assign capability to 'msr_reader' since msr kernel module requires rawio capability.
sudo setcap cap_sys_rawio=ep install/system_monitor/lib/system_monitor/msr_reader\n
Run 'msr_reader' as the user you created, and run system_monitor as a generic user.
su <username>\ninstall/system_monitor/lib/system_monitor/msr_reader\n
msr_reader
"},{"location":"system/system_monitor/#hdd-monitor","title":"HDD Monitor","text":"Generally, S.M.A.R.T. information is used to monitor HDD temperature and life of HDD, and normally accessing disk device node is allowed for root user or disk group. As with the CPU monitor, this package provides an approach to minimize security risks as much as possible:
Create a user to run 'hdd_reader'.
sudo adduser <username>\n
Add user to the disk group.
sudo usermod -a -G disk <username>\n
Assign capabilities to 'hdd_reader' since SCSI kernel module requires rawio capability to send ATA PASS-THROUGH (12) command and NVMe kernel module requires admin capability to send Admin Command.
sudo setcap 'cap_sys_rawio=ep cap_sys_admin=ep' install/system_monitor/lib/system_monitor/hdd_reader\n
Run 'hdd_reader' as the user you created, and run system_monitor as a generic user.
su <username>\ninstall/system_monitor/lib/system_monitor/hdd_reader\n
hdd_reader
"},{"location":"system/system_monitor/#gpu-monitor-for-intel-platform","title":"GPU Monitor for intel platform","text":"Currently GPU monitor for intel platform only supports NVIDIA GPU whose information can be accessed by NVML API.
Also you need to install CUDA libraries. For installation instructions for CUDA 10.0, see NVIDIA CUDA Installation Guide for Linux.
"},{"location":"system/system_monitor/#voltage-monitor-for-cmos-battery","title":"Voltage monitor for CMOS Battery","text":"Some platforms have built-in batteries for the RTC and CMOS. This node determines the battery status from the result of executing cat /proc/driver/rtc. Also, if lm-sensors is installed, it is possible to use the results. However, the return value of sensors varies depending on the chipset, so it is necessary to set a string to extract the corresponding voltage. It is also necessary to set the voltage for warning and error. For example, if you want a warning when the voltage is less than 2.9V and an error when it is less than 2.7V. The execution result of sensors on the chipset nct6106 is as follows, and \"in7:\" is the voltage of the CMOS battery.
$ sensors\npch_cannonlake-virtual-0\nAdapter: Virtual device\ntemp1: +42.0\u00b0C\n\nnct6106-isa-0a10\nAdapter: ISA adapter\nin0: 728.00 mV (min = +0.00 V, max = +1.74 V)\nin1: 1.01 V (min = +0.00 V, max = +2.04 V)\nin2: 3.34 V (min = +0.00 V, max = +4.08 V)\nin3: 3.34 V (min = +0.00 V, max = +4.08 V)\nin4: 1.07 V (min = +0.00 V, max = +2.04 V)\nin5: 1.05 V (min = +0.00 V, max = +2.04 V)\nin6: 1.67 V (min = +0.00 V, max = +2.04 V)\nin7: 3.06 V (min = +0.00 V, max = +4.08 V)\nin8: 2.10 V (min = +0.00 V, max = +4.08 V)\nfan1: 2789 RPM (min = 0 RPM)\nfan2: 0 RPM (min = 0 RPM)\n
The setting value of voltage_monitor.param.yaml is as follows.
/**:\nros__parameters:\ncmos_battery_warn: 2.90\ncmos_battery_error: 2.70\ncmos_battery_label: \"in7:\"\n
The above values of 2.7V and 2.90V are hypothetical. Depending on the motherboard and chipset, the value may vary. However, if the voltage of the lithium battery drops below 2.7V, it is recommended to replace it. In the above example, the message output to the topic /diagnostics is as follows. If the voltage < 2.9V then:
name: /autoware/system/resource_monitoring/voltage/cmos_battery\n message: Warning\n hardware_id: ''\n values:\n - key: 'voltage_monitor: CMOS Battery Status'\n value: Low Battery\n
If the voltage < 2.7V then:
name: /autoware/system/resource_monitoring/voltage/cmos_battery\n message: Warning\n hardware_id: ''\n values:\n - key: 'voltage_monitor: CMOS Battery Status'\n value: Battery Died\n
If neither, then:
name: /autoware/system/resource_monitoring/voltage/cmos_battery\n message: OK\n hardware_id: ''\n values:\n - key: 'voltage_monitor: CMOS Battery Status'\n value: OK\n
If the CMOS battery voltage drops below voltage_error or voltage_warn, it is reported as a warning. If the battery runs out, the RTC will stop working when the power is turned off. However, since the vehicle can still run, this is not treated as an error: the vehicle stops when an error occurs, but here there is no need to stop immediately. The actual condition can be determined from the value \"Low Battery\" or \"Battery Died\".
"},{"location":"system/system_monitor/#uml-diagrams","title":"UML diagrams","text":"See Class diagrams. See Sequence diagrams.
"},{"location":"system/system_monitor/docs/class_diagrams/","title":"Class diagrams","text":""},{"location":"system/system_monitor/docs/class_diagrams/#class-diagrams","title":"Class diagrams","text":""},{"location":"system/system_monitor/docs/class_diagrams/#cpu-monitor","title":"CPU Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#hdd-monitor","title":"HDD Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#memory-monitor","title":"Memory Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#net-monitor","title":"Net Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#ntp-monitor","title":"NTP Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#process-monitor","title":"Process Monitor","text":""},{"location":"system/system_monitor/docs/class_diagrams/#gpu-monitor","title":"GPU Monitor","text":""},{"location":"system/system_monitor/docs/hdd_reader/","title":"hdd_reader","text":""},{"location":"system/system_monitor/docs/hdd_reader/#hdd_reader","title":"hdd_reader","text":""},{"location":"system/system_monitor/docs/hdd_reader/#name","title":"Name","text":"hdd_reader - Read S.M.A.R.T. information for monitoring HDD temperature and life of HDD
"},{"location":"system/system_monitor/docs/hdd_reader/#synopsis","title":"Synopsis","text":"hdd_reader [OPTION]
"},{"location":"system/system_monitor/docs/hdd_reader/#description","title":"Description","text":"Read S.M.A.R.T. information for monitoring HDD temperature and life of HDD. This runs as a daemon process and listens to a TCP/IP port (7635 by default).
Options: -h, --help Display help -p, --port # Port number to listen to
Exit status: Returns 0 if OK; non-zero otherwise.
"},{"location":"system/system_monitor/docs/hdd_reader/#notes","title":"Notes","text":"The 'hdd_reader' accesses minimal data enough to get Model number, Serial number, HDD temperature, and life of HDD. This is an approach to limit its functionality, however, the functionality can be expanded for further improvements and considerations in the future.
"},{"location":"system/system_monitor/docs/hdd_reader/#ata","title":"[ATA]","text":"Purpose Name Length Model number, Serial number IDENTIFY DEVICE data 256 words(512 bytes) HDD temperature, life of HDD SMART READ DATA 256 words(512 bytes)For details please see the documents below.
For details please see the documents below.
msr_reader - Read MSR register for monitoring thermal throttling event
"},{"location":"system/system_monitor/docs/msr_reader/#synopsis","title":"Synopsis","text":"msr_reader [OPTION]
"},{"location":"system/system_monitor/docs/msr_reader/#description","title":"Description","text":"Read MSR register for monitoring thermal throttling event. This runs as a daemon process and listens to a TCP/IP port (7634 by default).
Options: -h, --help Display help -p, --port # Port number to listen to
Exit status: Returns 0 if OK; non-zero otherwise.
"},{"location":"system/system_monitor/docs/msr_reader/#notes","title":"Notes","text":"The 'msr_reader' accesses minimal data enough to get thermal throttling event. This is an approach to limit its functionality, however, the functionality can be expanded for further improvements and considerations in the future.
Register Address Name Length 1B1H IA32_PACKAGE_THERM_STATUS 64bitFor details please see the documents below.
cpu_monitor:
Name Type Unit Default Notes temp_warn float DegC 90.0 Generates warning when CPU temperature reaches a specified value or higher. temp_error float DegC 95.0 Generates error when CPU temperature reaches a specified value or higher. usage_warn float %(1e-2) 0.90 Generates warning when CPU usage reaches a specified value or higher and last for usage_warn_count counts. usage_error float %(1e-2) 1.00 Generates error when CPU usage reaches a specified value or higher and last for usage_error_count counts. usage_warn_count int n/a 2 Generates warning when CPU usage reaches usage_warn value or higher and last for a specified counts. usage_error_count int n/a 2 Generates error when CPU usage reaches usage_error value or higher and last for a specified counts. load1_warn float %(1e-2) 0.90 Generates warning when load average 1min reaches a specified value or higher. load5_warn float %(1e-2) 0.80 Generates warning when load average 5min reaches a specified value or higher. msr_reader_port int n/a 7634 Port number to connect to msr_reader."},{"location":"system/system_monitor/docs/ros_parameters/#hdd-monitor","title":"HDD Monitor","text":"hdd_monitor:
\u00a0\u00a0disks:
Name Type Unit Default Notes name string n/a none The disk name to monitor temperature. (e.g. /dev/sda) temp_attribute_id int n/a 0xC2 S.M.A.R.T attribute ID of temperature. temp_warn float DegC 55.0 Generates warning when HDD temperature reaches a specified value or higher. temp_error float DegC 70.0 Generates error when HDD temperature reaches a specified value or higher. power_on_hours_attribute_id int n/a 0x09 S.M.A.R.T attribute ID of power-on hours. power_on_hours_warn int Hour 3000000 Generates warning when HDD power-on hours reaches a specified value or higher. total_data_written_attribute_id int n/a 0xF1 S.M.A.R.T attribute ID of total data written. total_data_written_warn int depends on device 4915200 Generates warning when HDD total data written reaches a specified value or higher. total_data_written_safety_factor int %(1e-2) 0.05 Safety factor of HDD total data written. recovered_error_attribute_id int n/a 0xC3 S.M.A.R.T attribute ID of recovered error. recovered_error_warn int n/a 1 Generates warning when HDD recovered error reaches a specified value or higher. read_data_rate_warn float MB/s 360.0 Generates warning when HDD read data rate reaches a specified value or higher. write_data_rate_warn float MB/s 103.5 Generates warning when HDD write data rate reaches a specified value or higher. read_iops_warn float IOPS 63360.0 Generates warning when HDD read IOPS reaches a specified value or higher. write_iops_warn float IOPS 24120.0 Generates warning when HDD write IOPS reaches a specified value or higher.hdd_monitor:
Name Type Unit Default Notes hdd_reader_port int n/a 7635 Port number to connect to hdd_reader. usage_warn float %(1e-2) 0.95 Generates warning when disk usage reaches a specified value or higher. usage_error float %(1e-2) 0.99 Generates error when disk usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#memory-monitor","title":"Memory Monitor","text":"mem_monitor:
Name Type Unit Default Notes usage_warn float %(1e-2) 0.95 Generates warning when physical memory usage reaches a specified value or higher. usage_error float %(1e-2) 0.99 Generates error when physical memory usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#net-monitor","title":"Net Monitor","text":"net_monitor:
Name Type Unit Default Notes devices list[string] n/a none The name of network interface to monitor. (e.g. eth0, * for all network interfaces) monitor_program string n/a greengrass program name to be monitored by nethogs name. crc_error_check_duration int sec 1 CRC error check duration. crc_error_count_threshold int n/a 1 Generates warning when count of CRC errors during CRC error check duration reaches a specified value or higher. reassembles_failed_check_duration int sec 1 IP packet reassembles failed check duration. reassembles_failed_check_count int n/a 1 Generates warning when count of IP packet reassembles failed during IP packet reassembles failed check duration reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#ntp-monitor","title":"NTP Monitor","text":"ntp_monitor:
Name Type Unit Default Notes server string n/a ntp.ubuntu.com The name of NTP server to synchronize date and time. (e.g. ntp.nict.jp for Japan) offset_warn float sec 0.1 Generates warning when NTP offset reaches a specified value or higher. (default is 100ms) offset_error float sec 5.0 Generates error when NTP offset reaches a specified value or higher. (default is 5sec)"},{"location":"system/system_monitor/docs/ros_parameters/#process-monitor","title":"Process Monitor","text":"process_monitor:
Name Type Unit Default Notes num_of_procs int n/a 5 The number of processes to generate High-load Proc[0-9] and High-mem Proc[0-9]."},{"location":"system/system_monitor/docs/ros_parameters/#gpu-monitor","title":"GPU Monitor","text":"gpu_monitor:
Name Type Unit Default Notes temp_warn float DegC 90.0 Generates warning when GPU temperature reaches a specified value or higher. temp_error float DegC 95.0 Generates error when GPU temperature reaches a specified value or higher. gpu_usage_warn float %(1e-2) 0.90 Generates warning when GPU usage reaches a specified value or higher. gpu_usage_error float %(1e-2) 1.00 Generates error when GPU usage reaches a specified value or higher. memory_usage_warn float %(1e-2) 0.90 Generates warning when GPU memory usage reaches a specified value or higher. memory_usage_error float %(1e-2) 1.00 Generates error when GPU memory usage reaches a specified value or higher."},{"location":"system/system_monitor/docs/ros_parameters/#voltage-monitor","title":"Voltage Monitor","text":"voltage_monitor:
Name Type Unit Default Notes cmos_battery_warn float volt 2.9 Generates warning when the voltage of the CMOS battery is lower than this value. cmos_battery_error float volt 2.7 Generates error when the voltage of the CMOS battery is lower than this value. cmos_battery_label string n/a \"\" Voltage string in the sensors command outputs. If empty, no voltage will be checked."},{"location":"system/system_monitor/docs/seq_diagrams/","title":"Sequence diagrams","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#sequence-diagrams","title":"Sequence diagrams","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#cpu-monitor","title":"CPU Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#hdd-monitor","title":"HDD Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#memory-monitor","title":"Memory Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#net-monitor","title":"Net Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#ntp-monitor","title":"NTP Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#process-monitor","title":"Process Monitor","text":""},{"location":"system/system_monitor/docs/seq_diagrams/#gpu-monitor","title":"GPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/","title":"ROS topics: CPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/#ros-topics-cpu-monitor","title":"ROS topics: CPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-temperature","title":"CPU Temperature","text":"/diagnostics/cpu_monitor: CPU Temperature
[summary]
level message OK OK[values]
key (example) value (example) Package id 0, Core [0-9], thermal_zone[0-9] 50.0 DegC*key: thermal_zone[0-9] for ARM architecture.
"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-usage","title":"CPU Usage","text":"/diagnostics/cpu_monitor: CPU Usage
[summary]
level message OK OK WARN high load ERROR very high load[values]
key value (example) CPU [all,0-9]: status OK / high load / very high load CPU [all,0-9]: usr 2.00% CPU [all,0-9]: nice 0.00% CPU [all,0-9]: sys 1.00% CPU [all,0-9]: idle 97.00%"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-load-average","title":"CPU Load Average","text":"/diagnostics/cpu_monitor: CPU Load Average
[summary]
level message OK OK WARN high load[values]
key value (example) 1min 14.50% 5min 14.55% 15min 9.67%"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-thermal-throttling","title":"CPU Thermal Throttling","text":"Intel and raspi platform only. Tegra platform not supported.
/diagnostics/cpu_monitor: CPU Thermal Throttling
[summary]
level message OK OK ERROR throttling[values for intel platform]
key value (example) CPU [0-9]: Pkg Thermal Status OK / throttling[values for raspi platform]
key value (example) status All clear / Currently throttled / Soft temperature limit active"},{"location":"system/system_monitor/docs/topics_cpu_monitor/#cpu-frequency","title":"CPU Frequency","text":"/diagnostics/cpu_monitor: CPU Frequency
[summary]
level message OK OK[values]
key value (example) CPU [0-9]: clock 2879MHz"},{"location":"system/system_monitor/docs/topics_gpu_monitor/","title":"ROS topics: GPU Monitor","text":""},{"location":"system/system_monitor/docs/topics_gpu_monitor/#ros-topics-gpu-monitor","title":"ROS topics: GPU Monitor","text":"Intel and tegra platform only. Raspi platform not supported.
"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-temperature","title":"GPU Temperature","text":"/diagnostics/gpu_monitor: GPU Temperature
[summary]
level message OK OK WARN warm ERROR hot[values]
key (example) value (example) GeForce GTX 1650, thermal_zone[0-9] 46.0 DegC*key: thermal_zone[0-9] for ARM architecture.
"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-usage","title":"GPU Usage","text":"/diagnostics/gpu_monitor: GPU Usage
[summary]
level message OK OK WARN high load ERROR very high load[values]
key value (example) GPU [0-9]: status OK / high load / very high load GPU [0-9]: name GeForce GTX 1650, gpu.[0-9] GPU [0-9]: usage 19.0%*key: gpu.[0-9] for ARM architecture.
"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-memory-usage","title":"GPU Memory Usage","text":"Intel platform only. There is no separate gpu memory in tegra. Both cpu and gpu uses cpu memory.
/diagnostics/gpu_monitor: GPU Memory Usage
[summary]
level message OK OK WARN high load ERROR very high load[values]
key value (example) GPU [0-9]: status OK / high load / very high load GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: usage 13.0% GPU [0-9]: total 3G GPU [0-9]: used 1G GPU [0-9]: free 2G"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-thermal-throttling","title":"GPU Thermal Throttling","text":"Intel platform only. Tegra platform not supported.
/diagnostics/gpu_monitor: GPU Thermal Throttling
[summary]
level message OK OK ERROR throttling[values]
key value (example) GPU [0-9]: status OK / throttling GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: graphics clock 1020 MHz GPU [0-9]: reasons GpuIdle / SwThermalSlowdown etc."},{"location":"system/system_monitor/docs/topics_gpu_monitor/#gpu-frequency","title":"GPU Frequency","text":"/diagnostics/gpu_monitor: GPU Frequency
"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#intel-platform","title":"Intel platform","text":"[summary]
level message OK OK WARN unsupported clock[values]
key value (example) GPU [0-9]: status OK / unsupported clock GPU [0-9]: name GeForce GTX 1650 GPU [0-9]: graphics clock 1020 MHz"},{"location":"system/system_monitor/docs/topics_gpu_monitor/#tegra-platform","title":"Tegra platform","text":"[summary]
level message OK OK[values]
key (example) value (example) GPU 17000000.gv11b: clock 318 MHz"},{"location":"system/system_monitor/docs/topics_hdd_monitor/","title":"ROS topics: HDD Monitor","text":""},{"location":"system/system_monitor/docs/topics_hdd_monitor/#ros-topics-hdd-monitor","title":"ROS topics: HDD Monitor","text":""},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-temperature","title":"HDD Temperature","text":"/diagnostics/hdd_monitor: HDD Temperature
[summary]
level message OK OK WARN hot ERROR critical hot[values]
key value (example) HDD [0-9]: status OK / hot / critical hot HDD [0-9]: name /dev/nvme0 HDD [0-9]: model SAMSUNG MZVLB1T0HBLR-000L7 HDD [0-9]: serial S4EMNF0M820682 HDD [0-9]: temperature 37.0 DegC not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-poweronhours","title":"HDD PowerOnHours","text":"/diagnostics/hdd_monitor: HDD PowerOnHours
[summary]
level message OK OK WARN lifetime limit[values]
key value (example) HDD [0-9]: status OK / lifetime limit HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: power on hours 4834 Hours not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-totaldatawritten","title":"HDD TotalDataWritten","text":"/diagnostics/hdd_monitor: HDD TotalDataWritten
[summary]
level message OK OK WARN warranty period[values]
key value (example) HDD [0-9]: status OK / warranty period HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: total data written 146295330 not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-recoverederror","title":"HDD RecoveredError","text":"/diagnostics/hdd_monitor: HDD RecoveredError
[summary]
level message OK OK WARN high soft error rate[values]
key value (example) HDD [0-9]: status OK / high soft error rate HDD [0-9]: name /dev/nvme0 HDD [0-9]: model PHISON PS5012-E12S-512G HDD [0-9]: serial FB590709182505050767 HDD [0-9]: recovered error 0 not available"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-usage","title":"HDD Usage","text":"/diagnostics/hdd_monitor: HDD Usage
[summary]
level | message
OK | OK
WARN | low disk space
ERROR | very low disk space
[values]
key | value (example)
HDD [0-9]: status | OK / low disk space / very low disk space
HDD [0-9]: filesystem | /dev/nvme0n1p4
HDD [0-9]: size | 264G
HDD [0-9]: used | 172G
HDD [0-9]: avail | 749G
HDD [0-9]: use | 69%
HDD [0-9]: mounted on | /"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-readdatarate","title":"HDD ReadDataRate","text":"/diagnostics/hdd_monitor: HDD ReadDataRate
[summary]
level | message
OK | OK
WARN | high data rate of read
[values]
key | value (example)
HDD [0-9]: status | OK / high data rate of read
HDD [0-9]: name | /dev/nvme0
HDD [0-9]: data rate of read | 0.00 MB/s"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-writedatarate","title":"HDD WriteDataRate","text":"/diagnostics/hdd_monitor: HDD WriteDataRate
[summary]
level | message
OK | OK
WARN | high data rate of write
[values]
key | value (example)
HDD [0-9]: status | OK / high data rate of write
HDD [0-9]: name | /dev/nvme0
HDD [0-9]: data rate of write | 0.00 MB/s"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-readiops","title":"HDD ReadIOPS","text":"/diagnostics/hdd_monitor: HDD ReadIOPS
[summary]
level | message
OK | OK
WARN | high IOPS of read
[values]
key | value (example)
HDD [0-9]: status | OK / high IOPS of read
HDD [0-9]: name | /dev/nvme0
HDD [0-9]: IOPS of read | 0.00 IOPS"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-writeiops","title":"HDD WriteIOPS","text":"/diagnostics/hdd_monitor: HDD WriteIOPS
[summary]
level | message
OK | OK
WARN | high IOPS of write
[values]
key | value (example)
HDD [0-9]: status | OK / high IOPS of write
HDD [0-9]: name | /dev/nvme0
HDD [0-9]: IOPS of write | 0.00 IOPS"},{"location":"system/system_monitor/docs/topics_hdd_monitor/#hdd-connection","title":"HDD Connection","text":"/diagnostics/hdd_monitor: HDD Connection
[summary]
level | message
OK | OK
WARN | not connected
[values]
key | value (example)
HDD [0-9]: status | OK / not connected
HDD [0-9]: name | /dev/nvme0
HDD [0-9]: mount point | /"},{"location":"system/system_monitor/docs/topics_mem_monitor/","title":"ROS topics: Memory Monitor","text":""},{"location":"system/system_monitor/docs/topics_mem_monitor/#ros-topics-memory-monitor","title":"ROS topics: Memory Monitor","text":""},{"location":"system/system_monitor/docs/topics_mem_monitor/#memory-usage","title":"Memory Usage","text":"/diagnostics/mem_monitor: Memory Usage
[summary]
level | message
OK | OK
WARN | high load
ERROR | very high load
[values]
key | value (example)
Mem: usage | 29.72%
Mem: total | 31.2G
Mem: used | 6.0G
Mem: free | 20.7G
Mem: shared | 2.9G
Mem: buff/cache | 4.5G
Mem: available | 21.9G
Swap: total | 2.0G
Swap: used | 218M
Swap: free | 1.8G
Total: total | 33.2G
Total: used | 6.2G
Total: free | 22.5G
Total: used+ | 9.1G"},{"location":"system/system_monitor/docs/topics_net_monitor/","title":"ROS topics: Net Monitor","text":""},{"location":"system/system_monitor/docs/topics_net_monitor/#ros-topics-net-monitor","title":"ROS topics: Net Monitor","text":""},{"location":"system/system_monitor/docs/topics_net_monitor/#network-connection","title":"Network Connection","text":"/diagnostics/net_monitor: Network Connection
[summary]
level | message
OK | OK
WARN | no such device
[values]
key | value (example)
Network [0-9]: status | OK / no such device
Network [0-9]: name | wlp82s0"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-usage","title":"Network Usage","text":"/diagnostics/net_monitor: Network Usage
[summary]
level | message
OK | OK
[values]
key | value (example)
Network [0-9]: status | OK
Network [0-9]: interface name | wlp82s0
Network [0-9]: rx_usage | 0.00%
Network [0-9]: tx_usage | 0.00%
Network [0-9]: rx_traffic | 0.00 MB/s
Network [0-9]: tx_traffic | 0.00 MB/s
Network [0-9]: capacity | 400.0 MB/s
Network [0-9]: mtu | 1500
Network [0-9]: rx_bytes | 58455228
Network [0-9]: rx_errors | 0
Network [0-9]: tx_bytes | 11069136
Network [0-9]: tx_errors | 0
Network [0-9]: collisions | 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-traffic","title":"Network Traffic","text":"/diagnostics/net_monitor: Network Traffic
[summary]
level | message
OK | OK
[values when specified program is detected]
key | value (example)
nethogs [0-9]: program | /lambda/greengrassSystemComponents/1384/999
nethogs [0-9]: sent (KB/Sec) | 1.13574
nethogs [0-9]: received (KB/Sec) | 0.261914
[values when an error is occurring]
key | value (example)
error | execve failed: No such file or directory"},{"location":"system/system_monitor/docs/topics_net_monitor/#network-crc-error","title":"Network CRC Error","text":"/diagnostics/net_monitor: Network CRC Error
[summary]
level | message
OK | OK
WARN | CRC error
[values]
key | value (example)
Network [0-9]: interface name | wlp82s0
Network [0-9]: total rx_crc_errors | 0
Network [0-9]: rx_crc_errors per unit time | 0"},{"location":"system/system_monitor/docs/topics_net_monitor/#ip-packet-reassembles-failed","title":"IP Packet Reassembles Failed","text":"/diagnostics/net_monitor: IP Packet Reassembles Failed
[summary]
level | message
OK | OK
WARN | reassembles failed
[values]
key | value (example)
total packet reassembles failed | 0
packet reassembles failed per unit time | 0"},{"location":"system/system_monitor/docs/topics_ntp_monitor/","title":"ROS topics: NTP Monitor","text":""},{"location":"system/system_monitor/docs/topics_ntp_monitor/#ros-topics-ntp-monitor","title":"ROS topics: NTP Monitor","text":""},{"location":"system/system_monitor/docs/topics_ntp_monitor/#ntp-offset","title":"NTP Offset","text":"/diagnostics/ntp_monitor: NTP Offset
[summary]
level | message
OK | OK
WARN | high
ERROR | too high
[values]
key | value (example)
NTP Offset | -0.013181 sec
NTP Delay | 0.053880 sec"},{"location":"system/system_monitor/docs/topics_process_monitor/","title":"ROS topics: Process Monitor","text":""},{"location":"system/system_monitor/docs/topics_process_monitor/#ros-topics-process-monitor","title":"ROS topics: Process Monitor","text":""},{"location":"system/system_monitor/docs/topics_process_monitor/#tasks-summary","title":"Tasks Summary","text":"/diagnostics/process_monitor: Tasks Summary
[summary]
level | message
OK | OK
[values]
key | value (example)
total | 409
running | 2
sleeping | 321
stopped | 0
zombie | 0"},{"location":"system/system_monitor/docs/topics_process_monitor/#high-load-proc0-9","title":"High-load Proc[0-9]","text":"/diagnostics/process_monitor: High-load Proc[0-9]
[summary]
level | message
OK | OK
[values]
key | value (example)
COMMAND | /usr/lib/firefox/firefox
%CPU | 37.5
%MEM | 2.1
PID | 14062
USER | autoware
PR | 20
NI | 0
VIRT | 3461152
RES | 669052
SHR | 481208
S | S
TIME+ | 23:57.49"},{"location":"system/system_monitor/docs/topics_process_monitor/#high-mem-proc0-9","title":"High-mem Proc[0-9]","text":"/diagnostics/process_monitor: High-mem Proc[0-9]
[summary]
level | message
OK | OK
[values]
key | value (example)
COMMAND | /snap/multipass/1784/usr/bin/qemu-system-x86_64
%CPU | 0
%MEM | 2.5
PID | 1565
USER | root
PR | 20
NI | 0
VIRT | 3722320
RES | 812432
SHR | 20340
S | S
TIME+ | 0:22.84"},{"location":"system/system_monitor/docs/topics_voltage_monitor/","title":"ROS topics: Voltage Monitor","text":""},{"location":"system/system_monitor/docs/topics_voltage_monitor/#ros-topics-voltage-monitor","title":"ROS topics: Voltage Monitor","text":"\"CMOS Battery Status\" and \"CMOS battery voltage\" are exclusive. Only one or the other is generated. Which one is generated depends on the value of cmos_battery_label.
"},{"location":"system/system_monitor/docs/topics_voltage_monitor/#cmos-battery-status","title":"CMOS Battery Status","text":"/diagnostics/voltage_monitor: CMOS Battery Status
[summary]
level | message
OK | OK
WARN | Battery Dead
[values]
key | value (example)
CMOS battery status | OK / Battery Dead
*key: thermal_zone[0-9] for ARM architecture.
"},{"location":"system/system_monitor/docs/topics_voltage_monitor/#cmos-battery-voltage","title":"CMOS Battery Voltage","text":"/diagnostics/voltage_monitor: CMOS battery voltage
[summary]
level | message
OK | OK
WARN | Low Battery
WARN | Battery Died
[values]
key | value (example)
CMOS battery voltage | 3.06"},{"location":"system/system_monitor/docs/traffic_reader/","title":"traffic_reader","text":""},{"location":"system/system_monitor/docs/traffic_reader/#traffic_reader","title":"traffic_reader","text":""},{"location":"system/system_monitor/docs/traffic_reader/#name","title":"Name","text":"traffic_reader - monitoring network traffic by process
"},{"location":"system/system_monitor/docs/traffic_reader/#synopsis","title":"Synopsis","text":"traffic_reader [OPTION]
"},{"location":"system/system_monitor/docs/traffic_reader/#description","title":"Description","text":"Monitoring network traffic by process. This runs as a daemon process and listens to a TCP/IP port (7636 by default).
Options:
-h, --help Display help
-p, --port # Port number to listen to
Exit status: Returns 0 if OK; non-zero otherwise.
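As a usage sketch based on the synopsis above (running as root and the custom port number are assumptions for the example, since nethogs, which this daemon relies on, typically requires root privileges):
# Start the daemon on the default port (7636)
sudo traffic_reader
# Start the daemon on a hypothetical custom port
sudo traffic_reader -p 7700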
"},{"location":"system/system_monitor/docs/traffic_reader/#notes","title":"Notes","text":"The 'traffic_reader' requires nethogs command.
"},{"location":"system/system_monitor/docs/traffic_reader/#operation-confirmed-platform","title":"Operation confirmed platform","text":"This node monitors input topic for abnormalities such as timeout and low frequency. The result of topic status is published as diagnostics.
"},{"location":"system/topic_state_monitor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The types of topic status and corresponding diagnostic status are following.
Topic status | Diagnostic status | Description
OK | OK | The topic has no abnormalities
NotReceived | ERROR | The topic has not been received yet
WarnRate | WARN | The frequency of the topic is dropped
ErrorRate | ERROR | The frequency of the topic is significantly dropped
Timeout | ERROR | The topic subscription is stopped for a certain time"},{"location":"system/topic_state_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/topic_state_monitor/#input","title":"Input","text":"Name | Type | Description
any name | any type | Subscribed target topic to monitor"},{"location":"system/topic_state_monitor/#output","title":"Output","text":"Name | Type | Description
/diagnostics | diagnostic_msgs/DiagnosticArray | Diagnostics outputs"},{"location":"system/topic_state_monitor/#parameters","title":"Parameters","text":""},{"location":"system/topic_state_monitor/#node-parameters","title":"Node Parameters","text":"Name | Type | Default Value | Description
topic | string | - | Name of target topic
topic_type | string | - | Type of target topic (used if the topic is not a transform)
frame_id | string | - | Frame ID of transform parent (used if the topic is a transform)
child_frame_id | string | - | Frame ID of transform child (used if the topic is a transform)
transient_local | bool | false | QoS policy of topic subscription (Transient Local/Volatile)
best_effort | bool | false | QoS policy of topic subscription (Best Effort/Reliable)
diag_name | string | - | Name used for the diagnostics to publish
update_rate | double | 10.0 | Timer callback period [Hz]"},{"location":"system/topic_state_monitor/#core-parameters","title":"Core Parameters","text":"Name | Type | Default Value | Description
warn_rate | double | 0.5 | If the topic rate is lower than this value, the topic status becomes WarnRate
error_rate | double | 0.1 | If the topic rate is lower than this value, the topic status becomes ErrorRate
timeout | double | 1.0 | If the topic subscription is stopped for more than this time [s], the topic status becomes Timeout
window_size | int | 10 | Window size of target topic for calculating frequency"},{"location":"system/topic_state_monitor/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
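As a concrete illustration of the parameters above, a hypothetical invocation that monitors a 10 Hz sensor topic; the executable name, topic name, and rate values are assumptions for the example, not taken from this document:
# Executable name and values below are illustrative assumptions
ros2 run topic_state_monitor topic_state_monitor_node --ros-args \
  -p topic:=/sensing/lidar/points_raw \
  -p topic_type:=sensor_msgs/msg/PointCloud2 \
  -p diag_name:=lidar_topic_status \
  -p warn_rate:=5.0 -p error_rate:=1.0 -p timeout:=1.0 -p window_size:=10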
"},{"location":"system/velodyne_monitor/","title":"velodyne_monitor","text":""},{"location":"system/velodyne_monitor/#velodyne_monitor","title":"velodyne_monitor","text":""},{"location":"system/velodyne_monitor/#purpose","title":"Purpose","text":"This node monitors the status of Velodyne LiDARs. The result of the status is published as diagnostics. Take care not to use this diagnostics to decide the lidar error. Please read Assumptions / Known limits for the detail reason.
"},{"location":"system/velodyne_monitor/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"The status of Velodyne LiDAR can be retrieved from http://[ip_address]/cgi/{info, settings, status, diag}.json
.
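For a quick manual check of the same endpoints with a standard HTTP client (the IP address here assumes the default ip_address parameter listed below):
curl -s http://192.168.1.201/cgi/status.json
curl -s http://192.168.1.201/cgi/diag.json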
The types of abnormal status and the corresponding diagnostic status are as follows.
Abnormal status | Diagnostic status
No abnormality | OK
Top board temperature is too cold | ERROR
Top board temperature is cold | WARN
Top board temperature is too hot | ERROR
Top board temperature is hot | WARN
Bottom board temperature is too cold | ERROR
Bottom board temperature is cold | WARN
Bottom board temperature is too hot | ERROR
Bottom board temperature is hot | WARN
Rpm (rotations per minute) of the motor is too low | ERROR
Rpm (rotations per minute) of the motor is low | WARN
Connection error (cannot get Velodyne LiDAR status) | ERROR"},{"location":"system/velodyne_monitor/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"system/velodyne_monitor/#input","title":"Input","text":"None
"},{"location":"system/velodyne_monitor/#output","title":"Output","text":"Name Type Description/diagnostics
diagnostic_msgs/DiagnosticArray
Diagnostics outputs"},{"location":"system/velodyne_monitor/#parameters","title":"Parameters","text":""},{"location":"system/velodyne_monitor/#node-parameters","title":"Node Parameters","text":"Name Type Default Value Description timeout
double 0.5 Timeout for HTTP request to get Velodyne LiDAR status [s]"},{"location":"system/velodyne_monitor/#core-parameters","title":"Core Parameters","text":"Name Type Default Value Description ip_address
string \"192.168.1.201\" IP address of target Velodyne LiDAR temp_cold_warn
double -5.0 If the temperature of Velodyne LiDAR is lower than this value, the diagnostics status becomes WARN [\u00b0C] temp_cold_error
double -10.0 If the temperature of Velodyne LiDAR is lower than this value, the diagnostics status becomes ERROR [\u00b0C] temp_hot_warn
double 75.0 If the temperature of Velodyne LiDAR is higher than this value, the diagnostics status becomes WARN [\u00b0C] temp_hot_error
double 80.0 If the temperature of Velodyne LiDAR is higher than this value, the diagnostics status becomes ERROR [\u00b0C] rpm_ratio_warn
double 0.80 If the rpm rate of the motor (= current rpm / default rpm) is lower than this value, the diagnostics status becomes WARN rpm_ratio_error
double 0.70 If the rpm rate of the motor (= current rpm / default rpm) is lower than this value, the diagnostics status becomes ERROR"},{"location":"system/velodyne_monitor/#config-files","title":"Config files","text":"Config files for several velodyne models are prepared. The temp_***
parameters are set with reference to the operational temperature from each datasheet. Moreover, the temp_hot_***
values of each model are set 20 deg C higher than the operational temperature. Currently, VLP-16.param.yaml
is used as the default argument because it is the lowest-spec model.
This node uses the http_client and request results by GET method. It takes a few seconds to get results, or generate a timeout exception if it does not succeed the GET request. This occurs frequently and the diagnostics aggregator output STALE. Therefore I recommend to stop using this results to decide the lidar error, and only monitor it to confirm lidar status.
"},{"location":"tools/simulator_test/simulator_compatibility_test/","title":"simulator_compatibility_test","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#simulator_compatibility_test","title":"simulator_compatibility_test","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#purpose","title":"Purpose","text":"Test procedures (e.g. test codes) to check whether a certain simulator is compatible with Autoware
"},{"location":"tools/simulator_test/simulator_compatibility_test/#overview-of-the-test-codes","title":"Overview of the test codes","text":"File structure
source install/setup.bash
colcon build --packages-select simulator_compatibility_test
cd src/universe/autoware.universe/tools/simulator_test/simulator_compatibility_test/test_sim_common_manual_testing
To run each test case manually
"},{"location":"tools/simulator_test/simulator_compatibility_test/#test-case-1","title":"Test Case #1","text":"Run the test using the following command
python -m pytest test_01_control_mode_and_report.py
Check if expected behavior is created within the simulator
Run the test using the following command
python -m pytest test_02_change_gear_and_report.py
Check if expected behavior is created within the simulator
Run the test using the following command
python -m pytest test_03_longitudinal_command_and_report.py
Check if expected behavior is created within the simulator
Run the test using the following command
python -m pytest test_04_lateral_command_and_report.py
Check if expected behavior is created within the simulator
Run the test using the following command
python -m pytest test_05_turn_indicators_cmd_and_report.py
Check if expected behavior is created within the simulator
Run the test using the following command
python -m pytest test_06_hazard_lights_cmd_and_report.py
Check if expected behavior is created within the simulator
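All six cases can also be run in one go from the same directory; this relies on plain pytest test discovery rather than anything specific to this package:
python -m pytest .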
source install/setup.bash
colcon build --packages-select simulator_compatibility_test
cd src/universe/autoware.universe/tools/simulator_test/simulator_compatibility_test/test_morai_sim
Detailed process
(WIP)
"},{"location":"tools/simulator_test/simulator_compatibility_test/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#inputs-outputs","title":"Inputs / Outputs","text":""},{"location":"tools/simulator_test/simulator_compatibility_test/#input","title":"Input","text":"Name Type Description/vehicle/status/control_mode
autoware_auto_vehicle_msgs::msg::ControlModeReport
for [Test Case #1] /vehicle/status/gear_status
autoware_auto_vehicle_msgs::msg::GearReport
for [Test Case #2] /vehicle/status/velocity_status
autoware_auto_vehicle_msgs::msg::VelocityReport
for [Test Case #3] /vehicle/status/steering_status
autoware_auto_vehicle_msgs::msg::SteeringReport
for [Test Case #4] /vehicle/status/turn_indicators_status
autoware_auto_vehicle_msgs::msg::TurnIndicatorsReport
for [Test Case #5] /vehicle/status/hazard_lights_status
autoware_auto_vehicle_msgs::msg::HazardLightsReport
for [Test Case #6]"},{"location":"tools/simulator_test/simulator_compatibility_test/#output","title":"Output","text":"Name Type Description /control/command/control_mode_cmd
autoware_auto_vehicle_msgs/ControlModeCommand
for [Test Case #1] /control/command/gear_cmd
autoware_auto_vehicle_msgs/GearCommand
for [Test Case #2] /control/command/control_cmd
autoware_auto_vehicle_msgs/AckermannControlCommand
for [Test Case #3, #4] /vehicle/status/steering_status
autoware_auto_vehicle_msgs/TurnIndicatorsCommand
for [Test Case #5] /control/command/turn_indicators_cmd
autoware_auto_vehicle_msgs/HazardLightsCommand
for [Test Case #6]"},{"location":"tools/simulator_test/simulator_compatibility_test/#parameters","title":"Parameters","text":"None.
"},{"location":"tools/simulator_test/simulator_compatibility_test/#node-parameters","title":"Node Parameters","text":"None.
"},{"location":"tools/simulator_test/simulator_compatibility_test/#core-parameters","title":"Core Parameters","text":"None.
"},{"location":"tools/simulator_test/simulator_compatibility_test/#assumptions-known-limits","title":"Assumptions / Known limits","text":"None.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/","title":"accel_brake_map_calibrator","text":""},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#accel_brake_map_calibrator","title":"accel_brake_map_calibrator","text":"The role of this node is to automatically calibrate accel_map.csv
/ brake_map.csv
used in the raw_vehicle_cmd_converter
node.
The base map, which is the Lexus one by default, is updated iteratively with the loaded driving data.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#how-to-calibrate","title":"How to calibrate","text":""},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#launch-calibrator","title":"Launch Calibrator","text":"After launching Autoware, run the accel_brake_map_calibrator
by the following command and then perform autonomous driving. Note: You can collect data with manual driving if it is possible to use the same vehicle interface as during autonomous driving (e.g. using a joystick).
ros2 launch accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true
Or if you want to use rosbag files, run the following commands.
ros2 launch accel_brake_map_calibrator accel_brake_map_calibrator.launch.xml rviz:=true use_sim_time:=true
ros2 bag play <rosbag_file> --clock
During the calibration, with the parameter progress_file_output
set to true, the log file is output in [directory of accel_brake_map_calibrator]/config/ . You can also see the accel and brake maps in [directory of accel_brake_map_calibrator]/config/accel_map.csv and [directory of accel_brake_map_calibrator]/config/brake_map.csv after calibration.
The rviz:=true
option displays the RViz with a calibration plugin as below.
The current status (velocity and pedal) is shown in the plugin. The color of the current cell varies between green and red depending on whether the current data is valid or invalid. Data that do not satisfy the following conditions are considered invalid and will not be used for estimation, since aggressive data (e.g. when the pedal is moving fast) cause bad calibration accuracy.
The detailed parameters are described in the parameter section.
Note: You don't need to worry about whether the current state is red or green during calibration. Just keep getting data until all the cells turn red.
The value of each cell in the map is gray at first, and it changes from blue to red as the number of valid data in the cell accumulates. It is preferable to continue the calibration until each cell of the map becomes close to red. In particular, the performance near a stop depends strongly on the 0 ~ 6 m/s velocity range and the +0.2 ~ -0.4 pedal value range, so it is desirable to focus on those areas.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#diagnostics","title":"Diagnostics","text":"The accel brake map_calibrator
publishes a diagnostics message depending on the calibration status. The diagnostic type WARN
indicates that the current accel/brake map is estimated to be inaccurate. In this situation, it is strongly recommended to perform a re-calibration of the accel/brake map.
OK | "OK"
Calibration Required | WARN | "Accel/brake map Calibration is required."
The accuracy of the current accel/brake map may be low. This diagnostics status can also be checked on the following ROS topic.
ros2 topic echo /accel_brake_map_calibrator/output/update_suggest
When the diagnostics type is WARN, True is published on this topic and an update of the accel/brake map is suggested.
The accuracy of the map is evaluated by the Root Mean Squared Error (RMSE) between the observed acceleration and the predicted acceleration.
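For reference, this RMSE can be written out explicitly; the notation here is ours rather than taken from this document, with \(a^{\mathrm{obs}}_i\) and \(a^{\mathrm{pred}}_i\) the observed and predicted accelerations over \(N\) samples (both terms are defined below):
\[ \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(a^{\mathrm{obs}}_{i} - a^{\mathrm{pred}}_{i}\right)^{2}} \]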
TERMS:
Observed acceleration
: the current vehicle acceleration which is calculated as a derivative value of the wheel speed.Predicted acceleration
: the output of the original accel/brake map, which the Autoware is expecting. The value is calculated using the current pedal and velocity.You can check additional error information with the following topics.
/accel_brake_map_calibrator/output/current_map_error
: The error of the original map set in the csv_path_accel/brake_map
path. The original map is not accurate if this value is large./accel_brake_map_calibrator/output/updated_map_error
: The error of the map calibrated in this node. The calibration quality is low if this value is large./accel_brake_map_calibrator/output/map_error_ratio
: The error ratio between the original map and updated map (ratio = updated / current). If this value is less than 1, it is desirable to update the map.The process of calibration can be visualized as below. Since these scripts need the log output of the calibration, the pedal_accel_graph_output
parameter must be set to true while the calibration is running for the visualization.
The following command shows the plot of used data in the calibration. In each plot of velocity ranges, you can see the distribution of the relationship between pedal and acceleration, and raw data points with colors according to their pitch angles.
ros2 run accel_brake_map_calibrator view_plot.py\n
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#visualize-statistics-about-accelerationvelocitypedal-data","title":"Visualize statistics about acceleration/velocity/pedal data","text":"The following command shows the statistics of the calibration:
of all data in each map cell.
ros2 run accel_brake_map_calibrator view_statistics.py\n
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#how-to-save-the-calibrated-accel-brake-map-anytime-you-want","title":"How to save the calibrated accel / brake map anytime you want","text":"You can save accel and brake map anytime with the following command.
ros2 service call /accel_brake_map_calibrator/update_map_dir tier4_vehicle_msgs/srv/UpdateAccelBrakeMap \"path: '<accel/brake map directory>'\"\n
You can also save accel and brake map in the default directory where Autoware reads accel_map.csv/brake_map.csv using the RViz plugin (AccelBrakeMapCalibratorButtonPanel) as following.
Click Panels tab, and select AccelBrakeMapCalibratorButtonPanel.
Select the panel, and the button will appear at the bottom of RViz.
Press the button, and the accel / brake map will be saved. (The button cannot be pressed in certain situations, such as when the calibrator node is not running.)
These scripts are useful to test for accel brake map calibration. These generate an ActuationCmd
with a constant accel/brake value given interactively by a user through CLI.
The accel/brake_tester.py
receives a target accel/brake command from CLI. It sends a target value to actuation_cmd_publisher.py
which generates the ActuationCmd
. You can run these scripts by the following commands in the different terminals, and it will be as in the screenshot below.
ros2 run accel_brake_map_calibrator accel_tester.py\nros2 run accel_brake_map_calibrator brake_tester.py\nros2 run accel_brake_map_calibrator actuation_cmd_publisher.py\n
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#calibration-method","title":"Calibration Method","text":"Two algorithms are selectable for the acceleration map update, update_offset_four_cell_around and update_offset_each_cell. Please see the link for details.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#data-preprocessing","title":"Data Preprocessing","text":"Before calibration, missing or unusable data (e.g., too large handle angles) must first be eliminated. The following parameters are used to determine which data to remove.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#parameters_1","title":"Parameters","text":"Name Description Default Value velocity_min_threshold Exclude minimal velocity 0.1 max_steer_threshold Exclude large steering angle 0.2 max_pitch_threshold Exclude large pitch angle 0.02 max_jerk_threshold Exclude large jerk 0.7 pedal_velocity_thresh Exclude large pedaling speed 0.15"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#update_offset_each_cell","title":"update_offset_each_cell","text":"Update by Recursive Least Squares(RLS) method using data close enough to each grid.
Advantage : Only data close enough to each grid is used for calibration, allowing accurate updates at each point.
Disadvantage : Calibration is time-consuming due to a large amount of data to be excluded.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#parameters_2","title":"Parameters","text":"Data selection is determined by the following thresholds. | Name | Default Value | | ----------------------- | ------------- | | velocity_diff_threshold | 0.556 | | pedal_diff_threshold | 0.03 |
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#update-formula","title":"Update formula","text":"\\[ \\begin{align} \\theta[n]=& \\theta[n-1]+\\frac{p[n-1]x^{(n)}}{\\lambda+p[n-1]{(x^{(n)})}^2}(y^{(n)}-\\theta[n-1]x^{(n)})\\\\ p[n]=&\\frac{p[n-1]}{\\lambda+p[n-1]{(x^{(n)})}^2} \\end{align} \\]"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#variables","title":"Variables","text":"Variable name Symbol covariance \\(p[n-1]\\) map_offset \\(\\theta[n]\\) forgettingfactor \\(\\lambda\\) phi \\(x(=1)\\) measured_acc \\(y\\)"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#update_offset_four_cell_around-1","title":"update_offset_four_cell_around [1]","text":"Update the offsets by RLS in four grids around newly obtained data. By considering linear interpolation, the update takes into account appropriate weights. Therefore, there is no need to remove data by thresholding.
Advantage : No data is wasted because updates are performed on the 4 grids around the data with appropriate weighting. Disadvantage : Accuracy may be degraded due to extreme bias of the data. For example, if data \\(z(k)\\) is biased near \\(Z_{RR}\\) in Fig. 2, updating is performed at the four surrounding points ( \\(Z_{RR}\\), \\(Z_{RL}\\), \\(Z_{LR}\\), and \\(Z_{LL}\\)), but accuracy at \\(Z_{LL}\\) is not expected.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#implementation","title":"Implementation","text":"
See eq.(7)-(10) in [1] for the updated formula. In addition, eq.(17),(18) from [1] are used for Anti-Windup.
"},{"location":"vehicle/accel_brake_map_calibrator/accel_brake_map_calibrator/#references","title":"References","text":"[1] Gabrielle Lochrie, Michael Doljevic, Mario Nona, Yongsoon Yoon, Anti-Windup Recursive Least Squares Method for Adaptive Lookup Tables with Application to Automotive Powertrain Control Systems, IFAC-PapersOnLine, Volume 54, Issue 20, 2021, Pages 840-845
"},{"location":"vehicle/external_cmd_converter/","title":"external_cmd_converter","text":""},{"location":"vehicle/external_cmd_converter/#external_cmd_converter","title":"external_cmd_converter","text":"external_cmd_converter
is a node that converts desired mechanical input to acceleration and velocity by using accel/brake map.
~/in/external_control_cmd
tier4_external_api_msgs::msg::ControlCommand target throttle/brake/steering_angle/steering_angle_velocity
is necessary to calculate desired control command. ~/input/shift_cmd\"
autoware_auto_vehicle_msgs::GearCommand current command of gear. ~/input/emergency_stop
tier4_external_api_msgs::msg::Heartbeat emergency heart beat for external command. ~/input/current_gate_mode
tier4_control_msgs::msg::GateMode topic for gate mode. ~/input/odometry
navigation_msgs::Odometry twist topic in odometry is used."},{"location":"vehicle/external_cmd_converter/#output-topics","title":"Output topics","text":"Name Type Description ~/out/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand ackermann control command converted from selected external command"},{"location":"vehicle/external_cmd_converter/#parameters","title":"Parameters","text":"Parameter Type Description timer_rate
double timer's update rate wait_for_first_topic
double if time out check is done after receiving first topic control_command_timeout
double time out check for control command emergency_stop_timeout
double time out check for emergency stop command"},{"location":"vehicle/external_cmd_converter/#limitation","title":"Limitation","text":"tbd.
"},{"location":"vehicle/raw_vehicle_cmd_converter/","title":"raw_vehicle_cmd_converter","text":""},{"location":"vehicle/raw_vehicle_cmd_converter/#raw_vehicle_cmd_converter","title":"raw_vehicle_cmd_converter","text":""},{"location":"vehicle/raw_vehicle_cmd_converter/#overview","title":"Overview","text":"The raw_vehicle_command_converter is a crucial node in vehicle automation systems, responsible for translating desired steering and acceleration inputs into specific vehicle control commands. This process is achieved through a combination of a lookup table and an optional feedback control system.
"},{"location":"vehicle/raw_vehicle_cmd_converter/#lookup-table","title":"Lookup Table","text":"The core of the converter's functionality lies in its use of a CSV-formatted lookup table. This table encapsulates the relationship between the throttle/brake pedal (depending on your vehicle control interface) and the corresponding vehicle acceleration across various speeds. The converter utilizes this data to accurately translate target accelerations into appropriate throttle/brake values.
"},{"location":"vehicle/raw_vehicle_cmd_converter/#creation-of-reference-data","title":"Creation of Reference Data","text":"Reference data for the lookup table is generated through the following steps:
Once the acceleration map is crafted, it should be loaded when the RawVehicleCmdConverter node is launched, with the file path defined in the launch file.
"},{"location":"vehicle/raw_vehicle_cmd_converter/#auto-calibration-tool","title":"Auto-Calibration Tool","text":"For ease of calibration and adjustments to the lookup table, an auto-calibration tool is available. More information and instructions for this tool can be found here.
"},{"location":"vehicle/raw_vehicle_cmd_converter/#input-topics","title":"Input topics","text":"Name Type Description~/input/control_cmd
autoware_auto_control_msgs::msg::AckermannControlCommand target velocity/acceleration/steering_angle/steering_angle_velocity
is necessary to calculate actuation command. ~/input/steering\"
autoware_auto_vehicle_msgs::SteeringReport current status of steering used for steering feed back control ~/input/twist
navigation_msgs::Odometry twist topic in odometry is used."},{"location":"vehicle/raw_vehicle_cmd_converter/#output-topics","title":"Output topics","text":"Name Type Description ~/output/actuation_cmd
tier4_vehicle_msgs::msg::ActuationCommandStamped actuation command for vehicle to apply mechanical input"},{"location":"vehicle/raw_vehicle_cmd_converter/#parameters","title":"Parameters","text":"Name Type Description Default Range convert_accel_cmd boolean use accel or not true N/A convert_brake_cmd boolean use brake or not true N/A convert_steer_cmd boolean use steer or not true N/A use_steer_ff boolean steering steer controller using steer feed forward or not true N/A use_steer_fb boolean steering steer controller using steer feed back or not true N/A is_debugging boolean debugging mode or not false N/A max_throttle float maximum value of throttle 0.4 \u22650.0 max_brake float maximum value of brake 0.8 \u22650.0 max_steer float maximum value of steer 10.0 N/A min_steer float minimum value of steer -10.0 N/A steer_pid.kp float proportional coefficient value in PID control 150.0 N/A steer_pid.ki float integral coefficient value in PID control 15.0 >0.0 steer_pid.kd float derivative coefficient value in PID control 0.0 N/A steer_pid.max float maximum value of PID 8.0 N/A steer_pid.min float minimum value of PID -8.0. N/A steer_pid.max_p float maximum value of Proportional in PID 8.0 N/A steer_pid.min_p float minimum value of Proportional in PID -8.0 N/A steer_pid.max_i float maximum value of Integral in PID 8.0 N/A steer_pid.min_i float minimum value of Integral in PID -8.0 N/A steer_pid.max_d float maximum value of Derivative in PID 0.0 N/A steer_pid.min_d float minimum value of Derivative in PID 0.0 N/A steer_pid.invalid_integration_decay float invalid integration decay value in PID control 0.97 >0.0"},{"location":"vehicle/raw_vehicle_cmd_converter/#limitation","title":"Limitation","text":"The current feed back implementation is only applied to steering control.
"},{"location":"vehicle/steer_offset_estimator/Readme/","title":"steer_offset_estimator","text":""},{"location":"vehicle/steer_offset_estimator/Readme/#steer_offset_estimator","title":"steer_offset_estimator","text":""},{"location":"vehicle/steer_offset_estimator/Readme/#purpose","title":"Purpose","text":"The role of this node is to automatically calibrate steer_offset
used in the vehicle_interface
node.
The base steer offset value is 0 by default, which is standard, is updated iteratively with the loaded driving data. This module is supposed to be used in below straight driving situation.
"},{"location":"vehicle/steer_offset_estimator/Readme/#inner-workings-algorithms","title":"Inner-workings / Algorithms","text":"Estimates sequential steering offsets from kinematic model and state observations. Calculate yaw rate error and then calculate steering error recursively by least squared method, for more details see updateSteeringOffset()
function.
~/input/twist
geometry_msgs::msg::TwistStamped
vehicle twist ~/input/steer
autoware_auto_vehicle_msgs::msg::SteeringReport
steering"},{"location":"vehicle/steer_offset_estimator/Readme/#output","title":"Output","text":"Name Type Description ~/output/steering_offset
tier4_debug_msgs::msg::Float32Stamped
steering offset ~/output/steering_offset_covariance
tier4_debug_msgs::msg::Float32Stamped
covariance of steering offset"},{"location":"vehicle/steer_offset_estimator/Readme/#launch-calibrator","title":"Launch Calibrator","text":"After launching Autoware, run the steer_offset_estimator
by the following command and then perform autonomous driving. Note: You can collect data with manual driving if it is possible to use the same vehicle interface as during autonomous driving (e.g. using a joystick).
ros2 launch steer_offset_estimator steer_offset_estimator.launch.xml\n
Or if you want to use rosbag files, run the following commands.
ros2 param set /use_sim_time true\nros2 bag play <rosbag_file> --clock\n
"},{"location":"vehicle/steer_offset_estimator/Readme/#parameters","title":"Parameters","text":"Name Type Description Default Range initial_covariance float steer offset is larger than tolerance 1000 N/A steer_update_hz float update hz of steer data 10 \u22650.0 forgetting_factor float weight of using previous value 0.999 \u22650.0 valid_min_velocity float velocity below this value is not used 5 \u22650.0 valid_max_steer float steer above this value is not used 0.05 N/A warn_steer_offset_deg float Warn if offset is above this value. ex. if absolute estimated offset is larger than 2.5[deg] => warning 2.5 N/A"},{"location":"vehicle/steer_offset_estimator/Readme/#diagnostics","title":"Diagnostics","text":"The steer_offset_estimator
publishes diagnostics message depending on the calibration status. Diagnostic type WARN
indicates that the current steer_offset is estimated to be inaccurate. In this situation, it is strongly recommended to perform a re-calibration of the steer_offset.
OK
\"Preparation\" Calibration Required WARN
\"Steer offset is larger than tolerance\" This diagnostics status can be also checked on the following ROS topic.
ros2 topic echo /vehicle/status/steering_offset\n
"},{"location":"vehicle/vehicle_info_util/Readme/","title":"Vehicle Info Util","text":""},{"location":"vehicle/vehicle_info_util/Readme/#vehicle-info-util","title":"Vehicle Info Util","text":""},{"location":"vehicle/vehicle_info_util/Readme/#purpose","title":"Purpose","text":"This package is to get vehicle info parameters.
"},{"location":"vehicle/vehicle_info_util/Readme/#description","title":"Description","text":"In here, you can check the vehicle dimensions with more detail. The current format supports only the Ackermann model. This file defines the model assumed in autoware path planning, control, etc. and does not represent the exact physical model. If a model other than the Ackermann model is used, it is assumed that a vehicle interface will be designed to change the control output for the model.
"},{"location":"vehicle/vehicle_info_util/Readme/#versioning-policy","title":"Versioning Policy","text":"We have implemented a versioning system for the vehicle_info.param.yaml
file to ensure clarity and consistency in file format across different versions of Autoware and its external applications. Please see discussion for the details.
version:
field is commented out).0.1.0
. Follow the semantic versioning format (MAJOR.MINOR.PATCH)./**:\nros__parameters:\n# version: 0.1.0 # Uncomment and update this line for future format changes.\nwheel_radius: 0.383\n...\n
"},{"location":"vehicle/vehicle_info_util/Readme/#why-versioning","title":"Why Versioning?","text":"vehicle_info.param.yaml
need to reference the correct file version for optimal compatibility and functionality.vehicle_info.param.yaml
file simplifies management compared to maintaining separate versions for multiple customized Autoware branches. This approach streamlines version tracking and reduces complexity.$ ros2 run vehicle_info_util min_turning_radius_calculator.py\nyaml path is /home/autoware/pilot-auto/install/vehicle_info_util/share/vehicle_info_util/config/vehicle_info.param.yaml\nMinimum turning radius is 3.253042620027102 [m] for rear, 4.253220695862465 [m] for front.\n
You can designate yaml file with -y
option as follows.
ros2 run vehicle_info_util min_turning_radius_calculator.py -y <path-to-yaml>\n
"},{"location":"vehicle/vehicle_info_util/Readme/#assumptions-known-limits","title":"Assumptions / Known limits","text":"TBD.
"}]} \ No newline at end of file diff --git a/latest/sitemap.xml b/latest/sitemap.xml index f145efe796b01..2c011639a5dfe 100644 --- a/latest/sitemap.xml +++ b/latest/sitemap.xml @@ -2,1657 +2,1657 @@